WO2021175040A1 - Video processing method and related device - Google Patents

Video processing method and related device

Info

Publication number
WO2021175040A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
data
feature
videos
frame
Prior art date
Application number
PCT/CN2021/073333
Other languages
English (en)
Chinese (zh)
Inventor
尹康
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2021175040A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/75 Clustering; Classification

Definitions

  • This application relates to the technical field of data deduplication, in particular to a video processing method and related devices.
  • The current common video deduplication algorithms remove duplicates based on key-point matching, but the process of extracting image features from key points is too cumbersome, and the clustering algorithms used in feature matching, such as k-means, require parameters like the number of categories to be set manually in advance, so the accuracy of the final deduplication processing cannot be guaranteed.
  • This application proposes a video processing method and related devices, which can accurately cluster the repeated videos in a video data set through an efficient feature extraction algorithm and then deduplicate the clustered repeated videos, greatly improving the accuracy of video deduplication.
  • the first aspect of the embodiments of the present application provides a video processing method, including:
  • a second aspect of the embodiments of the present application provides a video processing device.
  • the device includes a processing unit and a communication unit, wherein:
  • The processing unit is configured to: extract N video feature data of the N videos included in a video data set, where N is a positive integer; obtain matching degree data for every two of the N video feature data; divide the N videos into M video clusters based on the matching degree data, where M is a positive integer less than or equal to N; and perform deduplication processing on the M video clusters one by one based on a preset deduplication rule to obtain a deduplicated video data set, where the deduplicated video data set includes M videos.
  • the third aspect of the embodiments of the present application provides an electronic device, including an application processor, a communication interface, and a memory.
  • the application processor, the communication interface, and the memory are connected to each other.
  • the memory is used to store a computer program.
  • the computer program includes program instructions, and the application processor is configured to invoke the program instructions to execute all or part of the steps of the method described in the first aspect of the embodiments of the present application.
  • The fourth aspect of the embodiments of the present application provides a computer storage medium, where the computer storage medium stores a computer program, the computer program includes program instructions, and the program instructions, when executed by a processor, cause the processor to execute all or part of the steps of the method described in the first aspect of the embodiments of the present application.
  • The fifth aspect of the embodiments of the present application provides a computer program product, where the above computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the above computer program is operable to cause a computer to execute part or all of the steps described in any method of the first aspect of the embodiments of the present application.
  • the computer program product may be a software installation package.
  • In the embodiments of the present application, N video feature data of the N videos included in a video data set are first extracted, where N is a positive integer; matching degree data is then obtained for every two of the N video feature data; next, the N videos are divided into M video clusters based on the matching degree data, where M is a positive integer less than or equal to N; finally, deduplication processing is performed on the M video clusters one by one based on a preset deduplication rule to obtain a deduplicated video data set, where the deduplicated video data set includes M videos.
  • the repeated videos in the video data set can be accurately clustered through an efficient feature extraction algorithm, and then the clustered repeated videos can be deduplicated, which greatly improves the accuracy of the video deduplication.
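  • To make the overall flow concrete, the following is a minimal high-level sketch of the four steps, assuming the helper functions sketched in the detailed embodiment below (video_feature_sequence, divide_into_clusters, deduplicate); the function names, the threshold parameter, and the duration callback are illustrative assumptions, not from the publication.

```python
# High-level sketch only: the three helpers are defined in the detailed
# steps below; names, threshold, and the duration callback are assumed.
def process_video_data_set(paths, threshold, duration):
    # Step 201: extract N video feature data (here, feature sequences).
    sequences = [video_feature_sequence(p) for p in paths]
    # Steps 202-203: pairwise matching and division into M video clusters.
    clusters = divide_into_clusters(sequences, threshold)
    # Step 204: retain one video per cluster under a preset rule.
    return deduplicate(clusters, paths, duration)
```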
  • FIG. 1 is a system architecture diagram of a video processing method provided by an embodiment of this application.
  • FIG. 2 is a schematic flowchart of a video processing method provided by an embodiment of this application.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 4 is a block diagram of functional units of a video processing device provided by an embodiment of the application.
  • the electronic devices involved in the embodiments of the application may be electronic devices with communication capabilities.
  • The electronic devices may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to wireless modems, as well as various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal devices, and so on.
  • Figure 1 is a system architecture diagram of a video processing method provided by an embodiment of the application, including a video acquisition module 110, a matching module 120, a classification module 130, and a deduplication processing module 140.
  • The video acquisition module 110, the matching module 120, the classification module 130, and the deduplication processing module 140 are connected to each other. The video acquisition module 110 may acquire a video data set composed of videos to be processed and send it to the matching module 120; the matching module 120 may match the videos to be processed in the received video data set and send the matching result to the classification module 130; the classification module 130 may classify the videos to be processed according to the matching result to obtain multiple video clusters, where each video cluster consists of one video or multiple repeated videos; finally, the deduplication processing module 140 performs deduplication processing on each video cluster to obtain a deduplicated video data set, completing the video deduplication.
  • The training data of a neural network model may contain a large amount of duplicate data. Training the model with all of the training data is inefficient and can reduce the accuracy of the model, so it is very important to deduplicate large amounts of training data and automatically select the training data with the better training effect.
  • the system architecture in the embodiment of the present application can be applied to a scenario of screening training data of a neural network model related to video processing.
  • the repeated videos in the video data set can be accurately clustered through an efficient feature extraction algorithm, and then the clustered repeated videos can be deduplicated, which greatly improves the accuracy of video deduplication.
  • FIG. 2 is a schematic flowchart of a video processing method provided by an embodiment of the present application, which specifically includes the following steps:
  • Step 201 Extract N video feature data of N videos included in the video data set.
  • The above video data set is a set of N videos to be processed, where N is any positive integer, and each video can be processed to extract its corresponding video feature data.
  • The video feature data extraction steps are explained below, taking any one video in the set as an example.
  • the feature vector extraction steps of one frame of image data are described in detail.
  • the above single frame of image data is a color image, including three color channels of red (Red), green (Green), and blue (Blue).
  • First, the single-frame image is converted into a single-channel grayscale image, and the size of the converted grayscale image is normalized to 32×32 pixels through a bilinear interpolation algorithm to improve extraction efficiency; then a discrete cosine transform (Discrete Cosine Transform, DCT) is performed to obtain a 32×32 coefficient matrix, and the 64 coefficients in the 8×8 area at the upper-left corner of the coefficient matrix are quantized to obtain a binary image.
  • The 64-dimensional vector obtained by flattening this binary image is used as the feature vector of the frame of image data.
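  • As an illustration, a minimal sketch of this per-frame feature extraction with OpenCV follows; binarizing the 8×8 DCT block against its median is an assumption, since the publication only states that the 64 coefficients are quantized into a binary image.

```python
import cv2
import numpy as np

def frame_feature(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a 64-dimensional binary feature vector for one video frame."""
    # Convert the three-channel color frame to a single-channel grayscale image.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Normalize the size to 32x32 pixels with bilinear interpolation.
    small = cv2.resize(gray, (32, 32), interpolation=cv2.INTER_LINEAR)
    # Discrete cosine transform yields a 32x32 coefficient matrix.
    coeffs = cv2.dct(small.astype(np.float32))
    # Keep the 64 low-frequency coefficients in the upper-left 8x8 block.
    block = coeffs[:8, :8]
    # Binarize (assumed: median threshold) and flatten to a 64-dim vector.
    return (block > np.median(block)).astype(np.uint8).flatten()
```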
  • each frame of image data is processed to obtain the feature vector corresponding to each frame of image.
  • the video feature data corresponding to the video can be obtained based on the aforementioned feature vector.
  • the above-mentioned video feature data may be a feature sequence
  • The above feature sequence can be understood as the collection of the feature vectors corresponding to each frame of image data of a video, where the feature vectors of the frames are cascaded in order.
  • the above different videos correspond to different feature sequences.
  • The above feature vectors can be down-sampled based on different application scenarios, that is, a feature vector is extracted and cascaded every 2 frames, 4 frames, and so on, to obtain the feature sequence. Since different videos have different numbers of frames, the lengths of their corresponding feature sequences may also differ.
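  • A sketch of building one video's feature sequence follows, reusing the frame_feature() helper above; the sampling stride (every frame, every 2nd frame, every 4th frame, and so on) is an application-dependent choice.

```python
import cv2
import numpy as np

def video_feature_sequence(path: str, stride: int = 1) -> np.ndarray:
    """Return the ordered sequence of 64-dim frame features for one video."""
    cap = cv2.VideoCapture(path)
    features = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Down-sample: keep one frame every `stride` frames, in timestamp order.
        if index % stride == 0:
            features.append(frame_feature(frame))
        index += 1
    cap.release()
    # Videos with different frame counts yield sequences of different lengths.
    return np.array(features)
```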
  • the above-mentioned video feature data may be a video feature vector
  • the above-mentioned video feature vector is a multi-dimensional vector composed of image feature vectors of each frame of image.
  • For each frame, the size of the converted grayscale image is normalized to 32×32 pixels through the bilinear interpolation algorithm, and the discrete cosine transform (DCT) is then performed to obtain a 32×32 coefficient matrix; the 64 coefficients in the 8×8 area at the upper-left corner of each coefficient matrix are selected and quantized to obtain a binary image.
  • The 64-dimensional vectors obtained by flattening the binary images of all frames are superimposed, and the superimposed 64-dimensional vector is then quantized again to generate the video feature vector; the video feature vector can reflect the content information of the corresponding video.
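  • A sketch of this second type of video feature data follows; interpreting "superimposing" as an element-wise sum and the second quantization as a majority vote across frames is an assumption about the publication's wording.

```python
import numpy as np

def video_feature_vector(frame_features: np.ndarray) -> np.ndarray:
    """Collapse a (num_frames, 64) feature sequence into one 64-dim vector."""
    # Superimpose: element-wise sum of the per-frame binary vectors.
    summed = frame_features.sum(axis=0)
    # Quantize again: set a bit when it is set in most frames (assumed rule).
    return (summed > frame_features.shape[0] / 2).astype(np.uint8)
```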
  • By extracting the N video feature data of the N videos included in the video data set in either of these two ways, two types of video feature data can be obtained, which can cope with multiple video processing scenarios and greatly improves the flexibility of subsequent video processing.
  • Step 202 Obtain the matching degree data of every two pieces of video feature data among the N pieces of video feature data.
  • The above matching degree data represents the similarity between every two of the N video feature data, and may be regarded as equivalent to the similarity between every two of the N videos.
  • When the video feature data are feature sequences, a matching function calculates the length of the longest common subsequence between every two of the N feature sequences. For the N videos v1, ..., vN, the lengths m and n of any two feature sequences may be the same or different.
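  • A sketch of the matching function follows: the classic dynamic-programming computation of the longest-common-subsequence length between two feature sequences. Treating two 64-dim frame features as equal only when they match exactly is an assumption; a Hamming-distance threshold would be a natural relaxation.

```python
import numpy as np

def lcs_length(seq_a: np.ndarray, seq_b: np.ndarray) -> int:
    """Length of the longest common subsequence of two (len, 64) sequences."""
    m, n = len(seq_a), len(seq_b)
    dp = np.zeros((m + 1, n + 1), dtype=np.int32)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if np.array_equal(seq_a[i - 1], seq_b[j - 1]):
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return int(dp[m, n])
```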
  • When the video feature data are video feature vectors, the Manhattan distance (Manhattan Distance) between every two video feature vectors is calculated and used as the matching degree data. The specific Manhattan distance calculation can use an existing algorithm and will not be repeated here.
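  • For completeness, a minimal sketch of the Manhattan (L1) distance between two video feature vectors; a smaller distance indicates more similar videos.

```python
import numpy as np

def manhattan_distance(vec_a: np.ndarray, vec_b: np.ndarray) -> int:
    """Sum of absolute element-wise differences of two 64-dim vectors."""
    return int(np.abs(vec_a.astype(np.int32) - vec_b.astype(np.int32)).sum())
```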
  • Step 203 Divide the N videos into M video clusters based on the matching degree data.
  • Each video cluster can include at least one video: multiple videos with repeated content are classified into the same video cluster, and a video that has no repeated content forms a video cluster by itself.
  • When the matching degree data is the length of the longest common subsequence, a preset length threshold can be set: if the length of the longest common subsequence of any two videos is greater than the preset length threshold, the two videos corresponding to that longest common subsequence form a repeated video set. Each of the N videos needs to be matched pairwise with the other videos to obtain the length of the corresponding longest common subsequence.
  • The output video cluster set C includes M video clusters, and an N-dimensional flag vector is used to record whether each video has already been added to some video cluster; once a video has been added to a cluster, it is not necessary to judge again whether the length of the longest common subsequence between it and other videos is greater than the preset length threshold.
  • First, determine whether the length of the longest common subsequence between the first video and the second video is greater than the preset length threshold.
  • If the length of the longest common subsequence is greater than the preset length threshold, the first video and the second video form a repeated video set and both need to be divided into the first video cluster; otherwise, the first video and the second video are different videos and do not belong to the same video cluster. Then determine in turn whether the lengths of the longest common subsequences between the first video and the third video, the fourth video, and so on up to the Nth video are greater than the preset length threshold.
  • If, for example, the length of the longest common subsequence between the first video and the third video is greater than the preset length threshold, the first video and the third video are also a repeated video set, so the first video, the second video, and the third video are all repeated videos and the third video also needs to be divided into the first video cluster; if the length is less than or equal to the preset length threshold, the third video is not divided into the first video cluster.
  • In this way, the videos among the N videos that form repeated video sets with the first video are screened out and divided into the first video cluster. After the first video cluster is determined, the second video cluster can be determined in the same way, by judging whether the lengths of the longest common subsequences between the second video and the third to Nth videos are greater than the preset length threshold and dividing the videos that form repeated video sets with the second video into the second video cluster, and so on, until the N videos are divided into M video clusters.
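  • The walkthrough above amounts to a greedy single pass; a sketch follows, reusing the lcs_length() matching function above, with the N-dimensional flag vector marking videos already assigned to a cluster so they are never compared again.

```python
import numpy as np

def divide_into_clusters(sequences: list, threshold: int) -> list:
    """Partition N feature sequences into M clusters of repeated videos."""
    n = len(sequences)
    assigned = np.zeros(n, dtype=bool)  # the N-dimensional flag vector
    clusters = []                       # the output video cluster set C
    for i in range(n):
        if assigned[i]:
            continue
        cluster = [i]
        assigned[i] = True
        for j in range(i + 1, n):
            if assigned[j]:
                continue
            # Two videos whose LCS length exceeds the threshold are repeats.
            if lcs_length(sequences[i], sequences[j]) > threshold:
                cluster.append(j)
                assigned[j] = True
        clusters.append(cluster)
    return clusters  # len(clusters) == M, with M <= N
```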
  • When the matching degree data is the Manhattan distance between video feature vectors, a hierarchical density-based clustering algorithm (Hierarchical Density-Based Spatial Clustering of Applications with Noise, HDBSCAN) can be used to divide the video clusters. This can improve the clustering speed, although the method using the matching function is more accurate; the method of dividing video clusters can be switched flexibly based on different application requirements.
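  • A sketch of the faster alternative follows, clustering the 64-dim video feature vectors under the Manhattan metric; the use of the third-party hdbscan package and min_cluster_size=2 are assumptions, since the publication only names the algorithm.

```python
import numpy as np
import hdbscan  # third-party package; scikit-learn >= 1.3 also ships HDBSCAN

def divide_with_hdbscan(feature_vectors: np.ndarray) -> np.ndarray:
    """Cluster (N, 64) video feature vectors; label -1 marks a singleton."""
    clusterer = hdbscan.HDBSCAN(metric="manhattan", min_cluster_size=2)
    labels = clusterer.fit_predict(feature_vectors.astype(np.float64))
    # Videos labeled -1 (noise) have no duplicates and each form a
    # single-video cluster of their own.
    return labels
```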
  • Step 204 Perform deduplication processing on the M video clusters one by one based on a preset deduplication rule to obtain a deduplicated video data set.
  • The video data set after deduplication includes M videos, that is, only one video is retained in each video cluster.
  • Optionally, the preset deduplication rule may include at least one kind of deduplication index data, and the deduplication index data may include any one or any combination of video-related data such as a video duration index, a video editing count index, a video image quality index, a video format index, and a video quality index; different deduplication index data can be selected based on different application scenarios. The video duration index may be a limit such as the longest or the shortest video duration; the video editing count index may be a limit such as the fewest or the most editing operations; the video image quality index may be a limit such as the clearest or the blurriest picture quality; the video format index may be, for example, the MP4 format or the AVI format; and the video quality index may be the highest or the lowest video quality.
  • For example, the preset deduplication rule may be to retain the video with the longest duration in each video cluster and delete the other videos to obtain the deduplicated video data set.
  • When the deduplication index data is any one or any combination of video-related data such as the video editing count index, the video image quality index, the video format index, and the video quality index, each video cluster is likewise deduplicated based on the corresponding preset deduplication rule to obtain the corresponding deduplicated video data set, which will not be repeated here.
  • The video data set can have a mapping relationship with the deduplication index data in the preset deduplication rules. The deduplication index data can be changed manually, or the deduplication index data that best fits the video data set can be selected automatically according to the video data set; there is no specific limitation here.
  • It can be seen that deduplicating the M video clusters one by one to obtain the deduplicated video data set can flexibly adapt to different application scenarios and apply the most appropriate deduplication processing to the video data set, which greatly improves the accuracy and versatility of video deduplication.
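  • A sketch of the per-cluster deduplication step for one example rule (retain the longest-duration video) follows; the duration callable is a placeholder for whatever metadata lookup an implementation uses.

```python
def deduplicate(clusters: list, paths: list, duration) -> list:
    """Return M video paths, one retained per video cluster."""
    kept = []
    for cluster in clusters:
        # Preset rule (example): retain the video with the longest duration.
        best = max(cluster, key=lambda idx: duration(paths[idx]))
        kept.append(paths[best])
    return kept  # the deduplicated video data set (M videos)
```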
  • FIG. 3 is a schematic structural diagram of an electronic device 300 provided in an embodiment of the application, including an application processor 301, a communication interface 302, and a memory 303.
  • The application processor 301, the communication interface 302, and the memory 303 are connected to each other through a bus 304, which may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus 304 can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only a thick line is used in FIG. 3 to represent it, but it does not mean that there is only one bus or one type of bus.
  • the memory 303 is used to store a computer program
  • the computer program includes program instructions
  • the application processor 301 is configured to call the program instructions to perform the following steps:
  • the video feature data includes a feature sequence; in terms of extracting the N video feature data of the N videos in the video data set, the instructions in the program are specifically used to perform the following operations:
  • the feature vector of each frame of image data is concatenated to obtain a feature sequence corresponding to each video, and the feature sequence is used to represent the content feature of the video.
  • the instructions in the program are specifically used to perform the following operations:
  • the length of each longest common subsequence is determined as the matching degree data of every two pieces of video feature data among the N pieces of video feature data.
  • the instructions in the program are specifically used to perform the following operations:
  • the N videos included in all repeated video sets are divided into the M video clusters.
  • the instructions in the program are specifically used to perform the following operations:
  • the M videos retained in the M video clusters are used as the deduplicated video data set.
  • the instructions in the program are specifically used to perform the following operations:
  • the 64 coefficients in the 8 ⁇ 8 area at the upper left position of each coefficient matrix are selected for quantization, and the 64-dimensional vector of each frame of image data is obtained.
  • the instructions in the program are specifically used to perform the following operations:
  • the 64-dimensional vectors are arranged in sequence according to the sequence of the time stamps to generate the feature sequence corresponding to each video.
  • an electronic device includes hardware structures and/or software modules corresponding to each function.
  • This application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and the design constraints of the technical solution. Professionals may use different methods for each specific application to implement the described functions, but such implementations should not be considered beyond the scope of this application.
  • the embodiment of the present application may divide the electronic device into functional units according to the foregoing method examples.
  • each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit. It should be noted that the division of units in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 4 is a block diagram of functional units of a video processing device 400 provided by an embodiment of the present application.
  • the video processing device 400 is applied to an electronic device, and includes a processing unit 401, a communication unit 402, and a storage unit 403.
  • The processing unit 401 is configured to perform any step in the above method embodiments, and when performing data transmission such as sending, may optionally invoke the communication unit 402 to complete the corresponding operation.
  • the detailed description will be given below.
  • the processing unit 401 is configured to extract N video feature data of N videos included in a video data set, where N is a positive integer;
  • the video feature data includes a feature sequence; in terms of extracting N video feature data of N videos in the video data set, the processing unit 401 is specifically configured to:
  • the feature vector of each frame of image data is concatenated to obtain a feature sequence corresponding to each video, and the feature sequence is used to represent the content feature of the video.
  • the processing unit 401 is specifically configured to:
  • the length of each longest common subsequence is determined as the matching degree data of every two pieces of video feature data among the N pieces of video feature data.
  • the processing unit 401 is specifically configured to:
  • the N videos included in all repeated video sets are divided into the M video clusters.
  • the processing unit 401 is specifically configured to perform deduplication processing on the M video clusters one by one based on a preset deduplication rule to obtain a deduplicated video data set.
  • the M videos retained in the M video clusters are used as the deduplicated video data set.
  • the processing unit 401 is specifically configured to:
  • the 64 coefficients in the 8 ⁇ 8 area at the upper left position of each coefficient matrix are selected for quantization, and the 64-dimensional vector of each frame of image data is obtained.
  • the processing unit 401 is specifically configured to:
  • the 64-dimensional vectors are arranged in sequence according to the sequence of the time stamps to generate the feature sequence corresponding to each video.
  • An embodiment of the present application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any method as recorded in the above method embodiments.
  • the above-mentioned computer includes electronic equipment.
  • the embodiments of the present application also provide a computer program product.
  • The above computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute part or all of the steps of any method as recorded in the above method embodiments.
  • the computer program product may be a software installation package, and the above-mentioned computer includes electronic equipment.
  • the disclosed device may be implemented in other ways.
  • The device embodiments described above are only illustrative; for example, the division of the above units is only a logical function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable memory.
  • The technical solution of the present application, in essence, the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory.
  • a number of instructions are included to enable a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the foregoing methods of the various embodiments of the present application.
  • The aforementioned memory includes various media that can store program codes, such as a USB flash drive, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, or an optical disc.
  • The program can be stored in a computer-readable memory, and the memory can include a flash disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, and so on.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Disclosed are a video processing method and a related device, the method comprising: extracting N video feature data of N videos included in a video data set, N being a positive integer; obtaining matching degree data for each pair of video feature data among the N video feature data; dividing the N videos into M video clusters based on the matching degree data, M being a positive integer less than or equal to N; and performing deduplication processing on the M video clusters, one by one, based on a preset deduplication rule to obtain a deduplicated video data set, the deduplicated video data set comprising M videos. Repeated videos in the video data set can be accurately clustered by means of an efficient feature extraction algorithm, and the clustered repeated videos are then deduplicated, which considerably improves the accuracy of video deduplication.
PCT/CN2021/073333 2020-03-02 2021-01-22 Video processing method and related device WO2021175040A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010136223.6 2020-03-02
CN202010136223.6A CN111274446A (zh) 2020-03-02 2020-03-02 Video processing method and related device

Publications (1)

Publication Number Publication Date
WO2021175040A1 (fr)

Family

ID=71002835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/073333 WO2021175040A1 (fr) Video processing method and related device

Country Status (2)

Country Link
CN (1) CN111274446A (fr)
WO (1) WO2021175040A1 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274446A (zh) * 2020-03-02 2020-06-12 Oppo广东移动通信有限公司 Video processing method and related device
CN114268750A (zh) * 2021-12-14 2022-04-01 咪咕音乐有限公司 Video processing method, apparatus, device, and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492127A (zh) * 2018-11-12 2019-03-19 网易传媒科技(北京)有限公司 Data processing method, apparatus, medium, and computing device
CN110222511B (zh) * 2019-06-21 2021-04-23 杭州安恒信息技术股份有限公司 Malware family identification method, apparatus, and electronic device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8953836B1 (en) * 2012-01-31 2015-02-10 Google Inc. Real-time duplicate detection for uploaded videos
CN103631786A (zh) * 2012-08-22 2014-03-12 腾讯科技(深圳)有限公司 Method and device for clustering video files
CN103678702A (zh) * 2013-12-30 2014-03-26 优视科技有限公司 Video deduplication method and device
CN104008139A (zh) * 2014-05-08 2014-08-27 北京奇艺世纪科技有限公司 Method and device for creating a video index table, and method and device for recommending videos
CN108307240A (zh) * 2018-02-12 2018-07-20 北京百度网讯科技有限公司 Video recommendation method and device
CN108875062A (zh) * 2018-06-26 2018-11-23 北京奇艺世纪科技有限公司 Method and device for determining repeated videos
CN111274446A (zh) * 2020-03-02 2020-06-12 Oppo广东移动通信有限公司 Video processing method and related device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938649A (zh) * 2021-09-24 2022-01-14 成都智元汇信息技术股份有限公司 Alarm message deduplication method and device
CN113965772A (zh) * 2021-10-29 2022-01-21 北京百度网讯科技有限公司 Live video processing method and apparatus, electronic device, and storage medium
CN113965772B (zh) * 2021-10-29 2024-05-10 北京百度网讯科技有限公司 Live video processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN111274446A (zh) 2020-06-12

Similar Documents

Publication Publication Date Title
WO2021175040A1 (fr) Video processing method and related device
CN109493350B (zh) Portrait segmentation method and device
CN110348294B (zh) Method, device and computer equipment for locating charts in PDF documents
US11886492B2 (en) Method of matching image and apparatus thereof, device, medium and program product
WO2020024744A1 (fr) Image feature point detection method, terminal device, and storage medium
US11714921B2 (en) Image processing method with ash code on local feature vectors, image processing device and storage medium
CN104661037B (zh) Method and system for detecting tampering of a compressed image quantization table
WO2019238125A1 (fr) Information processing method, related device, and computer storage medium
CN112614110B (zh) Method, device and terminal equipment for evaluating image quality
WO2022166258A1 (fr) Behavior recognition method and apparatus, terminal device, and computer-readable storage medium
WO2019127833A1 (fr) System language switching method, terminal device, apparatus, and readable storage medium
CN106503112B (zh) Video retrieval method and device
CN112581355A (zh) Image processing method and device, electronic device, and computer-readable medium
CN108960041B (zh) Image feature extraction method and device
WO2021051562A1 (fr) Facial feature point positioning method and apparatus, computing device, and storage medium
CN112598074B (zh) Image processing method and device, computer-readable storage medium, and electronic device
CN116797510A (zh) Image processing method and device, computer equipment, and storage medium
WO2021164329A1 (fr) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN111143619B (zh) Video fingerprint generation method, retrieval method, electronic device, and medium
EP4275152A1 (fr) Method for training a neural network configured to convert 2D images into 3D models
CN110619362B (zh) Video content comparison method and device based on perception and aberration
Du et al. Image hashing for tamper detection with multiview embedding and perceptual saliency
CN113743533A (zh) Picture clustering method, device, and storage medium
CN113191376A (zh) Image processing method and device, electronic device, and readable storage medium
CN112487943A (zh) Key frame deduplication method, device, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21763545

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21763545

Country of ref document: EP

Kind code of ref document: A1