CN112653885B - Video repetition degree acquisition method, electronic equipment and storage medium - Google Patents

Video repetition degree acquisition method, electronic equipment and storage medium Download PDF

Info

Publication number
CN112653885B
CN112653885B
Authority
CN
China
Prior art keywords
video
image
video image
extracted
images
Prior art date
Legal status
Active
Application number
CN202011455839.6A
Other languages
Chinese (zh)
Other versions
CN112653885A (en)
Inventor
崔英林
Current Assignee
Shanghai Lianshang Network Technology Co Ltd
Original Assignee
Shanghai Lianshang Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Lianshang Network Technology Co Ltd filed Critical Shanghai Lianshang Network Technology Co Ltd
Priority to CN202011455839.6A priority Critical patent/CN112653885B/en
Publication of CN112653885A publication Critical patent/CN112653885A/en
Application granted granted Critical
Publication of CN112653885B publication Critical patent/CN112653885B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/004 Diagnosis, testing or measuring for digital television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a video repetition degree acquisition method, an electronic device and a storage medium. The method may include: extracting M frames of video images from a first video, where M is a positive integer greater than one and less than or equal to the total number of frames included in the first video, and acquiring a corresponding reverse-sequence image for each extracted video image; extracting N frames of video images from a second video, where N is a positive integer greater than one and less than or equal to the total number of frames included in the second video, and acquiring, for each extracted video image, the similarity between that video image and each reverse-sequence image; and determining the repetition degree between the first video and the second video according to the acquired similarities. Applying the disclosed scheme saves computing resources and time, improves processing efficiency, and so on.

Description

Video repetition degree acquisition method, electronic equipment and storage medium
Technical Field
The present disclosure relates to video recognition technology, and in particular, to a video repeatability acquisition method, an electronic device, and a storage medium.
Background
In practical applications, it is often necessary to obtain the degree of repetition between one video and another. Existing video repetition degree acquisition methods are relatively complex, typically requiring a variety of intricate computations and therefore consuming considerable computing resources and time.
Disclosure of Invention
The disclosure provides a video repeatability acquisition method, electronic equipment and a storage medium.
A video repetition degree acquisition method, comprising:
extracting M frames of video images from a first video, wherein M is a positive integer greater than one and smaller than or equal to the total frame number included in the first video, and respectively acquiring corresponding reverse sequence images for each extracted frame of video image;
extracting N frames of video images from a second video, wherein N is a positive integer greater than one and less than or equal to the total frame number included in the second video, and respectively acquiring the similarity between the video images and each reverse sequence image for each extracted frame of video image;
and determining the repeatability between the first video and the second video according to the acquired similarity.
An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a method as described above.
A computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
An embodiment of the above disclosure has the following advantages or benefits: video images can be extracted from the first video and the second video respectively; the reverse-sequence images of the video images extracted from the first video can be acquired, and the similarity between each video image extracted from the second video and each reverse-sequence image can be obtained; the repetition degree between the two videos can then be determined from the obtained similarities. The whole process is fast and easy to implement, saves computing resources and time, and improves processing efficiency.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flowchart of an embodiment of a video repetition degree acquisition method according to the present disclosure;
FIG. 2 is a schematic diagram of an optimal continuous path according to the present disclosure;
FIG. 3 is a schematic diagram of the overall implementation process of the video repetition degree acquisition method according to the present disclosure;
FIG. 4 is a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In addition, it should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B both exist, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Fig. 1 is a flowchart of an embodiment of a video repeatability obtaining method according to the present disclosure. As shown in fig. 1, the following detailed implementation is included.
In step 101, M frames of video images are extracted from the first video, where M is a positive integer greater than one and less than or equal to the total number of frames included in the first video, and corresponding reverse sequence images are respectively acquired for each extracted frame of video image.
In step 102, N frames of video images are extracted from the second video, where N is a positive integer greater than one and less than or equal to the total number of frames included in the second video, and for each extracted frame of video image, the similarity between the video image and each reverse sequence image is obtained.
In step 103, the repetition degree between the first video and the second video is determined according to the acquired similarity.
It can be seen that, in the scheme of this method embodiment, video images can be extracted from the first video and the second video respectively; the reverse-sequence image of each video image extracted from the first video can be acquired, and the similarity between each video image extracted from the second video and each reverse-sequence image can be obtained; the repetition degree between the two videos can then be determined according to the obtained similarities. The whole process is fast and easy to implement, saves computing resources and time, and improves processing efficiency.
In the scheme described in the present disclosure, there is no limitation on how the video images are extracted from the first video. For example, each frame in the first video may be taken as an extracted video image, or one frame of video image may be extracted from the first video every L frames, where L is a positive integer whose specific value may be determined according to actual needs.
Assuming that the first video includes 100 frames, all 100 frames may be taken as extracted video images; alternatively, assuming L has a value of 1, one frame of video image may be extracted from the first video every other frame, so that 50 frames of video images are extracted in total.
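The two extraction modes above (taking every frame, or taking one frame every L frames) can be sketched as follows. This is an illustrative sketch only: `extract_frames` is a hypothetical helper operating on an already-decoded list of frames, not a function from the patent.

```python
def extract_frames(frames, interval=0):
    """Keep one frame, then skip `interval` frames, repeatedly.

    interval=0 keeps every frame; interval=1 is the L=1 case in the text,
    so a 100-frame video yields 50 extracted video images.
    """
    return frames[::interval + 1]

frames = list(range(100))                      # stand-in for 100 decoded frames
assert len(extract_frames(frames)) == 100      # every frame kept
assert len(extract_frames(frames, 1)) == 50    # one frame every other frame
```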
Likewise, the manner in which video images are extracted from the second video may include, but is not limited to: and taking each frame in the second video as the extracted video image, or extracting one frame of video image from the second video every interval L frames.
Generally, video images are extracted from the first video and from the second video in the same way. For example, each frame in both videos may be taken as an extracted video image, or one frame of video image may be extracted from each video every other frame.
Taking every frame in a video as an extracted video image yields more video images, which can improve the accuracy of subsequent processing results; interval extraction reduces the number of extracted video images and correspondingly the workload of subsequent processing, which can improve processing efficiency. Which way is adopted may be determined according to actual needs.
After M frames of video images are extracted from the first video, the corresponding reverse sequence image may be acquired for each extracted frame of video image, as described in step 101. Specifically, for each extracted video image, the values of the pixel points in the video image can be respectively inverted, so as to obtain an inverted sequence image corresponding to the video image.
Assuming that 50 frames of video images, numbered video image 1 through video image 50, are extracted from the first video, the reverse-sequence images corresponding to video image 1, video image 2, video image 3, ..., and video image 50 can be acquired respectively, yielding 50 reverse-sequence images in total.
The extracted video images are typically bitmap (BMP) images, that is, they are stored in memory in BMP format. For image storage, a 256-level (8-bit) representation is typically used, with one byte (Byte) representing the value of one pixel.
Correspondingly, for each frame of video image extracted from the first video, the Byte value of each pixel point in the video image can be inverted, thereby obtaining the reverse-sequence image corresponding to that video image.
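The byte-value inversion described above can be sketched as follows, assuming 8-bit grayscale pixels stored as nested lists; for an 8-bit value v, 255 - v equals the bitwise NOT of v.

```python
def reverse_sequence_image(pixels):
    """Invert every byte value: v -> 255 - v (bitwise NOT for 8-bit values)."""
    return [[255 - v for v in row] for row in pixels]

img = [[0, 128, 255], [10, 200, 30]]
inv = reverse_sequence_image(img)
assert inv == [[255, 127, 0], [245, 55, 225]]
assert reverse_sequence_image(inv) == img      # inverting twice restores the image
```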
In practical applications, the reverse-sequence image corresponding to a video image may be acquired as soon as that frame is extracted from the first video, or the corresponding reverse-sequence images may be acquired for the video images one by one after all M frames have been extracted. Which way is adopted may be determined according to actual needs.
As described in step 102, N frames of video images may be extracted from the second video, and for each extracted frame of video image, a similarity between the video image and each of the reverse sequence images may be obtained.
Assuming that 50 frames of video images, numbered video image 101 through video image 150, are extracted from the second video, and that there are 50 reverse-sequence images in total, then for video image 101 the similarity between it and each reverse-sequence image can be obtained, yielding 50 similarities; for video image 102, another 50 similarities can be obtained in the same way; and so on.
In practical applications, the similarity between a video image and each reverse-sequence image may be obtained as soon as that frame is extracted from the second video, or the similarities may be obtained for the video images one by one after all N frames have been extracted. Which way is adopted may be determined according to actual needs.
Specifically, the following processing may be performed for each frame of video image extracted from the second video: a coupling operation is performed between the video image and each reverse-sequence image, obtaining M coupling result images, where the video image, the reverse-sequence images and the coupling result images all have the same size; then, for each coupling result image, the number of pixel points whose value is not zero is counted, and the counting result is taken as the similarity between the video image and the reverse-sequence image corresponding to that coupling result image. For any pixel point in a coupling result image, its value may be set to zero if the value of the corresponding pixel point in the video image is the same as the value of the corresponding pixel point in the reverse-sequence image, and is non-zero otherwise; here, corresponding pixel points are those at the same position.
In the scheme of the present disclosure, the extracted video images may be of any size and may be preprocessed, that is, adjusted to a predetermined size. After adjustment, the video images extracted from the first video, the video images extracted from the second video, the reverse-sequence images and the coupling result images all have the same size, namely the predetermined size, whose specific value may be determined according to actual needs.
For each frame of video image extracted from the second video, the video image may be coupled with each of the M reverse-sequence images, thereby obtaining M coupling result images. For example, for each reverse-sequence image, a byte-wise AND operation may be performed on the corresponding pixel points of the video image and the reverse-sequence image to obtain the coupling result image corresponding to that reverse-sequence image; the number of pixel points whose value is not zero in the coupling result image may then be counted, and the counting result taken as the similarity between the video image and the reverse-sequence image.
When a video image is coupled with a reverse-sequence image, that is, when a byte-wise AND operation is performed on their corresponding pixel points, take the pixel point at coordinate (10, 10) of the resulting coupling result image as an example: if the value of the pixel point at (10, 10) in the video image is the same as the value of the pixel point at (10, 10) in the reverse-sequence image, the pixel point at (10, 10) in the coupling result image may be set to 0; otherwise it may be set to 1. The number of pixel points whose value is not 0 in the coupling result image may then be counted, and the counting result taken as the similarity between the video image and the reverse-sequence image.
Thus, for each frame of video image extracted from the second video, a result set of the following form may be obtained:

      A1   A2   A3   …
B1    19    5   10   …

wherein B1 represents one frame of video image extracted from the second video; A1, A2, A3, etc. represent the video images extracted from the first video; 19 represents the similarity between video image B1 and the reverse-sequence image corresponding to video image A1; 5 represents the similarity between video image B1 and the reverse-sequence image corresponding to video image A2; and so on.
According to the result set corresponding to each video image extracted from the second video, a result matrix of N rows and M columns can be generated.
The i-th row of the result matrix corresponds to the i-th frame of video image extracted from the second video, where 1 ≤ i ≤ N, the i-th frame being the video image at the i-th position after the N video images extracted from the second video are sorted by extraction time from earliest to latest; the j-th column corresponds to the j-th frame of video image extracted from the first video, where 1 ≤ j ≤ M, the j-th frame being the video image at the j-th position after the M video images extracted from the first video are sorted by extraction time from earliest to latest. Each element of the result matrix represents the similarity between the video image corresponding to its row and the reverse-sequence image of the video image corresponding to its column.
The result matrix may take the following form (only the first row is filled in with the example values given above; the remaining entries are elided):

      A1   A2   A3   …
B1    19    5   10   …
B2     ·    ·    ·   …
B3     ·    ·    ·   …
…

wherein B1, B2, B3, etc. represent the video images extracted from the second video, and A1, A2, A3, etc. represent the video images extracted from the first video; A1 is the video image extracted first from the first video, A2 the next, and so on, and similarly B1 is the video image extracted first from the second video, B2 the next, and so on.
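Assembling the per-frame result sets into the N-row, M-column result matrix can be sketched as follows; the helper names are illustrative, and the similarity function is passed in (for example, the byte-AND count described above).

```python
def build_result_matrix(second_images, reverse_images, similarity):
    """Row i: i-th image from the second video; column j: reverse-sequence
    image of the j-th image from the first video."""
    return [[similarity(b, inv) for inv in reverse_images]
            for b in second_images]

# Toy one-pixel "images"; similarity counts pixels whose byte-AND is non-zero.
sim = lambda b, inv: sum(1 for vb, vi in zip(b, inv) if vb & vi)
second = [[10], [200]]
reverses = [[245], [55]]            # inverses of first-video pixel values 10 and 200
assert build_result_matrix(second, reverses, sim) == [[0, 1], [1, 0]]
```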
The repetition degree between the first video and the second video may then be determined based on the result matrix. Specifically, under the principle that the path length should be as long as possible while the sum of the elements on the path should be as small as possible, these two factors may be balanced to determine an optimal continuous path from the result matrix, in which the elements lie in mutually different rows and mutually different columns; the repetition degree between the first video and the second video may then be determined according to the path length of the optimal continuous path, the sum of the elements on it, M, and the number of pixel points included in a video image.
When determining the optimal continuous path from the result matrix, the path length is desired to be as long as possible and the sum of the elements on the path as small as possible: the smaller an element's value, the greater the similarity it represents, while a longer path means a larger number of similar frames. However, path length and element sum influence each other; for example, increasing the path length may also increase the element sum. The two factors therefore need to be balanced, and the path that is optimal under a comprehensive evaluation, selected along the time continuity of the frames, is the optimal continuous path. Put another way, the path-length factor scores higher the larger its value, while the element-sum factor scores higher the smaller its value; raising one score may lower the other, so the two factors are balanced and the optimal continuous path with the highest comprehensive score is selected.
Fig. 2 is a schematic diagram of an optimal continuous path according to the present disclosure. How the optimal continuous path is determined from the result matrix is not limited here; various existing mature algorithms may be used.
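Since the patent leaves the path-search algorithm open, the sketch below is only one plausible reading, not the patent's method: candidate paths are restricted to diagonally consecutive runs (which automatically occupy distinct rows and distinct columns and follow the time order of the frames), and the two factors are balanced with an assumed linear weight `alpha`. Both the diagonal restriction and the weight are assumptions introduced here for illustration.

```python
def best_continuous_path(matrix, alpha=10.0):
    """Return (MaxLength, P) of the diagonal run maximizing
    alpha * length - sum_of_elements (longer is better, smaller sum is better)."""
    n, m = len(matrix), len(matrix[0])
    best_score, best = float("-inf"), (0, 0)
    # Diagonal runs start on the top row or the left column.
    starts = [(0, j) for j in range(m)] + [(i, 0) for i in range(1, n)]
    for si, sj in starts:
        length = total = 0
        i, j = si, sj
        while i < n and j < m:
            length += 1
            total += matrix[i][j]
            score = alpha * length - total
            if score > best_score:
                best_score, best = score, (length, total)
            i += 1
            j += 1
    return best

# Two well-matching frame pairs (elements 0) on the main diagonal win here.
assert best_continuous_path([[0, 50], [50, 0]]) == (2, 0)
```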
The repetition degree between the first video and the second video is then determined according to the path length of the optimal continuous path, the sum of the elements on the optimal continuous path, M, and the number of pixel points included in a video image.
For example, the quotient of the sum of the elements on the optimal continuous path divided by the number of pixel points included in a video image may be calculated, the quotient of the path length of the optimal continuous path divided by M may be calculated, and the product of the two quotients may be taken as the repetition degree between the first video and the second video.
That is:

Result = (P / W) × (MaxLength / M)    (1)
Wherein P represents the sum of the elements on the optimal continuous path, W represents the number of pixels included in the video image, maxLength represents the path length of the optimal continuous path, M represents the number of video images extracted from the first video, and Result represents the degree of repetition.
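Formula (1) can be written out directly; the symbol names follow the text above, and the numeric inputs in the example are illustrative values, not taken from the source.

```python
def repetition_degree(p, w, max_length, m):
    """Result = (P / W) * (MaxLength / M), per formula (1):
    P         - sum of the elements on the optimal continuous path,
    W         - number of pixel points in a video image,
    MaxLength - path length of the optimal continuous path,
    M         - number of video images extracted from the first video."""
    return (p / w) * (max_length / m)

# Illustrative numbers: 1920x1080 images, a 40-element path whose elements
# sum to 5_184_000, and M = 50 extracted images.
result = repetition_degree(5_184_000, 1920 * 1080, 40, 50)
assert abs(result - 2.0) < 1e-9
```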
Based on the above description, fig. 3 is a schematic diagram of the overall implementation process of the video repetition degree acquisition method according to the present disclosure; for its detailed implementation, refer to the related description above, which is not repeated here.
In addition, it should be noted that, for simplicity of description, the foregoing method embodiments are depicted as a series of acts, but it should be understood and appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the disclosure. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all of the preferred embodiments, and that the acts and modules referred to are not necessarily required by the present disclosure.
In summary, with the scheme of the present disclosure, the repetition degree between the first video and the second video can be determined through video image extraction, reverse-sequence image generation, the coupling operation, result matrix generation, optimal continuous path determination and repetition degree calculation. The whole process is fast and easy to implement, saves computing resources and time, and improves processing efficiency, making it suitable for use in high-concurrency engineering; in addition, the method applies to videos of any type and duration, and therefore has wide applicability.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 4 illustrates a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing apparatuses. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the apparatus 400 includes a computing unit 401 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the various methods and processes described above, such as the methods described in this disclosure. For example, in some embodiments, the methods described in the present disclosure may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. One or more steps of the methods described in this disclosure may be performed when the computer program is loaded into RAM 403 and executed by computing unit 401. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the methods described in the present disclosure by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (6)

1. A video repetition degree acquisition method, comprising:
extracting M frames of video images from a first video, where M is a positive integer greater than one and less than or equal to the total number of frames included in the first video, and acquiring a corresponding reverse sequence image for each extracted frame of video image, including: for each extracted video image, inverting the value of every pixel point in the video image to obtain the reverse sequence image corresponding to that video image;
extracting N frames of video images from a second video, where N is a positive integer greater than one and less than or equal to the total number of frames included in the second video, and acquiring, for each extracted frame of video image, the similarity between the video image and each reverse sequence image, including: performing a coupling operation between the video image and each reverse sequence image to obtain M coupling result images, the video image, the reverse sequence images, and the coupling result images all being of the same size; and, for each coupling result image, counting the number of pixel points whose values are non-zero and taking the counting result as the similarity between the video image and the reverse sequence image corresponding to that coupling result image;
determining the repetition degree between the first video and the second video according to the acquired similarities, including: generating a result matrix of N rows and M columns from the acquired similarities, where the ith row of the result matrix corresponds to the ith frame of video image extracted from the second video (1 ≤ i ≤ N), the jth column of the result matrix corresponds to the jth frame of video image extracted from the first video (1 ≤ j ≤ M), and each element of the result matrix represents the similarity between the video image corresponding to its row and the reverse sequence image of the video image corresponding to its column; determining an optimal continuous path in the result matrix by balancing path length against the sum of the elements on the path, under the principle that the path should be as long as possible and the sum of its elements as small as possible, where no two elements on the optimal continuous path share a row or a column; and calculating the quotient of the sum of the elements on the optimal continuous path and the number of pixel points included in a video image, calculating the quotient of the length of the optimal continuous path and M, and taking the product of the two quotients as the repetition degree between the first video and the second video.
2. The method of claim 1, wherein:
the extracting M frames of video images from the first video includes: taking each frame of the first video as an extracted video image, or extracting one frame of video image from the first video every L frames, where L is a positive integer; and
the extracting N frames of video images from the second video includes: taking each frame of the second video as an extracted video image, or extracting one frame of video image from the second video every L frames.
3. The method of claim 1, wherein:
for any pixel point in a coupling result image, the value of the pixel point is set to zero if the value of the corresponding pixel point in the video image is the same as the value of the corresponding pixel point in the reverse sequence image, and is set to a non-zero value otherwise; corresponding pixel points are pixel points at the same position.
4. The method of claim 1, wherein:
the video image includes: a bitmap image;
the inverting the value of each pixel point in the video image includes: inverting the byte value of each pixel point in the video image; and
the performing a coupling operation between the video image and each reverse sequence image to obtain M coupling result images includes: for each reverse sequence image, performing a byte-wise AND operation on corresponding pixel points of the video image and the reverse sequence image to obtain the coupling result image corresponding to that reverse sequence image.
5. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
6. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-4.
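The pipeline of claims 1, 3, and 4 can be sketched in code. The following is a hypothetical Python rendering, not the patented implementation: frames are assumed to be same-sized `uint8` arrays, all function names are illustrative, and, since claim 1 leaves the exact length-versus-sum trade-off of the "optimal continuous path" open, this sketch fixes the path length at min(N, M) and minimizes the element sum with a simple dynamic program over strictly monotone paths (which guarantees distinct rows and columns).

```python
import numpy as np

def inverted_frames(frames):
    """Claims 1 and 4: invert the byte value of every pixel of each extracted frame."""
    return [np.bitwise_not(f) for f in frames]  # for uint8, ~v wraps to 255 - v bitwise

def coupling(frame, inv_frame):
    """Claims 1 and 4: byte-wise AND with a reverse sequence image, then count the
    non-zero pixels of the result. Identical frames give v & ~v == 0 everywhere,
    so a count of 0 indicates a bitwise match."""
    return int(np.count_nonzero(np.bitwise_and(frame, inv_frame)))

def result_matrix(frames_a, frames_b):
    """Claim 1: N-row, M-column matrix; row i corresponds to the ith frame of the
    second video, column j to the jth frame of the first video."""
    inv = inverted_frames(frames_a)
    return np.array([[coupling(fb, ia) for ia in inv] for fb in frames_b], dtype=float)

def repetition_degree(frames_a, frames_b):
    D = result_matrix(frames_a, frames_b)
    N, M = D.shape
    K = min(N, M)       # longest path with pairwise distinct rows and columns
    INF = float("inf")
    prev = D.copy()     # minimal sums of length-1 paths ending at each element
    best = prev.min()
    for _ in range(1, K):
        # pmin[a, b] = min of prev over all i' < a, j' < b (strictly monotone step)
        pmin = np.full((N + 1, M + 1), INF)
        for i in range(N):
            for j in range(M):
                pmin[i + 1, j + 1] = min(pmin[i, j + 1], pmin[i + 1, j], prev[i, j])
        prev = D + pmin[:N, :M]   # extend every path by one element
        best = prev.min()
    pixels = frames_a[0].size
    # Claim 1, taken literally: (path sum / pixels per frame) * (path length / M)
    return (best / pixels) * (K / M)
```

Note that the final product follows the claim text literally: because the coupling count is zero for bitwise-identical frames, two identical videos yield a path sum of zero, so the product behaves as a difference score rather than a conventional similarity score.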
CN202011455839.6A 2020-12-10 2020-12-10 Video repetition degree acquisition method, electronic equipment and storage medium Active CN112653885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011455839.6A CN112653885B (en) 2020-12-10 2020-12-10 Video repetition degree acquisition method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112653885A CN112653885A (en) 2021-04-13
CN112653885B true CN112653885B (en) 2023-10-03

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101042735A (en) * 2006-03-23 2007-09-26 株式会社理光 Image binarization method and device
CN104217411A (en) * 2014-09-02 2014-12-17 济南大学 Fast splicing method for irregularly broken single-sided images
CN106791679A (en) * 2016-12-30 2017-05-31 东方网力科技股份有限公司 Method and device for determining a video transmission path
CN109246446A (en) * 2018-11-09 2019-01-18 东方明珠新媒体股份有限公司 Compare the method, apparatus and equipment of video content similitude
CN110324660A (en) * 2018-03-29 2019-10-11 北京字节跳动网络技术有限公司 Method and device for determining repeated video
CN111899252A (en) * 2020-08-06 2020-11-06 腾讯科技(深圳)有限公司 Artificial intelligence-based pathological image processing method and device
CN111935506A (en) * 2020-08-19 2020-11-13 百度时代网络技术(北京)有限公司 Method and apparatus for determining repeating video frames

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7594177B2 (en) * 2004-12-08 2009-09-22 Microsoft Corporation System and method for video browsing using a cluster index
JP2008227702A (en) * 2007-03-09 2008-09-25 Oki Electric Ind Co Ltd Motion vector search device, motion vector search method, and motion vector search program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Automatic registration algorithm for infrared and visible light images in railway scenes; Zhou Xingfang; Liu Xiuyang; Electronic Measurement Technology (No. 08); full text *


Similar Documents

Publication Publication Date Title
CN112562069B (en) Method, device, equipment and storage medium for constructing three-dimensional model
CN113436100B (en) Method, apparatus, device, medium, and article for repairing video
CN112561079A (en) Distributed model training apparatus, method and computer program product
CN112488060B (en) Target detection method, device, equipment and medium
CN112580732B (en) Model training method, device, apparatus, storage medium and program product
CN114693934B (en) Training method of semantic segmentation model, video semantic segmentation method and device
CN113657483A (en) Model training method, target detection method, device, equipment and storage medium
CN113378855A (en) Method for processing multitask, related device and computer program product
CN114511743B (en) Detection model training, target detection method, device, equipment, medium and product
CN114463551A (en) Image processing method, image processing device, storage medium and electronic equipment
CN113344213A (en) Knowledge distillation method, knowledge distillation device, electronic equipment and computer readable storage medium
CN115499707B (en) Video similarity determination method and device
CN112653885B (en) Video repetition degree acquisition method, electronic equipment and storage medium
CN115146226B (en) Stream data processing method, device and equipment based on tensor compression method
CN115294396B (en) Backbone network training method and image classification method
CN113642654B (en) Image feature fusion method and device, electronic equipment and storage medium
CN113361621B (en) Method and device for training model
CN113361575B (en) Model training method and device and electronic equipment
CN113556575A (en) Method, apparatus, device, medium and product for compressing data
CN113792804A (en) Training method of image recognition model, image recognition method, device and equipment
CN114093006A (en) Training method, device and equipment of living human face detection model and storage medium
CN115641481A (en) Method and device for training image processing model and image processing
CN113657482A (en) Model training method, target detection method, device, equipment and storage medium
CN114282664A (en) Self-feedback model training method and device, road side equipment and cloud control platform
CN112561061A (en) Neural network thinning method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant