CN106649505A - Video matching method and application and computing equipment - Google Patents
- Publication number: CN106649505A (application CN201610889659.6A)
- Authority: CN (China)
- Prior art keywords
- video
- characteristic information
- video block
- subtree
- dimension
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/70 — Information retrieval; database structures therefor; file system structures therefor, of video data
- G06F16/71 — Indexing; data structures therefor; storage structures
- H04N19/124 — Quantisation
- H04N19/137 — Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/40 — Video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
Abstract
The invention discloses a video matching method, application and computing device. The video matching application comprises an acquisition unit, a partitioning unit, a feature extraction unit, a k-dimension tree construction unit and a matching unit. The acquisition unit acquires a first video and a second video. Taking each pixel point as a center, the partitioning unit selects an image block of a predetermined window size on the current frame and image blocks at the corresponding positions on multiple adjacent frames before and after the current frame, and takes the image blocks selected from the current frame and the adjacent frames as the video block corresponding to that pixel point. The feature extraction unit is adapted to perform a Walsh-Hadamard transform on the video blocks, so that at least a part of the video features of each video block is concentrated into predetermined dimensions, and extracts the information of these predetermined dimensions as feature information. The k-dimension tree construction unit builds a k-dimension tree over the first video. The matching unit is adapted to search the k-dimension tree for the feature information most similar to the feature information of the second video, which serves as the matching feature information.
Description
Technical field
The present invention relates to the field of video technology, and more particularly to a method, an application and a computing device for matching video.
Background technology
Video matching technology is widely used in application scenarios such as video compression and video search. At present, video matching algorithms typically operate in the two-dimensional space of image frames. In other words, for each pixel point in every frame of a video sequence, its neighborhood block is generally taken as its corresponding image block. Existing matching techniques determine the similarity between pixel points by matching the similarity of image blocks.
However, this existing image-block matching approach does not adequately consider the correlation of the video frame sequence along the time dimension.
Therefore, the present invention proposes a new technical scheme for matching video.
Summary of the invention
To this end, the present invention provides a new technical scheme for matching video that effectively solves at least one of the above problems.
According to an aspect of the present invention, there is provided a method of matching video, suitable to be executed in a computing device. The method comprises the following steps. A first video and a second video to be matched are obtained. In each of the first video and the second video, taking each pixel point of every frame image as a center, an image block of a predetermined window size is selected on the current frame, and image blocks at the corresponding spatial position in multiple adjacent frames before and after that pixel point are selected; the image blocks selected from the current frame and the adjacent frames are taken as the video block corresponding to that pixel point. A Walsh-Hadamard transform is performed on the video blocks of the first video and of the second video respectively, so that at least a part of the video features in each video block is concentrated into predetermined dimensions. The information of the predetermined dimensions of each transformed video block is extracted as the feature information of that video block. One dimension of the feature information of the video blocks of the first video is selected, and a k-dimension tree (Kd-tree) over the feature information of all video blocks of the first video is built based on the selected dimension. For the feature information of each video block to be matched of the second video, the feature information with the highest similarity to it is searched for in the built k-dimension tree, as its corresponding matching feature information.
Optionally, in the method of matching video according to the invention, the step of performing the Walsh-Hadamard transform on the video blocks of the first video and of the second video respectively, so that at least a part of the video features in each video block is concentrated into predetermined dimensions, comprises: for each video block to be transformed, performing the transform according to the following formula:
Ṽ = Hn · V · Hn^T
where Hn is a Hadamard matrix, V is the matrix of the video block to be transformed, and Ṽ is the matrix of the transformed video block.
Optionally, in the method of matching video according to the invention, the operation of building, based on the selected dimension, the k-dimension tree (Kd-tree) over the feature information of all video blocks of the first video comprises recursively performing an operation of building a tree structure until the number of nodes in every lowest-level left subtree and lowest-level right subtree is less than a threshold. Each execution of the operation of building a tree structure comprises the following steps. Among the feature information of the video blocks belonging to the tree structure to be built, the feature information of the video block whose value in the selected dimension is the median is taken as the root node of the tree structure to be built. Feature information smaller than the median in that dimension is assigned to the left subtree of the root node. Feature information larger than the median in that dimension is assigned to the right subtree of the root node.
Optionally, in the method of matching video according to the invention, the step of searching the built k-dimension tree of the first video for the feature information with the highest similarity to the feature information of each video block to be matched of the second video, as its corresponding matching feature information, comprises: for the feature information of the video block to be matched, recursively performing an operation of selecting a subtree until the selected subtree is one of the lowest-level left subtrees or one of the lowest-level right subtrees. Each execution of the operation of selecting a subtree comprises the following steps. It is judged whether the root node of the currently selected tree is, in the selected dimension, greater than the feature information to be matched. When it is greater than the feature information to be matched, the left subtree of the current tree is selected. When it is less than the feature information to be matched, the right subtree of the current tree is selected. The similarity between the feature information to be matched and each node in the selected lowest-level subtree is calculated, and the node with the highest similarity serves as the matching feature information.
Optionally, in the method of matching video according to the invention, the operation of calculating the similarity between the feature information of the video block to be matched and each node in the selected lowest-level subtree comprises the following steps. The Euclidean distance between the feature information of the video block to be matched and any node in the selected lowest-level subtree is calculated according to the following formula:
d(p, q) = sqrt((p1 − q1)^2 + (p2 − q2)^2 + … + (pN − qN)^2)
where p denotes the feature information of the video block to be matched, q denotes a node in the selected lowest-level subtree, and p and q are both N-dimensional vectors. The similarity of p and q is determined according to the calculated Euclidean distance.
Optionally, the method of matching video according to the invention further comprises the following steps. Based on the lowest-level subtrees corresponding to at least a part of the pixel points in the neighborhood of the pixel point corresponding to the video block to be matched, the similarity between the feature information of the video block to be matched and the nodes in each such lowest-level subtree is calculated in turn. When the highest similarity obtained by this calculation is greater than the similarity corresponding to the current matching feature information, the matching feature information is updated to the node corresponding to this highest similarity.
Optionally, the method of matching video according to the invention further comprises the following steps. Based on the lowest-level subtrees corresponding to the pixel points at the corresponding spatial position on at least one of the adjacent frames before and after the pixel point corresponding to the video block to be matched, the similarity between the feature information of the video block to be matched and the nodes in each corresponding lowest-level subtree is calculated in turn. When the highest similarity obtained by this calculation is greater than the similarity corresponding to the current matching feature information, the matching feature information is updated to the node corresponding to this highest similarity.
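The refinement steps above can be sketched as follows, under the assumptions that nodes are (name, vector) pairs, that similarity is inversely related to Euclidean distance, and that the lowest-level subtrees of the neighborhood or co-located adjacent-frame pixels are already available as flat lists; the function name and data layout are illustrative, not from the patent:

```python
import math

def refine(current_best, q, candidate_leaves):
    """Sweep extra lowest-level subtrees (e.g. those of neighborhood pixels,
    or of co-located pixels on adjacent frames) and keep the nearest node.

    current_best     : (name, vector) node found by the initial k-d tree search.
    q                : feature vector of the video block to be matched.
    candidate_leaves : iterable of leaves, each a list of (name, vector) nodes.
    """
    best = current_best
    for leaf in candidate_leaves:
        for node in leaf:
            # smaller Euclidean distance = higher similarity
            if math.dist(node[1], q) < math.dist(best[1], q):
                best = node
    return best
```

For example, refine(("q_cur", (2.0,)), (2.5,), [[("q_a", (4.0,))], [("q_b", (2.6,))]]) updates the match to q_b, whose distance 0.1 beats the current best's 0.5.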
According to another aspect of the present invention, there is provided an application for matching video, suitable to reside in a computing device. The application comprises an acquisition unit, a partitioning unit, a feature extraction unit, a k-dimension tree construction unit and a matching unit. The acquisition unit is adapted to obtain a first video and a second video to be matched. The partitioning unit is adapted to, in each of the first video and the second video, taking each pixel point of every frame image as a center, select an image block of a predetermined window size on the current frame and image blocks at the corresponding spatial position in multiple adjacent frames before and after that pixel point; the partitioning unit takes the image blocks selected from the current frame and the adjacent frames as the video block corresponding to that pixel point. The feature extraction unit is adapted to perform a Walsh-Hadamard transform on the video blocks of the first video and of the second video respectively, so that at least a part of the video features in each video block is concentrated into predetermined dimensions. The feature extraction unit extracts the information of the predetermined dimensions of each transformed video block as the feature information of that video block. The k-dimension tree construction unit is adapted to select one dimension of the feature information of the video blocks of the first video, and to build, based on the selected dimension, a k-dimension tree (Kd-tree) over the feature information of all video blocks of the first video. For the feature information of each video block to be matched of the second video, the matching unit is adapted to search the built k-dimension tree for the feature information with the highest similarity to it, as its corresponding matching feature information.
According to a further aspect of the invention, there is provided a computing device comprising the application for matching video according to the present invention.
In summary, the technical scheme for matching video according to the present invention can build three-dimensional video blocks (that is, a video block contains the related information of a pixel point in both its spatial neighborhood and the time dimension), and can concentrate, by means of the Walsh-Hadamard transform, the feature information scattered over the multiple dimensions of each video block. On this basis, the technical scheme of the present invention can perform a dimensionality-reduction operation on the video blocks and characterize the originally high-dimensional video blocks by low-dimensional feature information. Further, the technical scheme of the present invention can organize the set of feature information of the video to be searched (i.e., the first video in the present invention) into a k-dimension tree structure. In this way, the technical scheme for matching video according to the present invention can search the k-dimension tree rapidly and determine a convergence range with relatively high similarity to the video block to be matched (i.e., a video block of the second video in the present invention); in the present invention, this range is the selected lowest-level subtree. Further, the technical scheme for matching video according to the present invention can quickly select from this convergence range the node with the highest similarity to the feature information of the video block to be matched, and take that node as the matching feature information. In addition, by searching for the node with the highest similarity to the feature information to be matched in the lowest-level subtrees corresponding to the corresponding positions on the adjacent frames or to the neighborhood pixel points, the technical scheme for matching video according to the present invention can further improve matching accuracy.
Description of the drawings
In order to realize the above and related objects, certain illustrative aspects are described herein in conjunction with the following description and the accompanying drawings. These aspects indicate various ways in which the principles disclosed herein may be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout the disclosure, the same reference numerals generally refer to the same parts or elements.
Fig. 1 shows the schematic diagram of computing device 100 according to some embodiments of the invention;
Fig. 2 shows the flow chart of the method 200 of matching video according to some embodiments of the invention;
Fig. 3 shows the schematic diagram of picture frame sequence according to an embodiment of the invention;
Fig. 4A-4B show schematic diagrams of a process of building a k-dimension tree according to an embodiment of the invention;
Fig. 5 shows a flow chart of a method 500 of matching video according to further embodiments of the invention; and
Fig. 6 shows the schematic diagram of the application 600 of matching video according to some embodiments of the invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the accompanying drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth herein. On the contrary, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
Fig. 1 shows a block diagram of a computing device 100 according to some embodiments of the invention. In a basic configuration 102, the computing device 100 typically comprises a system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processors 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP) or any combination thereof. The processor 104 may include one or more levels of cache, such as a level-1 cache 110 and a level-2 cache 112, a processor core 114 and registers 116. The example processor core 114 may include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP core) or any combination thereof. An example memory controller 118 may be used together with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, the system memory 106 may be any type of memory, including but not limited to volatile memory (such as RAM) and non-volatile memory (such as ROM or flash memory), or any combination thereof. The system memory 106 may include an operating system 120, one or more applications 122 and program data 124.
The computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (for example, output devices 142, peripheral interfaces 144 and communication devices 146) to the basic configuration 102 via a bus/interface controller 130. Example output devices 142 include a graphics processing unit 148 and an audio processing unit 150, which may be configured to facilitate communication with various external devices such as a display or loudspeakers via one or more A/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication, via one or more I/O ports 158, with external devices such as input devices (for example a keyboard, mouse, pen, voice input device or touch input device) or other peripherals (for example a printer or scanner). An example communication device 146 may include a network controller 160, which may be arranged to facilitate communication with one or more other computing devices 162 over a network communication link via one or more communication ports 164.
The network communication link may be one example of a communication medium. A communication medium may typically be embodied as computer-readable instructions, data structures or program modules in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery medium. A "modulated data signal" may be a signal one or more of whose characteristics are set or changed in such a manner as to encode information in the signal. As non-limiting examples, communication media may include wired media such as a wired network or a dedicated-line network, and various wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) or other wireless media. The term computer-readable medium as used herein may include both storage media and communication media.
The computing device 100 may be implemented as part of a small-sized portable (or mobile) electronic device, or as a personal computer including desktop and notebook configurations. The computing device 100 may also be implemented as a server, which may for example be configured as a node in a cluster system for processing large amounts of data. In a typical application scenario, the computing device 100 needs to perform matching operations on large amounts of video data. Accordingly, the computing device 100 may execute a method of matching video (for example, the method 200 or 500 below). The applications 122 may include an application for matching video (for example, the application 600 below).
Fig. 2 shows a flow chart of a method 200 of matching video according to some embodiments of the invention. The method 200 is suitable to be executed in various computing devices (for example, the computing device 100).
As shown in Fig. 2, the method 200 starts from step S210. In step S210, a first video and a second video to be matched are obtained. Here, the first video and the second video are each a sequence of image frames, and may be in various pixel formats such as RGB or YUV. To allow the matching operation on the video frame sequences, the time spans of the first video and the second video may be the same (or roughly the same), but are not limited thereto. It should be noted that the first and second videos obtained in step S210 may be video data stored in the computing device, or network video packets received in real time; the present invention places no undue restriction on this.
For the first and second videos obtained in step S210, the method 200 executes step S220. In step S220, taking each pixel point in every frame image of the first video and of the second video as a center, an image block of a predetermined window size is selected on the image frame where the pixel resides (i.e., the current frame). In addition, step S220 also selects the image blocks at the corresponding spatial position in multiple adjacent frames before and after that pixel point. Here, an image block at the corresponding spatial position generally has the same size as the image block on the current frame. In this way, step S220 can take the image blocks selected on the current frame and on the adjacent frames as the video block corresponding to that pixel point. Obviously, the video block obtained in this step is a three-dimensional image block (different from the two-dimensional image blocks of the prior art). The video blocks of step S220 are illustrated more vividly below in conjunction with Fig. 3.
Fig. 3 shows a schematic diagram of an image frame sequence according to an embodiment of the invention. Fig. 3 shows three consecutive image frames T-1, T0 and T1. Taking the pixel point a0 on T0 as an example, the video block corresponding to a0 includes the image blocks B0, B-1 and B1. Here, B0 is the neighborhood block centered on a0. It should be noted that the present invention places no undue restriction on the window size of the neighborhood block. B-1 and B1 are the corresponding image blocks on the frames adjacent before and after the image frame T0 (i.e., the current frame where a0 resides), and may be of the same size as B0. The spatial position of the central pixel a-1 (a1) of B-1 (B1) relative to T-1 (T1) is consistent with the spatial position of a0 relative to T0. Although only T-1 and T1 are shown in Fig. 3, the invention is not limited thereto; in embodiments of the present invention, a video block may also include image blocks at the corresponding spatial position on more adjacent frames (that is, image frames within a longer time range). In addition, for pixel points at the image frame boundary, step S220 may complete the image blocks by padding, which is not elaborated further here.
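The block-selection step S220 can be sketched as follows, assuming grayscale frames stored in a NumPy array and edge padding for frame-boundary pixels; the function name and parameter choices are illustrative, not from the patent:

```python
import numpy as np

def video_block(frames, t, y, x, win=4, radius=1):
    """Build the 3-D video block for the pixel at (t, y, x).

    frames : (T, H, W) array of gray values.
    win    : half-size of the spatial window (blocks are (2*win+1) squared).
    radius : number of adjacent frames taken before and after frame t.
    Boundary pixels are completed by edge padding, both spatially and in time.
    """
    # pad once so every pixel, including border pixels, gets a full block
    padded = np.pad(frames, ((radius, radius), (win, win), (win, win)),
                    mode="edge")
    t, y, x = t + radius, y + win, x + win   # shift indices into padded array
    return padded[t - radius:t + radius + 1,
                  y - win:y + win + 1,
                  x - win:x + win + 1]
```

For win=1 and radius=1 this yields a 3x3x3 block per pixel: the neighborhood block on the current frame plus the co-located blocks on the previous and next frames, matching Fig. 3.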
With the video blocks of the first video and of the second video acquired in step S220, the method 200 performs data transformation in step S230. In step S230, a Walsh-Hadamard transform is performed on the video blocks of the first video and of the second video respectively, so as to concentrate at least a part of the video features in each video block into predetermined dimensions.
As described above, each video block includes multiple image blocks, and each image block includes multiple pixel points. Therefore, each video block is a piece of multi-dimensional data; the value of each dimension is, for example, the luminance or chrominance of a pixel point, but is not limited thereto. After the Walsh-Hadamard transform, the scattered video feature information of a video block is concentrated: typically, most of the video features are concentrated, after the transform, within a predetermined range of dimensions of the data. According to an embodiment of the invention, step S230 may perform the data transformation by the following formula:
Ṽ = Hn · V · Hn^T
where Hn is a Hadamard matrix, V is the matrix of the video block to be transformed, and Ṽ is the matrix of the transformed video block. The feature values of Ṽ are largely concentrated in a small region (i.e., the predetermined dimensions).
Subsequently, the method 200 executes step S240, in which the data of the predetermined dimensions of each transformed video block is extracted, and the extracted data is taken as the feature information corresponding to that video block. In summary, through steps S230 and S240 the method 200 performs a dimension-reduction operation on each video block, and the reduced-dimension feature information contains most of the feature information of the video block before the transform. In this way, the amount of data needed to represent the features of a video block can be greatly reduced.
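Steps S230-S240 can be sketched as follows. This sketch assumes the transform takes the separable matrix form Ṽ = Hn·V·Hn^T applied to a 2-D block slice, and that the "predetermined dimensions" are the low-order k×k coefficients where the energy concentrates; the helper names and the choice of kept coefficients are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (n a power of two), built recursively."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wht_features(block, k):
    """Transform a 2-D block V as Hn @ V @ Hn.T and keep only the k-by-k
    low-order coefficients, where most of the block's energy concentrates."""
    n = block.shape[0]
    Hn = hadamard(n)
    transformed = Hn @ block @ Hn.T
    return transformed[:k, :k].ravel()
```

For a constant 4x4 block, all of the energy lands in the first coefficient: wht_features(np.ones((4, 4)), 2) yields [16, 0, 0, 0], illustrating how the scattered values are concentrated into a few dimensions.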
In order to search the feature information of the first video for data with high similarity to each video block of the second video, the method 200 also executes step S250. In step S250, one dimension of the feature information of the video blocks of the first video is selected, and a k-dimension tree (Kd-tree) over the feature information of all video blocks of the first video is built based on the selected dimension. According to an embodiment of the invention, step S250 recursively performs an operation of building a tree structure until the number of nodes in every lowest-level left subtree and lowest-level right subtree is less than a threshold.
Each execution of the operation of building a tree structure includes the following process. Among the feature information of the video blocks belonging to the tree structure to be built, the feature information of the video block whose value in the selected dimension is the median is first taken as the root node of the tree structure to be built. Then, feature information smaller than the median in that dimension is assigned to the left subtree of the root node, and feature information larger than the median in that dimension is assigned to the right subtree of the root node. It should be noted that a subtree here is a set containing at least one piece of feature information. To illustrate the process of building the k-dimension tree in the present invention more vividly, an explanation is given below in conjunction with Fig. 4A-4B.
Fig. 4A-4B show schematic diagrams of the process of building a k-dimension tree according to an embodiment of the invention. As shown in Fig. 4A and 4B, the feature information corresponding to the first video includes Q1, Q2, Q3, Q4, Q5, Q6, Q7, Q8 and Q9. Here, each piece of feature information Q1 to Q9 includes multiple dimensions, and the values of Q1 to Q9 in the selected dimension are 1, 2, 3, 4, 5, 6, 7, 8 and 9 respectively. Fig. 4A shows the result of the first execution of the tree-building operation in step S250. In Fig. 4A, the feature information Q5, corresponding to the median 5, is chosen as the root node; Q1 to Q4 are assigned to the left subtree of Q5, and Q6 to Q9 are assigned to the right subtree of Q5. The threshold in this embodiment is, for example, 3, but is not limited thereto. The number of nodes in each of the left and right subtrees of Q5 is 4 (i.e., 4 pieces of feature information), so step S250 continues the recursive operation. Fig. 4B shows the result of the second execution of the tree-building operation in step S250. In this operation, a tree structure is built for Q1 to Q4 and a tree structure is built for Q6 to Q9: Q3 becomes a root node, with Q1 and Q2 as its left subtree and Q4 as its right subtree; similarly, Q8 becomes a root node, with Q6 and Q7 as its left subtree and Q9 as its right subtree. Obviously, the numbers of nodes in the subtrees of Q3 and Q8 are all less than 3, so step S250 need not continue the recursion. It should be noted that the left and right subtrees of the root node Q5 are level-1 subtrees, while the subtrees of the root nodes Q3 and Q8 are level-2 subtrees. In the embodiment shown in Fig. 4B, the level-2 subtrees are the lowest-level subtrees (also referred to as leaves).
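The recursive tree-building operation of step S250, applied to the Q1…Q9 example above with threshold 3, can be sketched as follows; the nested-dict tree representation and (name, vector) node layout are illustrative assumptions:

```python
def build(items, dim, threshold=3):
    """Recursively build the k-d tree of step S250.

    items : list of (name, vector) feature-information entries.
    A set with fewer than `threshold` entries stays a flat list (a
    lowest-level subtree, i.e. a leaf set); otherwise the median entry
    on dimension `dim` becomes the root, smaller entries go to the left
    subtree and larger entries go to the right subtree.
    """
    if len(items) < threshold:
        return items
    items = sorted(items, key=lambda it: it[1][dim])
    mid = len(items) // 2
    return {"root": items[mid],
            "left": build(items[:mid], dim, threshold),
            "right": build(items[mid + 1:], dim, threshold)}
```

Building over Q1…Q9 with values 1…9 on the selected dimension reproduces Fig. 4B: Q5 at the top, Q3 and Q8 as the level-2 roots, and {Q1, Q2}, {Q4}, {Q6, Q7}, {Q9} as the lowest-level subtrees.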
On the basis of the Kd-tree established in step S250, step S260 of the method 200 is executed. For the characteristic information of each video block to be matched in the second video, step S260 searches the established Kd-tree for the characteristic information with the highest similarity to it, which serves as its corresponding matching characteristic information. In an embodiment according to the invention, based on the characteristic information of the video block to be matched, step S260 recursively performs the operation of selecting a subtree, until the selected subtree is one of the lowest-level left subtrees or one of the lowest-level right subtrees. Each execution of the subtree-selection operation includes the following process. First, it is judged whether the root node of the currently selected tree is greater than the characteristic information to be matched in the selected dimension. When it is greater than the characteristic information to be matched, the left subtree of the current tree is selected; when it is less than the characteristic information to be matched, the right subtree of the current tree is selected.
When a lowest-level subtree (leaf) has been selected, step S260 calculates the similarity between the characteristic information of the video block to be matched and each node in the selected lowest-level subtree, and takes the node with the highest similarity as the matching characteristic information of this video block. To illustrate the execution of step S260 more vividly, the characteristic information M1 of a video block in the second video is taken as an example. Suppose, for example, that the value of M1 in the selected dimension is 2.5. When performing the subtree-selection operation for the first time, step S260 compares M1 with Q5 in the selected dimension. M1 is less than Q5 in the selected dimension, so this operation selects the left subtree of Q5, that is, the tree rooted at Q3. Subsequently, step S260 continues the subtree-selection operation. In the second selection operation, M1 is less than Q3 in the selected dimension, so this operation selects the left subtree of Q3. Here, the left subtree of Q3 (Q1 and Q2) is a lowest-level subtree of the Kd-tree, so step S260 performs no further subtree-selection operations after the second one.
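The descent described for M1 above can be sketched as follows. This is an illustrative sketch only: the tree layout mimics the Fig. 4B example (Q5 at the root, Q3 and Q8 below it), but the numeric node values are made up for the example.

```python
# Illustrative sketch of the recursive subtree selection: at each level,
# compare the query's value in the selected dimension with the root's value
# and go left (root greater) or right (root less) until a leaf is reached.

def select_leaf(tree, query_value):
    while "leaf" not in tree:
        if tree["root"] > query_value:   # root greater: take the left subtree
            tree = tree["left"]
        else:                            # root less: take the right subtree
            tree = tree["right"]
    return tree["leaf"]

# A tree shaped like the Fig. 4B example: Q5 at the root, Q3 and Q8 below,
# with leaves {Q1, Q2}, {Q4} and {Q6, Q7}. The numeric values are invented.
tree = {
    "root": 5.0,                                  # Q5
    "left": {"root": 3.0,                         # Q3
             "left": {"leaf": ["Q1", "Q2"]},
             "right": {"leaf": ["Q4"]}},
    "right": {"root": 8.0,                        # Q8
              "left": {"leaf": ["Q6", "Q7"]},
              "right": {"leaf": []}},
}
leaf = select_leaf(tree, 2.5)   # M1's value in the selected dimension
```

With the query value 2.5, the walk goes left at Q5 and left again at Q3, ending at the leaf {Q1, Q2}, matching the worked example in the text.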
After the lowest-level subtree is selected, step S260 proceeds to calculate the similarity between the characteristic information M1 of the video block to be matched and each node (that is, Q1 and Q2) in the selected lowest-level subtree. Here, step S260 may calculate the similarity between two pieces of characteristic information in various ways. For example, step S260 may determine the similarity by calculating the distance between the two pieces of characteristic information: the smaller the distance, the higher the similarity. The distance between two pieces of characteristic information may be, for example, a Euclidean distance, a Mahalanobis distance or a Minkowski distance, but is not limited thereto. Taking the Euclidean distance as an example, step S260 may perform the distance calculation as follows:
d(p, q) = sqrt( Σ_{i=1..N} (p_i − q_i)^2 )
where p represents the characteristic information of the video block to be matched, q represents a node in the selected lowest-level subtree, and p and q are both N-dimensional vectors.
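As a minimal sketch of the leaf scan described above: among the nodes of the selected leaf, pick the one at the smallest Euclidean distance from the query feature (smaller distance meaning higher similarity, as the text states). The example vectors are invented for illustration.

```python
import math

# Sketch of step S260's leaf scan: among the leaf's nodes, pick the one with
# the smallest Euclidean distance to the query feature.

def euclidean(p, q):
    """d(p, q) = sqrt(sum_i (p_i - q_i)^2) for N-dimensional vectors."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def best_match(query, leaf_nodes):
    """Return the leaf node closest to the query (highest similarity)."""
    return min(leaf_nodes, key=lambda q: euclidean(query, q))

m1 = (2.5, 1.0, 4.0)
leaf = [(1.0, 1.0, 4.0), (6.0, 2.0, 0.0)]   # e.g. the nodes Q1 and Q2
match = best_match(m1, leaf)                 # → (1.0, 1.0, 4.0)
```

Swapping `euclidean` for a Mahalanobis or Minkowski distance, as the text allows, only changes the key function passed to `min`.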
In summary, the method 200 of matching videos according to the invention can establish three-dimensional video blocks (that is, a video block contains information about a pixel in both the spatial and temporal dimensions), and, by means of the Walsh-Hadamard transform, concentrate the characteristic information of each video block in a few dimensions. On this basis, the method 200 can perform a dimension-reduction operation on the video blocks and characterize the originally high-dimensional video blocks with low-dimensional characteristic information. Further, the method 200 of matching videos according to the invention can organize the set of characteristic information of the video to be searched (that is, the first video above) into a Kd-tree. In this way, the method 200 can search the Kd-tree rapidly and determine a convergence range (that is, the selected lowest-level subtree above) with relatively high similarity to the video block to be matched (that is, a video block of the second video). Further, the method 200 can quickly select, from this convergence range, the node whose characteristic information has the highest similarity to that of the video block to be matched, and take this node as the matching characteristic information.
Fig. 5 shows a flowchart of a method 500 of matching videos according to still other embodiments of the invention. As shown in Fig. 5, the method 500 includes steps S510, S520, S530, S540, S550 and S560. Here, the embodiments of steps S510 to S560 respectively correspond to steps S210 to S260 above and are not repeated here.
Optionally, the method 500 may further include step S570: based on the lowest-level subtrees corresponding to at least a portion of the pixels in the neighborhood of the pixel corresponding to the video block to be matched, successively calculate the similarity between the characteristic information of the video block to be matched and the nodes in these corresponding lowest-level subtrees. When the highest similarity obtained in this calculation is greater than the similarity corresponding to the current matching characteristic information (that is, the most similar characteristic information determined in step S560), the matching characteristic information is updated in step S570 to the node corresponding to this highest similarity.
In addition, the method 500 may continue with step S580 on the basis of step S560 or step S570. For a video block whose matching characteristic information has been determined in step S560 or S570, step S580 may, based on the lowest-level subtrees corresponding to the pixels at the same spatial position in at least one of the frames adjacent to the frame of the pixel corresponding to the video block to be matched (before or after it), successively calculate the similarity between the characteristic information of this video block and the nodes in these lowest-level subtrees. When the highest similarity thus calculated is greater than the similarity corresponding to the current matching characteristic information (that is, the matching characteristic information determined in step S560 or S570), the matching characteristic information is updated in step S580 to the node corresponding to the highest similarity obtained in this calculation.
In summary, by searching, in the lowest-level subtrees corresponding to the neighborhood pixels or to the same positions in the preceding and following adjacent frames, for the node with the highest similarity to the characteristic information to be matched, the method 500 can further improve matching accuracy.
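A hedged sketch of this refinement, assuming each candidate pixel (a neighborhood pixel for step S570, or the same position in an adjacent frame for step S580) maps to its own leaf of candidate nodes. The leaf contents here are hypothetical stand-ins for real subtree lookups.

```python
import math

# Sketch of the S570/S580 refinement: given the current match, also scan the
# leaves reached from neighboring pixels (or the same spatial position in
# adjacent frames), and keep whichever node ends up closest to the query.

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def refine_match(query, current_match, extra_leaves):
    best = current_match
    best_d = euclidean(query, best)
    for leaf in extra_leaves:            # one leaf per candidate pixel
        for node in leaf:
            d = euclidean(query, node)
            if d < best_d:               # higher similarity: update the match
                best, best_d = node, d
    return best

query = (2.0, 2.0)
match = refine_match(query, (5.0, 5.0),
                     extra_leaves=[[(1.5, 2.5)], [(9.0, 9.0), (2.1, 1.9)]])
```

The match is only replaced when a scanned node is strictly closer than the current one, mirroring the "greater than the current similarity" condition in the text.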
Fig. 6 shows a schematic diagram of an application 600 of matching videos according to some embodiments of the invention. The application 600 is adapted to reside in the computing device (100).
As shown in Fig. 6, the application 600 includes an acquiring unit 610, a blocking unit 620, a feature extraction unit 630, a Kd-tree construction unit 640 and a matching unit 650.
The acquiring unit 610 is adapted to acquire a first video and a second video to be matched. The more specific embodiment of the acquiring unit 610 is consistent with step S510 above and is not repeated here.
The blocking unit 620 is adapted to, for each frame image of the first video and the second video respectively, take each pixel as a center, select an image block of a predetermined window size on that frame, and select the image blocks at the same spatial position in the adjacent frames before and after that pixel. In addition, the blocking unit 620 takes the image blocks selected from that frame and the preceding and following adjacent frames as the video block corresponding to that pixel. The more specific embodiment of the blocking unit 620 is consistent with step S520 above and is not repeated here.
The feature extraction unit 630 is adapted to perform a Walsh-Hadamard transform on the video blocks of the first video and the video blocks of the second video respectively, so as to concentrate at least a portion of the video features in each video block into predetermined dimensions. On this basis, the feature extraction unit 630 extracts the information of the predetermined dimensions of each transformed video block as the characteristic information of that video block. In an embodiment according to the invention, the feature extraction unit 630 may perform the Walsh-Hadamard transform according to the following formula, where Hn is the Hadamard matrix, V is the matrix of the video block to be transformed, and V̂ is the matrix of the transformed video block. The more specific embodiment of the feature extraction unit 630 is consistent with steps S530 and S540 above and is not repeated here.
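Since the patent's exact transform formula is not reproduced in this text, the following is only a generic illustration of how a Walsh-Hadamard transform concentrates the energy of a smooth signal into a few coefficients, using the standard Sylvester construction of the Hadamard matrix; it is an assumption for illustration, not the patent's implementation.

```python
# Generic Walsh-Hadamard sketch (not the patent's formula): H_1 = [[1, 1],
# [1, -1]], and larger matrices follow Sylvester's recursion. Applying H_n to
# a smooth (here constant) signal piles its energy into one coefficient,
# which is the concentration effect the text describes.

def hadamard(n):
    """Return the 2^n x 2^n Hadamard matrix via Sylvester's construction."""
    h = [[1]]
    for _ in range(n):
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h

def transform(h, v):
    """Multiply the Hadamard matrix h by the signal vector v."""
    return [sum(hij * vj for hij, vj in zip(row, v)) for row in h]

h2 = hadamard(2)                        # the 4 x 4 Hadamard matrix
coeffs = transform(h2, [3, 3, 3, 3])    # a constant "video" signal
# All the energy lands in the first coefficient: [12, 0, 0, 0]
```

Keeping only the first few coefficients of such a transform is one way to realize the low-dimensional characteristic information the text describes.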
The Kd-tree construction unit 640 is adapted to select one dimension in the characteristic information of the video blocks of the first video, and, based on the selected dimension, establish a Kd-tree of the characteristic information of all the video blocks of the first video.
In an embodiment according to the invention, the Kd-tree construction unit 640 recursively performs the operation of establishing a tree structure until the respective node counts of all the lowest-level left subtrees and lowest-level right subtrees are less than a threshold. Each execution of the tree-construction operation proceeds as follows.
First, among the characteristic information of the video blocks belonging to the tree structure to be established, the Kd-tree construction unit 640 takes the characteristic information of the video block whose value in the selected dimension is the median as the root node of the tree structure to be established. Then, the Kd-tree construction unit 640 assigns the characteristic information smaller than the median in this dimension to the left subtree of the root node, and assigns the characteristic information greater than the median in this dimension to the right subtree of the root node. The more specific embodiment of the Kd-tree construction unit 640 is consistent with step S550 above and is not repeated here.
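The recursive median-split construction described above can be sketched in Python. This is an illustrative sketch, not the patent's implementation; the dictionary node structure and the threshold value of 3 are assumptions chosen for the example.

```python
# Illustrative sketch of the Kd-tree construction: each item is a feature
# vector, the tree always splits on one fixed selected dimension, and the
# median item in that dimension becomes the root of each subtree.

def build_tree(items, dim, threshold=3):
    """Recursively build the tree; recursion stops once a subtree would
    hold fewer than `threshold` items (it then becomes a leaf list)."""
    if len(items) < threshold:
        return {"leaf": list(items)}
    items = sorted(items, key=lambda v: v[dim])
    mid = len(items) // 2                      # median position in `dim`
    return {
        "root": items[mid],
        "left": build_tree(items[:mid], dim, threshold),       # < median
        "right": build_tree(items[mid + 1:], dim, threshold),  # > median
    }

features = [(2.0, 1.0), (9.0, 3.0), (4.0, 7.0), (8.0, 1.0),
            (7.0, 2.0), (1.0, 5.0), (5.0, 4.0)]
tree = build_tree(features, dim=0)
```

With these seven invented feature vectors and `dim=0`, the median in the first dimension is 5.0, so `(5.0, 4.0)` becomes the root, with smaller first components in the left subtree and larger ones in the right.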
For the characteristic information of each video block to be matched of the second video, the matching unit 650 is adapted to search the established Kd-tree for the characteristic information with the highest similarity to it as its corresponding matching characteristic information. In an embodiment according to the invention, for the characteristic information of the video block to be matched, the matching unit 650 recursively performs the subtree-selection operation until the selected subtree is one of the lowest-level left subtrees or one of the lowest-level right subtrees.
Each execution of the subtree-selection operation includes the following process. The matching unit 650 judges whether the root node of the currently selected tree is greater than the characteristic information to be matched in the selected dimension. When it is greater than the characteristic information to be matched, the matching unit 650 selects the left subtree of the current tree (updating the currently selected tree to this left subtree). When it is less than the characteristic information to be matched, the matching unit 650 selects the right subtree of the current tree (updating the currently selected tree to this right subtree).
After a lowest-level subtree is determined, the matching unit 650 calculates the similarity between the characteristic information of the video block to be matched and each node in the selected lowest-level subtree, and takes the node with the highest similarity as the matching characteristic information. Here, the matching unit 650 may, for example, determine the similarity of two pieces of characteristic information by calculating the distance between them. Specifically, the matching unit 650 may, for example, calculate the Euclidean distance between the characteristic information of the video block to be matched and any node in the selected lowest-level subtree according to the following formula:
d(p, q) = sqrt( Σ_{i=1..N} (p_i − q_i)^2 )
where p represents the characteristic information of the video block to be matched, q represents a node in the selected lowest-level subtree, and p and q are both N-dimensional vectors. On this basis, the matching unit 650 determines the similarity of p and q according to the calculated Euclidean distance. Alternatively, the matching unit 650 may adopt various other known distance measures such as the Mahalanobis distance or the Minkowski distance, which are not repeated here.
Optionally, based on the lowest-level subtrees corresponding to at least a portion of the pixels in the neighborhood of the pixel corresponding to the video block to be matched, the matching unit 650 may also successively calculate the similarity between the characteristic information of the video block to be matched and the nodes in these corresponding lowest-level subtrees. When the highest similarity obtained in this calculation is greater than the similarity corresponding to the current matching characteristic information, the matching unit 650 updates the matching characteristic information to the node corresponding to this highest similarity.
In addition, based on the lowest-level subtrees corresponding to the pixels at the same spatial position in at least one of the adjacent frames before and after the pixel corresponding to the video block to be matched, the matching unit 650 may also successively calculate the similarity between the characteristic information of this video block and the nodes in the corresponding lowest-level subtrees. When the highest similarity obtained in this calculation is greater than the similarity corresponding to the current matching characteristic information, the matching unit 650 updates the matching characteristic information to the node corresponding to this highest similarity. The more specific embodiments of the matching unit 650 are consistent with steps S560, S570 and S580 above and are not repeated here.
A10. The application of A8 or A9, wherein the Kd-tree construction unit is adapted to perform, in the following manner, the operation of establishing, based on the selected dimension, a Kd-tree of the characteristic information of all the video blocks of the first video: recursively performing the operation of establishing a tree structure until the respective node counts of all the lowest-level left subtrees and lowest-level right subtrees are less than a threshold, wherein each execution of the tree-construction operation includes: for the characteristic information of the video blocks belonging to the tree structure to be established, taking the characteristic information of the video block whose value in the selected dimension is the median as the root node of the tree structure to be established, assigning the characteristic information smaller than the median in this dimension to the left subtree of the root node, and assigning the characteristic information greater than the median in this dimension to the right subtree of the root node.
A11. The application of A10, wherein, for the characteristic information of each video block to be matched of the second video, the matching unit is adapted to search, in the following manner, the established Kd-tree of the first video for the characteristic information with the highest similarity to it as its corresponding matching characteristic information: for the characteristic information of the video block to be matched, recursively performing the subtree-selection operation until the selected subtree is one of the lowest-level left subtrees or one of the lowest-level right subtrees, wherein each execution of the subtree-selection operation includes: judging whether the root node of the currently selected tree is greater than the characteristic information to be matched in the selected dimension, selecting the left subtree of the current tree when it is greater than the characteristic information to be matched, and selecting the right subtree of the current tree when it is less than the characteristic information to be matched; and calculating the similarity between the characteristic information of the video block to be matched and each node in the selected lowest-level subtree, and taking the node with the highest similarity as the matching characteristic information.
A12. The application of A11, wherein the matching unit is adapted to perform, in the following manner, the operation of calculating the similarity between the characteristic information of the video block to be matched and each node in the selected lowest-level subtree: calculating the Euclidean distance between the characteristic information of the video block to be matched and any node in the selected lowest-level subtree according to the following formula:
d(p, q) = sqrt( Σ_{i=1..N} (p_i − q_i)^2 )
where p represents the characteristic information of the video block to be matched, q represents a node in the selected lowest-level subtree, and p and q are both N-dimensional vectors; and determining the similarity of p and q according to the calculated Euclidean distance.
A13. The application of A11 or A12, wherein the matching unit is further adapted to: based on the lowest-level subtrees corresponding to at least a portion of the pixels in the neighborhood of the pixel corresponding to the video block to be matched, successively calculate the similarity between the characteristic information of the video block to be matched and the nodes in these corresponding lowest-level subtrees, and, when the highest similarity obtained in this calculation is greater than the similarity corresponding to the current matching characteristic information, update the matching characteristic information to the node corresponding to this highest similarity.
A14. The application of any one of A11-A13, wherein the matching unit is further adapted to: based on the lowest-level subtrees corresponding to the pixels at the same spatial position in at least one of the adjacent frames before and after the pixel corresponding to the video block to be matched, successively calculate the similarity between the characteristic information of this video block and the nodes in the corresponding lowest-level subtrees, and, when the highest similarity obtained in this calculation is greater than the similarity corresponding to the current matching characteristic information, update the matching characteristic information to the node corresponding to this highest similarity.
Numerous specific details are set forth in the specification provided herein. It will be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to simplify the disclosure and to aid the understanding of one or more of the various inventive aspects, the features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof in the foregoing description of exemplary embodiments of the invention. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all the features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or may be further divided into a plurality of sub-modules.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components in an embodiment may be combined into one module or unit or component, and furthermore may be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as a method or a combination of method elements that can be implemented by a processor of a computer system or by other devices performing the functions. Thus, a processor having the necessary instructions for implementing such a method or method element forms a device for implementing the method or method element. Furthermore, an element of a device embodiment described herein is an example of a device for carrying out the function performed by the element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Furthermore, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, and the scope of the invention is defined by the appended claims.
Claims (10)
1. A method of matching videos, adapted to be executed in a terminal, the method comprising:
acquiring a first video and a second video to be matched;
for each frame image of the first video and the second video respectively, taking each pixel as a center, selecting an image block of a predetermined window size on that frame and selecting the image blocks at the same spatial position in the adjacent frames before and after that pixel, and taking the image blocks selected from that frame and the preceding and following adjacent frames as the video block corresponding to that pixel;
performing a Walsh-Hadamard transform on the video blocks of the first video and the video blocks of the second video respectively, so as to concentrate at least a portion of the video features in each video block into predetermined dimensions;
extracting the information of the predetermined dimensions of each transformed video block as the characteristic information of that video block;
selecting one dimension in the characteristic information of the video blocks of the first video, and establishing, based on the selected dimension, a Kd-tree of the characteristic information of all the video blocks of the first video; and
for the characteristic information of each video block to be matched of the second video, searching the established Kd-tree for the characteristic information with the highest similarity to it as its corresponding matching characteristic information.
2. The method of claim 1, wherein the step of performing a Walsh-Hadamard transform on the video blocks of the first video and the video blocks of the second video respectively, so as to concentrate at least a portion of the video features in each video block into predetermined dimensions, comprises:
for each video block to be transformed, performing the transform according to the following formula,
where Hn is the Hadamard matrix, V is the matrix of the video block to be transformed, and V̂ is the matrix of the transformed video block.
3. The method of claim 1 or 2, wherein the operation of establishing, based on the selected dimension, a Kd-tree of the characteristic information of all the video blocks of the first video comprises:
recursively performing the operation of establishing a tree structure until the respective node counts of all the lowest-level left subtrees and lowest-level right subtrees are less than a threshold,
wherein each execution of the tree-construction operation comprises:
for the characteristic information of the video blocks belonging to the tree structure to be established, taking the characteristic information of the video block whose value in the selected dimension is the median as the root node of the tree structure to be established,
and assigning the characteristic information smaller than the median in this dimension to the left subtree of the root node and the characteristic information greater than the median in this dimension to the right subtree of the root node.
4. The method of claim 3, wherein the step of searching, for the characteristic information of each video block to be matched of the second video, the established Kd-tree of the first video for the characteristic information with the highest similarity to it as its corresponding matching characteristic information comprises:
for the characteristic information of the video block to be matched, recursively performing the subtree-selection operation until the selected subtree is one of the lowest-level left subtrees or one of the lowest-level right subtrees,
wherein each execution of the subtree-selection operation comprises:
judging whether the root node of the currently selected tree is greater than the characteristic information to be matched in the selected dimension,
selecting the left subtree of the current tree when it is greater than the characteristic information to be matched, and
selecting the right subtree of the current tree when it is less than the characteristic information to be matched; and
calculating the similarity between the characteristic information to be matched and each node in the selected lowest-level subtree, and taking the node with the highest similarity as the matching characteristic information.
5. The method of claim 4, wherein the operation of calculating the similarity between the characteristic information of the video block to be matched and each node in the selected lowest-level subtree comprises:
calculating the Euclidean distance between the characteristic information of the video block to be matched and any node in the selected lowest-level subtree according to the following formula:
d(p, q) = sqrt( Σ_{i=1..N} (p_i − q_i)^2 )
where p represents the characteristic information of the video block to be matched, q represents a node in the selected lowest-level subtree, and p and q are both N-dimensional vectors; and
determining the similarity of p and q according to the calculated Euclidean distance.
6. The method of claim 4 or 5, further comprising:
based on the lowest-level subtrees corresponding to at least a portion of the pixels in the neighborhood of the pixel corresponding to the video block to be matched, successively calculating the similarity between the characteristic information of the video block to be matched and the nodes in these corresponding lowest-level subtrees; and
when the highest similarity obtained in this calculation is greater than the similarity corresponding to the current matching characteristic information, updating the matching characteristic information to the node corresponding to this highest similarity.
7. The method of any one of claims 4-6, further comprising:
based on the lowest-level subtrees corresponding to the pixels at the same spatial position in at least one of the adjacent frames before and after the pixel corresponding to the video block to be matched, successively calculating the similarity between the characteristic information of the video block to be matched and the nodes in the corresponding lowest-level subtrees; and
when the highest similarity obtained in this calculation is greater than the similarity corresponding to the current matching characteristic information, updating the matching characteristic information to the node corresponding to this highest similarity.
8. An application of matching videos, adapted to reside in a computing device, the application comprising:
an acquiring unit, adapted to acquire a first video and a second video to be matched;
a blocking unit, adapted to, for each frame image of the first video and the second video respectively, take each pixel as a center, select an image block of a predetermined window size on that frame and select the image blocks at the same spatial position in the adjacent frames before and after that pixel, and take the image blocks selected from that frame and the preceding and following adjacent frames as the video block corresponding to that pixel;
a feature extraction unit, adapted to perform a Walsh-Hadamard transform on the video blocks of the first video and the video blocks of the second video respectively, so as to concentrate at least a portion of the video features in each video block into predetermined dimensions, and to extract the information of the predetermined dimensions of each transformed video block as the characteristic information of that video block;
a Kd-tree construction unit, adapted to select one dimension in the characteristic information of the video blocks of the first video and, based on the selected dimension, establish a Kd-tree of the characteristic information of all the video blocks of the first video; and
a matching unit, adapted to, for the characteristic information of each video block to be matched of the second video, search the established Kd-tree for the characteristic information with the highest similarity to it as its corresponding matching characteristic information.
9. The application of claim 8, wherein the feature extraction unit is adapted to perform, in the following manner, the Walsh-Hadamard transform on the video blocks of the first video and the video blocks of the second video respectively, so as to concentrate at least a portion of the video features in each video block into predetermined dimensions:
for each video block to be transformed, performing the transform according to the following formula,
where Hn is the Hadamard matrix, V is the matrix of the video block to be transformed, and V̂ is the matrix of the transformed video block.
10. A computing device, comprising the application of matching videos of claim 8 or 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610889659.6A CN106649505B (en) | 2016-10-12 | 2016-10-12 | Method, application and computing device for matching videos |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106649505A true CN106649505A (en) | 2017-05-10 |
CN106649505B CN106649505B (en) | 2020-04-07 |
Family
ID=58855690
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2455316A (en) * | 2007-12-04 | 2009-06-10 | Sony Corp | Methods for matching pose of synthesized and captured images of human or animal bodies via gait phase, and generating a 3D scene using body tracking and pose. |
CN103985114A (en) * | 2014-03-21 | 2014-08-13 | 南京大学 | Surveillance video person foreground segmentation and classification method |
CN103997592A (en) * | 2014-05-29 | 2014-08-20 | 广东威创视讯科技股份有限公司 | Method and system for video noise reduction |
US9196021B2 (en) * | 2013-05-29 | 2015-11-24 | Adobe Systems Incorporated | Video enhancement using related content |
- 2016-10-12: CN application CN201610889659.6A filed; granted as CN106649505B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3889836A1 (en) | Image description information generation method and device, and electronic device | |
CN111192292B (en) | Target tracking method and related equipment based on attention mechanism and twin network | |
AU2013403805B2 (en) | Mobile video search | |
EP3937124A1 (en) | Image processing method, device and apparatus, and storage medium | |
EP3779891A1 (en) | Method and device for training neural network model, and method and device for generating time-lapse photography video | |
US9697592B1 (en) | Computational-complexity adaptive method and system for transferring low dynamic range image to high dynamic range image | |
US9466123B2 (en) | Image identification method, electronic device, and computer program product | |
CN109410131B (en) | Face beautifying method and system based on condition generation antagonistic neural network | |
CN109472764B (en) | Method, apparatus, device and medium for image synthesis and image synthesis model training | |
EP2359272A1 (en) | Method and apparatus for representing and identifying feature descriptors utilizing a compressed histogram of gradients | |
JP7089045B2 (en) | Media processing methods, related equipment and computer programs | |
CN110399826B (en) | End-to-end face detection and identification method | |
CN112488923A (en) | Image super-resolution reconstruction method and device, storage medium and electronic equipment | |
CN112950640A (en) | Video portrait segmentation method and device, electronic equipment and storage medium | |
Wang et al. | No-reference stereoscopic image quality assessment using quaternion wavelet transform and heterogeneous ensemble learning | |
CN117372782A (en) | Small sample image classification method based on frequency domain analysis | |
US7609885B2 (en) | System and method for effectively implementing a texture feature detector | |
CN116469172A (en) | Bone behavior recognition video frame extraction method and system under multiple time scales | |
JP2007157112A (en) | Method for recognizing iris by utilizing analysis of cumulative sum basis transition and apparatus thereof | |
Al-Falluji et al. | Single image super resolution algorithms: A survey and evaluation | |
CN106649505A (en) | Video matching method and application and computing equipment | |
CN111951171A (en) | HDR image generation method and device, readable storage medium and terminal equipment | |
JP2020181402A (en) | Image processing system, image processing method and program | |
CN111031390B (en) | Method for summarizing process video of outputting determinant point with fixed size | |
CN113128278A (en) | Image identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |