CN109409321A - Method and device for determining a camera motion mode - Google Patents

Method and device for determining a camera motion mode

Info

Publication number
CN109409321A
CN109409321A (application CN201811327207.4A)
Authority
CN
China
Prior art keywords
block
camera motion
motion mode
frame image
point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811327207.4A
Other languages
Chinese (zh)
Other versions
CN109409321B (en)
Inventor
刘思阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201811327207.4A priority Critical patent/CN109409321B/en
Publication of CN109409321A publication Critical patent/CN109409321A/en
Application granted granted Critical
Publication of CN109409321B publication Critical patent/CN109409321B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/02 - Affine transformations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/48 - Matching video sequences

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention provide a method and device for determining a camera motion mode. The method includes: determining a first feature point set from a current frame image of video data; determining a second feature point set from a comparison frame image of the video data, the comparison frame image and the current frame image being different frames; taking the feature points at which the first feature point set and the second feature point set match each other as a set of associated feature point pairs; calculating the geometric transformation mode of the set of associated feature point pairs; determining the geometric transformation mode as the camera motion mode corresponding to the set of associated feature point pairs; and determining one of the camera motion modes as the camera motion mode of the current frame image relative to the comparison frame image. This reduces the workload of the rough-cut editor and improves working efficiency. Moreover, the camera motion mode of the current frame image relative to the comparison frame image is determined automatically, saving labor cost and time cost.

Description

Method and device for determining a camera motion mode
Technical field
The present invention relates to the field of video processing, and in particular to a method and device for determining a camera motion mode.
Background technique
At present, during the recording of video programs such as films and TV dramas, a large amount of raw video footage is generated because of different camera positions, different shooting angles, and different shot scales. Shot scale refers to the difference in how much of the subject is shown in the camera viewfinder, which is caused by the distance between the camera lens and the subject; different shot scales can therefore be obtained by changing the motion state of the lenses at different camera positions. A rough-cut editor generally classifies video clips by camera motion mode so that post-production editors can fine-cut the clips that need fine cutting. Post-production editors can then group together clips of the same scene that share a camera motion mode and fine-cut them, expressing different moods and conveying different relationships with different lens languages, and avoiding viewers becoming bored because the shots are overly uniform and repetitive.
The rough cut described above is a preliminary selection of the raw footage followed by editing. In this rough-cut process, the rough-cut editor first browses the raw footage; the rough-cut editor then makes an initial selection of useless video clips, such as clips unrelated to the content the video program is intended to present; the useless clips are cut out of all the raw footage, and the remaining clips are kept as the clips to be fine-cut later; finally, the rough-cut editor determines the camera motion mode of each clip that needs fine cutting and labels each such clip with a tag corresponding to its camera motion mode.
Because video programs such as films and TV dramas may use dozens or even hundreds of camera positions in a single scene during recording, and one hour of shooting at each camera position can produce hundreds of hours of raw footage, the rough-cut editor has to browse hundreds of hours of video clips in order to determine the camera motion mode of the clips that need fine cutting. The workload is heavy and the working efficiency is low, which wastes a large amount of time cost and labor cost.
Summary of the invention
The purpose of embodiments of the present invention is to provide a method and device for determining a camera motion mode, so as to solve the technical problem in the prior art that the rough-cut editor has to browse hundreds of hours of video clips to determine the camera motion mode of the clips that need fine cutting, with heavy workload and low working efficiency, thereby wasting a large amount of time cost and labor cost. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides a method for determining a camera motion mode, the method comprising:
determining a first feature point set from a current frame image of video data;
determining a second feature point set from a comparison frame image of the video data, the comparison frame image and the current frame image being different frames;
taking the feature points at which the first feature point set and the second feature point set match each other as a set of associated feature point pairs;
calculating a geometric transformation mode of the set of associated feature point pairs;
determining the geometric transformation mode as the camera motion mode corresponding to the set of associated feature point pairs; and
determining one of the camera motion modes as the camera motion mode of the current frame image relative to the comparison frame image.
Further, determining the first feature point set from the current frame image of the video data comprises:
dividing the current frame image uniformly into blocks to obtain two or more first blocks;
performing feature extraction on the first blocks to obtain the feature point set in each first block;
taking the feature point sets in the first blocks as the first feature point set.
Determining the second feature point set from the comparison frame image of the video data comprises:
dividing the comparison frame image uniformly into blocks to obtain a second block corresponding to each first block;
performing feature extraction on the second block corresponding to each first block to obtain the feature point set of the second block corresponding to each first block;
taking the feature point sets of the second blocks corresponding to the first blocks as the second feature point set.
Taking the feature points at which the first feature point set and the second feature point set match each other as the set of associated feature point pairs comprises:
matching the first feature point set in each first block against the second feature point set in the second block corresponding to that first block, to obtain the feature points at which the first feature point set in each first block matches the second feature point set in the corresponding second block;
taking the feature points at which the first feature point set in each first block matches the second feature point set in the corresponding second block as the set of associated feature point pairs.
Further, calculating the geometric transformation mode of the set of associated feature point pairs comprises:
calculating, within the set of associated feature point pairs, the affine transformation matrix of the first feature point set in each first block relative to the second feature point set in the second block corresponding to that first block.
Determining the geometric transformation mode as the camera motion mode corresponding to the set of associated feature point pairs comprises:
obtaining the value of a first element in the affine transformation matrix, the value of the first element being used to characterize whether the focal length or the displacement of the lens has moved;
in the case that the value of the first element is outside a preset focal length range, determining that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the focal length of the lens has moved;
in the case that the value of the first element is greater than all values in the preset focal length range, the mode in which the focal length of the lens has moved is zooming in;
in the case that the value of the first element is less than all values in the preset focal length range, the mode in which the focal length of the lens has moved is zooming out.
Further, the method also comprises:
in the case that the value of the first element is within the preset focal length range, determining that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the displacement of the lens has moved.
Further, before obtaining the value of the first element in the affine transformation matrix, the method also comprises:
judging whether the value of each element in the affine transformation matrix satisfies a preset validity condition, the preset validity condition being a value range that constrains the value of each element;
if the value of each element in the affine transformation matrix satisfies the preset validity condition, executing the step of obtaining the value of the first element in the affine transformation matrix.
Further, before calculating the geometric transformation mode of the set of associated feature point pairs, the method also comprises:
judging whether the number of associated feature point pairs in the set is greater than a preset quantity;
if the number of associated feature point pairs in the set is greater than the preset quantity, executing the step of calculating the geometric transformation mode of the set of associated feature point pairs.
Further, in the case that the camera motion modes are two or more camera motion modes, determining one of the camera motion modes as the camera motion mode of the current frame image relative to the comparison frame image comprises:
in the case that there is, among the camera motion modes, a camera motion mode that satisfies a camera motion mode preset condition, determining the camera motion mode that satisfies the camera motion mode preset condition as the camera motion mode of the current frame image relative to the comparison frame image.
Further, the camera motion mode preset condition comprises: determining, among the camera motion modes, a camera motion mode whose number of occurrences is greater than a preset number of occurrences.
Further, the camera motion mode preset condition comprises: determining, among the camera motion modes, the camera motion mode with the highest number of occurrences.
Further, after determining one of the camera motion modes as the camera motion mode of the current frame image relative to the comparison frame image, the method also comprises:
annotating the current frame image with a label identifying the camera motion mode.
In a second aspect, an embodiment of the present invention provides a device for determining a camera motion mode, comprising:
a first obtaining module, configured to determine a first feature point set from a current frame image of video data;
a second obtaining module, configured to determine a second feature point set from a comparison frame image of the video data, the comparison frame image and the current frame image being different frames;
a matching module, configured to take the feature points at which the first feature point set and the second feature point set match each other as a set of associated feature point pairs;
a calculation module, configured to calculate a geometric transformation mode of the set of associated feature point pairs;
a third obtaining module, configured to determine the geometric transformation mode as the camera motion mode corresponding to the set of associated feature point pairs;
a fourth obtaining module, configured to determine one of the camera motion modes as the camera motion mode of the current frame image relative to the comparison frame image.
Further, the first obtaining module is specifically configured to:
divide the current frame image uniformly into blocks to obtain two or more first blocks;
perform feature extraction on the first blocks to obtain the feature point set in each first block;
take the feature point sets in the first blocks as the first feature point set.
The second obtaining module is specifically configured to:
divide the comparison frame image uniformly into blocks to obtain a second block corresponding to each first block;
perform feature extraction on the second block corresponding to each first block to obtain the feature point set of the second block corresponding to each first block;
take the feature point sets of the second blocks corresponding to the first blocks as the second feature point set.
The matching module is specifically configured to:
match the first feature point set in each first block against the second feature point set in the second block corresponding to that first block, to obtain the feature points at which the first feature point set in each first block matches the second feature point set in the corresponding second block;
take the feature points at which the first feature point set in each first block matches the second feature point set in the corresponding second block as the set of associated feature point pairs.
Further, the calculation module is specifically configured to:
calculate, within the set of associated feature point pairs, the affine transformation matrix of the first feature point set in each first block relative to the second feature point set in the second block corresponding to that first block.
The third obtaining module is specifically configured to:
obtain the value of a first element in the affine transformation matrix, the value of the first element being used to characterize whether the focal length or the displacement of the lens has moved;
in the case that the value of the first element is outside the preset focal length range, determine that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the focal length of the lens has moved;
in the case that the value of the first element is greater than all values in the preset focal length range, the mode in which the focal length of the lens has moved is zooming in;
in the case that the value of the first element is less than all values in the preset focal length range, the mode in which the focal length of the lens has moved is zooming out.
Further, the device also comprises a fifth obtaining module, configured to, in the case that the value of the first element is within the preset focal length range, determine that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the displacement of the lens has moved.
Further, the device also comprises a first judging module, configured to, before the value of the first element in the affine transformation matrix is obtained, judge whether the value of each element in the affine transformation matrix satisfies a preset validity condition, the preset validity condition being a value range that constrains the value of each element;
if the value of each element in the affine transformation matrix satisfies the preset validity condition, execute the step of obtaining the value of the first element in the affine transformation matrix.
Further, the device also comprises a second judging module, configured to, before the geometric transformation mode of the set of associated feature point pairs is calculated, judge whether the number of associated feature point pairs in the set is greater than a preset quantity;
if the number of associated feature point pairs in the set is greater than the preset quantity, execute the step of calculating the geometric transformation mode of the set of associated feature point pairs.
Further, in the case that the camera motion modes are two or more camera motion modes, the fourth obtaining module is specifically configured to:
in the case that there is, among the camera motion modes, a camera motion mode that satisfies the camera motion mode preset condition, determine the camera motion mode that satisfies the camera motion mode preset condition as the camera motion mode of the current frame image relative to the comparison frame image.
Further, the device also comprises a labeling module, configured to, after one of the camera motion modes is determined as the camera motion mode of the current frame image relative to the comparison frame image,
annotate the current frame image with a label identifying the camera motion mode.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured, when executing the program stored in the memory, to implement the method steps described in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to execute any of the methods of the first aspect.
In a fifth aspect, an embodiment of the present invention further provides a computer program product containing instructions which, when run on a computer, causes the computer to execute any of the methods of the first aspect.
With the method and device for determining a camera motion mode provided by embodiments of the present invention, a first feature point set is determined from a current frame image of video data; a second feature point set is determined from a comparison frame image of the video data, the comparison frame image and the current frame image being different frames; the feature points at which the first feature point set and the second feature point set match each other are taken as a set of associated feature point pairs; the geometric transformation mode of the set of associated feature point pairs is calculated; the geometric transformation mode is determined as the camera motion mode corresponding to the set of associated feature point pairs; and one of the camera motion modes is determined as the camera motion mode of the current frame image relative to the comparison frame image.
It can be seen that, by using the feature points at which the first feature point set in the current frame image of the video data matches the second feature point set in the comparison frame image as the set of associated feature point pairs, the geometric transformation mode of the set of associated feature point pairs can be calculated automatically, and the camera motion mode of the current frame image relative to the comparison frame image is finally obtained automatically. Compared with the prior art, the rough-cut editor does not need to browse hundreds of hours of video clips to determine the camera motion mode, which reduces the workload of the rough-cut editor and improves working efficiency. Moreover, the camera motion mode of the current frame image relative to the comparison frame image is determined automatically, saving labor cost and time cost.
Of course, implementing any product or method of the present invention does not necessarily require achieving all of the advantages described above at the same time.
Detailed description of the invention
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below.
Fig. 1 is a first schematic flowchart of the method for determining a camera motion mode provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of determining the first feature point set from the current frame image provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of obtaining the set of associated feature point pairs provided by an embodiment of the present invention;
Fig. 4 is a second schematic flowchart of the method for determining a camera motion mode provided by an embodiment of the present invention;
Fig. 5 is a first schematic diagram in which the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the displacement of the lens has moved, according to an embodiment of the present invention;
Fig. 6 is a second schematic diagram in which the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the displacement of the lens has moved, according to an embodiment of the present invention;
Fig. 7 is a schematic diagram in which the mode in which the focal length of the lens has moved is zooming in, provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram in which the mode in which the focal length of the lens has moved is zooming out, provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of the overall implementation process of the method for determining a camera motion mode provided by an embodiment of the present invention;
Fig. 10 is a third schematic flowchart of the method for determining a camera motion mode provided by an embodiment of the present invention;
Fig. 11 is a fourth schematic flowchart of the method for determining a camera motion mode provided by an embodiment of the present invention;
Fig. 12 is a schematic flowchart of a specific implementation of the method for determining a camera motion mode provided by an embodiment of the present invention;
Fig. 13 is a first schematic structural diagram of the device for determining a camera motion mode according to an embodiment of the present invention;
Fig. 14 is a second schematic structural diagram of the device for determining a camera motion mode according to an embodiment of the present invention;
Fig. 15 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below with reference to the drawings in the embodiments of the present invention.
To address the technical problem in the prior art that the rough-cut editor has to browse hundreds of hours of video clips to determine the camera motion mode of the clips that need fine cutting, with heavy workload and low working efficiency, thereby wasting a large amount of time cost and labor cost, embodiments of the present invention provide a method and device for determining a camera motion mode, which determine the camera motion mode of the current frame image relative to the comparison frame image using the following steps:
determining a first feature point set from a current frame image of video data; determining a second feature point set from a comparison frame image of the video data, the comparison frame image and the current frame image being different frames; taking the feature points at which the first feature point set and the second feature point set match each other as a set of associated feature point pairs; calculating the geometric transformation mode of the set of associated feature point pairs; determining the geometric transformation mode as the camera motion mode corresponding to the set of associated feature point pairs; and determining one of the camera motion modes as the camera motion mode of the current frame image relative to the comparison frame image.
It can be seen that, by using the feature points at which the first feature point set in the current frame image of the video data matches the second feature point set in the comparison frame image as the set of associated feature point pairs, the geometric transformation mode of the set of associated feature point pairs can be calculated automatically, and the camera motion mode of the current frame image relative to the comparison frame image is finally obtained automatically. Compared with the prior art, the rough-cut editor does not need to browse hundreds of hours of video clips to determine the camera motion mode, which reduces the workload of the rough-cut editor and improves working efficiency. Moreover, the camera motion mode of the current frame image relative to the comparison frame image is determined automatically, saving labor cost and time cost.
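Purely as an orientation aid (and not part of the claimed method), the steps just listed can be sketched end to end in Python as follows. Every helper name here (divide_into_blocks, extract_block_features, match_blocks, estimate_affine, classify_motion, vote_motion_mode) is a hypothetical placeholder that is sketched in the corresponding embodiment sections below, not an API defined by the patent.

```python
def determine_camera_motion(current_frame, comparison_frame, hb=4, wb=4):
    """End-to-end sketch: block division, per-block matching, affine fit, voting."""
    blocks_cur = divide_into_blocks(current_frame, hb, wb)
    blocks_cmp = divide_into_blocks(comparison_frame, hb, wb)
    feats_cur = extract_block_features(blocks_cur)
    feats_cmp = extract_block_features(blocks_cmp)
    block_modes = []
    for idx in blocks_cur:
        # match each first block only against its corresponding second block
        pairs = match_blocks({idx: feats_cur[idx]}, {idx: feats_cmp[idx]})
        matrix = estimate_affine(pairs)
        if matrix is not None:
            block_modes.append(classify_motion(matrix))
    return vote_motion_mode(block_modes)
```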
A method for determining a camera motion mode provided by an embodiment of the present invention is introduced first below.
The method for determining a camera motion mode provided by the embodiment of the present invention is applied to an electronic device. Further, it may be applied to a video player of the electronic device, for example to a video playing APP (Application) on a PC (personal computer) or to a video playing APP on a client.
Referring to Fig. 1, Fig. 1 is a first schematic flowchart of the method for determining a camera motion mode provided by an embodiment of the present invention. The method for determining a camera motion mode provided by the embodiment of the present invention may include the following steps:
Step 110: determining a first feature point set from a current frame image of the video data. Step 120: determining a second feature point set from a comparison frame image of the video data, the comparison frame image and the current frame image being different frames.
To clearly describe the camera motion mode of the current frame image relative to the comparison frame image, two or more frames of images are obtained from the same video data. These frames are associated images, that is, there are identical feature points among them. The frame whose camera motion mode relative to the other frames currently needs to be determined is called the current frame image, and the other frames, i.e. the frames other than the current frame image among the two or more frames, are called comparison frame images. The comparison frame images can be used to reveal the camera motion mode of the current frame image; that is, they illustrate the camera motion mode of the current frame image relative to themselves. For example, the camera motion mode of the current frame image relative to a comparison frame image may be a mode in which the displacement of the lens has moved. A comparison frame image may be an image whose camera motion mode has already been determined, or an image whose camera motion mode has not yet been determined; this is not limited here.
The video data may include a complete captured video or a part of a captured video. For example, the video data may be raw video footage, and the comparison frame image may be the frame immediately preceding the current frame image in the footage, or a frame preceding the current frame image with a preset interval of frames between them, or the frame immediately following the current frame image, or a frame following the current frame image with a preset interval of frames between them. The preset interval means that more than one frame lies between the current frame image and the comparison frame image, and it can be set according to user requirements. In this way the current frame image and the comparison frame image come from different frames of the same raw footage, which makes it convenient to determine the camera motion mode of the current frame image relative to the comparison frame image.
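As an illustration only (not the patent's prescribed implementation), reading a current frame and a comparison frame separated by a configurable interval from a video file could look like the following OpenCV sketch; the interval value and the use of cv2.VideoCapture are assumptions.

```python
import cv2

def read_frame_pair(video_path, current_idx, interval=1):
    """Read the current frame and a comparison frame `interval` frames earlier."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, current_idx - interval)
    ok_cmp, comparison_frame = cap.read()   # comparison frame image
    cap.set(cv2.CAP_PROP_POS_FRAMES, current_idx)
    ok_cur, current_frame = cap.read()      # current frame image
    cap.release()
    if not (ok_cmp and ok_cur):
        raise ValueError("could not read the requested frames")
    return current_frame, comparison_frame
```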
One way of determining the first feature point set in step 110 is to perform feature extraction directly on the current frame image to obtain the feature point set of the current frame image, and to take the feature point set of the current frame image as the first feature point set, so that the first feature point set is obtained directly.
There are cases where the camera does not move and only some feature points of the picture content in the current frame image move, for example the feature points of a person. Therefore, in order to prevent moving picture content from affecting, within the final set of associated feature point pairs, the camera motion mode corresponding to each first feature point set relative to its matched second feature point set, another way of determining the first feature point set in the embodiment of the present invention may use the following steps:
Step one: performing feature extraction on the first blocks to obtain the feature point set in each first block. Step two: taking the feature point sets in the first blocks as the first feature point set.
Referring to Fig. 2, step one above may further include the following step 111 and step 112:
Step 111: dividing the current frame image uniformly into blocks to obtain two or more first blocks.
The current frame image is divided uniformly into blocks to obtain two or more blocks, and the blocks located in the current frame image are called first blocks. These first blocks may be, but are not limited to, a uniform division of the whole image region of the current frame image into blocks. The block division may further be a division of the current frame image into blocks of a preset shape, so that two or more first blocks of the preset shape are obtained. Further, the preset shape of the first blocks can be set according to user requirements; it may be circular or rectangular, which is not limited here. With uniform block division, the feature point sets of all image regions can be used; with division into blocks of a preset shape, the blocks carrying feature point sets can be obtained from the current frame image and the feature point set of each first block determined from them, which reduces the amount of computation and improves the accuracy of the computation.
Step 111 may further include: generating a numpy matrix A from the current frame image of the video data, and dividing the numpy matrix A of the current frame image uniformly into blocks to obtain the first blocks, i.e. A_ij = A[i : i + h/hb, j : j + w/wb], where i is the row offset, ranging from 0 to h and taking the values 0, h/hb, 2h/hb, ... but not the value h; j is the column offset, ranging from 0 to w and taking the values 0, w/wb, 2w/wb, ... but not the value w; h is the height of the image, w is the width of the image, hb is the number of block divisions along the image height, wb is the number of block divisions along the image width, and A_ij denotes the first block at row i and column j of the numpy matrix A generated from the current frame image.
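A rough sketch of the uniform block division described above, under the assumption that the image dimensions are divisible by the block counts (an illustration, not the exact formulation used in the patent):

```python
import numpy as np

def divide_into_blocks(image, hb, wb):
    """Uniformly divide an image (numpy array) into hb x wb blocks."""
    h, w = image.shape[:2]
    bh, bw = h // hb, w // wb            # block height and width
    blocks = {}
    for bi in range(hb):
        for bj in range(wb):
            blocks[(bi, bj)] = image[bi * bh:(bi + 1) * bh,
                                     bj * bw:(bj + 1) * bw]
    return blocks
```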
Step 112: performing feature extraction on each first block to obtain the feature point set in the first block, and taking the feature point sets in the first blocks as the first feature point set.
The feature extraction method may be any of SIFT (Scale-Invariant Feature Transform, an algorithm for detecting local features), ORB (Oriented FAST and Rotated BRIEF, an algorithm for fast feature point extraction and description), or SURF (Speeded Up Robust Features). Step 112 above may thus further include: using the above feature extraction method, performing feature extraction on each first block A_ij to obtain the feature point set in the first block, and taking the feature point sets in the first blocks as the first feature point set. In this way the feature point set of each first block can be extracted.
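A minimal sketch of per-block feature extraction with OpenCV's ORB detector (ORB is chosen here only as one of the options the description names; the per-block feature budget and the block layout from the hypothetical divide_into_blocks helper above are assumptions):

```python
import cv2

orb = cv2.ORB_create(nfeatures=200)   # per-block feature budget (assumed value)

def extract_block_features(blocks):
    """Detect keypoints and compute descriptors inside each block."""
    features = {}
    for idx, block in blocks.items():
        gray = cv2.cvtColor(block, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        features[idx] = (keypoints, descriptors)
    return features
```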
In the above way of determining the first feature point set from the current frame image, when the current frame image is uniformly divided into first blocks, the number of blocks corresponding to the feature points of moving picture content is smaller than the number of feature points of that picture content. This reduces the influence that the moving feature points of the picture content have, within the set of associated feature point pairs, on the camera motion mode corresponding to each first feature point set relative to its matched second feature point set. Not only does this improve the accuracy of the camera motion mode corresponding to each first feature point set in the set of associated feature point pairs relative to its matched second feature point set, but, because feature extraction is confined to the first blocks of the current frame image, it also improves the efficiency of feature point set extraction.
Based on the description of the first feature point set above, similarly, one way of determining the second feature point set in step 120 is to perform feature extraction directly on the comparison frame image to obtain the feature point set of the comparison frame image, so that the second feature point set is obtained directly.
Likewise, there are cases where the camera does not move and only some feature points of the picture content move, for example the feature points of a person. Therefore, in order to prevent moving picture content from affecting, within the final set of associated feature point pairs, the camera motion mode corresponding to each first feature point set relative to its matched second feature point set, another way of determining the second feature point set in the embodiment of the present invention may use the following steps:
Step one: performing feature extraction on the second blocks to obtain the feature point set in each second block. Step two: taking the feature point sets in the second blocks as the second feature point set.
Referring to Fig. 3, step one above may further include the following step 121 and step 122:
Step 121: dividing the comparison frame image uniformly into blocks to obtain the second block corresponding to each first block.
The comparison frame image is divided uniformly into blocks to obtain two or more blocks, and the blocks located in the comparison frame image are called second blocks. The block division of the second blocks is the same as that of the first blocks except for the object being divided; the process is identical and can refer to the block division of the first blocks, so it is not repeated here.
Step 121 may further include: generating a numpy matrix B from the comparison frame image of the video data, and dividing the numpy matrix B of the comparison frame image into blocks to obtain the second blocks, i.e. B_ij = B[i : i + h/hb, j : j + w/wb], where B is the numpy matrix of the comparison frame image and B_ij denotes the second block, at row i and column j of the numpy matrix B generated from the comparison frame image, corresponding to the first block at row i and column j.
Step 122: performing feature extraction on the second block corresponding to each first block to obtain the feature point set of the second block corresponding to each first block, and taking the feature point sets of the second blocks corresponding to the first blocks as the second feature point set. The feature extraction of the second blocks is the same as that of the first blocks except for the object of the extraction; the process is identical and can refer to the feature extraction of the first blocks, so it is not repeated here. Step 122 may thus further include: using the above feature extraction method, performing feature extraction on the second block B_ij corresponding to each first block to obtain the feature point set of the second block B_ij corresponding to each first block, and taking the feature point sets of the second blocks B_ij corresponding to the first blocks as the second feature point set. In this way the feature point set of each second block can be extracted.
Step 130: taking the feature points at which the first feature point set and the second feature point set match each other as the set of associated feature point pairs.
The feature points that are identical between the first feature point set in the current frame image and the second feature point set in the comparison frame image are called the set of associated feature point pairs. The set of associated feature point pairs includes each associated feature point pair formed by a first feature point in a first block and the matching second feature point in the second block corresponding to that first block. The number of feature points in the first feature point set is greater than or equal to three, the number in the second feature point set is greater than or equal to three, and the number of feature point pairs in the set of associated feature point pairs is also greater than or equal to three.
Step 130 may obtain the set of associated feature point pairs in at least, but not limited to, the following way: matching the first feature point set in each first block against the second feature point set in the second block corresponding to that first block, to obtain the feature points at which the first feature point set in each first block matches the second feature point set in the corresponding second block; and taking these matched feature points as the set of associated feature point pairs.
Ways of obtaining the set of associated feature point pairs include the KNN (k-Nearest Neighbor) algorithm, which improves the accuracy of matching the first feature point set against the feature point sets in the second blocks. Further, the KNN algorithm may be used to execute the following steps to obtain the set of associated feature point pairs:
First step: performing feature extraction on the second blocks to obtain the feature point sets of the second blocks. Second step: using the KNN algorithm, matching the first feature point set in each first block against the second feature point set in the second block corresponding to that first block, to obtain the feature points at which they match. Third step: taking the matched feature points as the set of associated feature point pairs. In this way the accuracy of matching the first feature point set against the feature point sets in the second blocks can be improved.
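A sketch of block-wise KNN matching using OpenCV's brute-force matcher with a ratio test (the ratio test and the 0.75 threshold are common practice and are assumptions here, not requirements stated in the patent):

```python
import cv2

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # Hamming norm suits ORB descriptors

def match_blocks(features_cur, features_cmp, ratio=0.75):
    """Match each first block against its corresponding second block (k-NN, k=2)."""
    pairs = []
    for idx, (kp1, des1) in features_cur.items():
        kp2, des2 = features_cmp.get(idx, (None, None))
        if des1 is None or des2 is None or len(des1) < 2 or len(des2) < 2:
            continue
        for match in matcher.knnMatch(des1, des2, k=2):
            # keep only unambiguous matches
            if len(match) == 2 and match[0].distance < ratio * match[1].distance:
                m = match[0]
                pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs
```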
Step 140: calculating the geometric transformation mode of the set of associated feature point pairs. Step 150: determining the geometric transformation mode as the camera motion mode corresponding to the set of associated feature point pairs.
The geometric transformation mode may include any one or more of rotation, affine transformation, mirroring, and so on. Any geometric transformation mode that can determine, for the set of associated feature point pairs of the embodiment of the present invention, the geometric transformation of each first feature point set relative to its matched second feature point set falls within the protection scope of the embodiment of the present invention and is not enumerated here.
Since the geometric transformation modes differ, there are also various situations for the camera motion mode. The camera motion mode may include one or more of: a mode in which the focal length of the lens has moved, a mode in which the focal length of the lens has not moved, a mode in which the displacement of the lens has moved, and a mode in which the displacement of the lens has not moved, where the modes in which the focal length of the lens has moved include zooming in and zooming out. This is described in detail below.
With reference to Fig. 1, the embodiment of the present invention is described with the geometric transformation mode being an affine transformation. Referring to Fig. 4, step 140 may calculate the geometric transformation mode of the set of associated feature point pairs using the following steps:
Step 141: calculating, within the set of associated feature point pairs, the affine transformation matrix of the first feature point set in each first block relative to the second feature point set in the second block corresponding to that first block. Before step 141, the method also includes: generating numpy matrices from the current frame image and the comparison frame image respectively, so that the images to be processed are available as numpy matrices and the affine transformation matrix can be calculated directly.
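A sketch of estimating the 2x3 affine transformation from one block's matched point pairs with OpenCV (estimateAffine2D with its default RANSAC solver is one common way to do this and is an assumption here, not the specific solver named in the patent):

```python
import numpy as np
import cv2

def estimate_affine(pairs):
    """Estimate the affine matrix mapping comparison-frame points to current-frame points."""
    if len(pairs) < 3:                       # an affine transform needs at least 3 pairs
        return None
    src = np.float32([p[1] for p in pairs])  # points in the comparison frame image
    dst = np.float32([p[0] for p in pairs])  # points in the current frame image
    matrix, inliers = cv2.estimateAffine2D(src, dst)
    return matrix                            # 2x3 matrix, [[alpha, e, gamma], [epsilon, beta, delta]] in the patent's notation
```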
Step 150 may determine the geometric transformation mode as the camera motion mode corresponding to the set of associated feature point pairs using the following steps 151 to 154:
Step 151: obtaining the value of the first element in the affine transformation matrix, the value of the first element being used to characterize whether the focal length of the lens or the displacement of the lens has moved.
Step 1521: in the case that the value of the first element in the affine transformation matrix is within the preset focal length range, determining that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the displacement of the lens has moved.
The preset focal length range can be set according to user needs and industry needs. The preset focal length range may refer to, but is not limited to, the following examples; any preset focal length range that makes the determination of the camera motion mode of the current frame image relative to the comparison frame image more accurate falls within the protection scope of the embodiment of the present invention.
The embodiment of the present invention may use the preset focal length range to judge the value of the first element in the affine transformation matrix, where the affine transformation matrix may be expressed as AM_ij = [[α, ∈, γ], [ε, β, δ]], that is, AM_ij is the affine transformation matrix, α is the value of the element in row 1, column 1 of the affine transformation matrix AM_ij, ∈ is the value of the element in row 1, column 2, γ is the value of the element in row 1, column 3, ε is the value of the element in row 2, column 1, β is the value of the element in row 2, column 2, and δ is the value of the element in row 2, column 3. The elements of the affine transformation matrix include all of the above elements. Any of these elements can serve as the first element, and any element that can indicate whether the focal length of the lens or the displacement of the lens has moved falls within the protection scope of the embodiment of the present invention. The embodiment of the present invention proceeds as follows with the element α as the first element.
Illustratively, the preset focal length range is |α - 1| < 1.0 × 10⁻¹⁰. If the value of the first element is within the preset focal length range, the focal length of the lens has not moved, that is, the zoom factor is 0. This indicates that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the focal length of the lens has not moved, and that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the displacement of the lens has moved. The displacement direction angle Θ can be computed from the translation elements γ and δ of the affine transformation matrix, for example as Θ = arctan2(δ, γ), where Θ represents the angle, measured to the right, relative to the image horizontal. The displacement scale parameter is obtained by applying a mapping function to the translation magnitude, for example σ(√(γ² + δ²)), where σ denotes the mapping function. The mapping function in the embodiment of the present invention may be the hyperbolic tangent function or the atanh function; any mapping function that can realize the embodiment of the present invention falls within the protection scope of the embodiment of the present invention and is not enumerated here.
The mode in which the displacement of the lens has moved may include, but is not limited to, the lens undergoing displacement movement according to the displacement direction angle Θ; displacement movement according to the displacement direction angle Θ may include moving in the direction of the angle Θ. A specific example of the mode in which the displacement of the lens has moved is given below, but it is not limited to this. Referring to Fig. 5, the preset interval is 1 frame, that is, the comparison frame image is the frame immediately preceding the current frame image. In Fig. 5, the center of each small circle indicates an associated feature point pair at which a feature point of the current frame image matches a feature point of the comparison frame image, the direction of the arrow indicates the displacement direction angle Θ, and the displacement direction angle Θ is -180 degrees. It can thus be seen that the displacement movement of the lens according to the displacement direction angle Θ may be a displacement to the left.
Referring to Fig. 6, the preset interval is 1 frame, that is, the comparison frame image is the frame immediately preceding the current frame image. In Fig. 6, the center of each small circle indicates an associated feature point pair at which a feature point of the current frame image matches a feature point of the comparison frame image, and all the associated feature point pairs in Fig. 6 form the set of associated feature point pairs. The direction of the arrow indicates the displacement direction angle Θ, and the displacement direction angle Θ is -180 degrees. It can thus be seen that the displacement movement of the lens according to the displacement direction angle Θ is a displacement to the left.
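A sketch of deriving the pan (displacement) direction and scale from one block's affine matrix; the use of arctan2 on the translation elements and tanh as the mapping function σ are assumptions consistent with the description above, not values fixed by the patent:

```python
import math

def pan_from_affine(matrix):
    """Return displacement direction angle (degrees) and mapped displacement scale."""
    gamma, delta = matrix[0][2], matrix[1][2]        # translation elements of the 2x3 matrix
    theta = math.degrees(math.atan2(delta, gamma))   # angle to the right of image horizontal
    scale = math.tanh(math.hypot(gamma, delta))      # sigma = tanh as an example mapping
    return theta, scale
```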
Step 1522: in the case that the value of the first element is outside the preset focal length range, determining that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the focal length of the lens has moved.
The mode in which the focal length of the lens has not moved may mean that the numerical value corresponding to the focal length of the lens has not moved at all, or it may mean that the numerical value corresponding to the focal length of the lens has undergone only a small movement, which is still regarded as a mode in which the focal length has not moved. A small movement here means that the value of the first element is within the preset focal length range. For practical reasons the numerical value corresponding to the focal length of the lens may undergo small movements, and treating these as no movement avoids misjudging the mode in which the focal length of the lens has moved, thereby improving the accuracy of the determined camera motion mode.
Illustratively, the preset focal length range is |α - 1| < 1.0 × 10⁻¹⁰. If the value of the first element is outside the preset focal length range, that is, |α - 1| > 1.0 × 10⁻¹⁰, the focal length of the lens has moved. This indicates that the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the focal length of the lens has moved, and in this case there is no need to consider whether the camera motion mode corresponding to the set of associated feature point pairs is a mode in which the displacement of the lens has moved.
In the case that the value of the first element is greater than all values in the preset focal length range, the movement of the focal length of the lens is zooming in, which here means pushing in (Zoom In). That is, when the first element α > 1, the mode in which the focal length of the lens has moved is zooming in, i.e. Zoom In. An example of zooming in is given below, but it is not limited to this. Referring to Fig. 7, the preset interval is 1 frame, that is, the comparison frame image is the frame immediately preceding the current frame image. In Fig. 7, the center of each small circle indicates an associated feature point pair at which a feature point of the current frame image matches a feature point of the comparison frame image, and all the associated feature point pairs in Fig. 7 form the set of associated feature point pairs. The mode in which the focal length of the lens has moved illustrated in Fig. 7 may be zooming in.
Alternatively, in the case that the value of the first element is less than all values in the preset focal length range, the mode in which the focal length of the lens has moved is zooming out, which here means the lens pulling back (Zoom Out). An example of zooming out is given below, but it is not limited to this. Referring to Fig. 8, the preset interval is 1 frame, that is, the comparison frame image is the frame immediately preceding the current frame image. In Fig. 8, the center of each small circle indicates an associated feature point pair at which a feature point of the current frame image matches a feature point of the comparison frame image. The mode in which the focal length of the lens has moved illustrated in Fig. 8 may be zooming out.
In this way the camera motion mode corresponding to the set of associated feature point pairs can be determined accurately. That is, when the first element α < 1, the movement of the focal length of the lens is zooming out, i.e. the lens pulls back (Zoom Out), and the zoom parameter is obtained by applying the mapping function, for example as σ(|α - 1|), where σ denotes the mapping function; in the embodiment of the present invention it may be the hyperbolic tangent function or the atanh function, and any mapping function that can realize the embodiment of the present invention falls within the protection scope of the embodiment of the present invention and is not enumerated here.
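A sketch of the classification rule described in steps 151 to 1522, using α (the row 1, column 1 element) as the first element and reusing the hypothetical pan_from_affine helper above; the tolerance value and the tanh(|α - 1|) zoom parameter are assumptions for illustration:

```python
import math

def classify_motion(matrix, tol=1.0e-10):
    """Classify one block's motion mode from its affine matrix."""
    alpha = matrix[0][0]                       # first element of the affine matrix
    if abs(alpha - 1.0) < tol:                 # focal length unchanged -> lens displacement
        theta, scale = pan_from_affine(matrix)
        return ("pan", theta, scale)
    zoom_param = math.tanh(abs(alpha - 1.0))   # example mapping of the zoom magnitude
    if alpha > 1.0:
        return ("zoom_in", zoom_param)         # lens pushes in (Zoom In)
    return ("zoom_out", zoom_param)            # lens pulls back (Zoom Out)
```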
Step 160, one of camera motion mode camera motion mode is determined as this frame image relative to comparison frame The camera motion mode of image.
Above-mentioned steps 160 can determine this frame image relative to comparison frame image using following at least one implementation Camera motion mode:
In one implementation, in the case where camera motion mode is a camera motion mode, this step 160, This camera motion mode can be determined as camera motion mode of this frame image relative to comparison frame image.
In another implementation, in the case where camera motion mode is more than two camera motion modes, in order to Camera motion mode of this frame image relative to comparison frame image can be obtained, in this step 160, following steps can be used, Determine camera motion mode of this frame image relative to comparison frame image:
There is a kind of the case where camera motion mode for meeting camera motion mode preset condition in camera motion mode Under, a kind of camera motion mode for meeting camera motion mode preset condition is determined as this frame image relative to the comparison frame The camera motion mode of image.
Meeting camera motion mode preset condition herein can show that the displacement of camera lens or the focal length of camera lens have been transported It is dynamic, and being capable of the condition according to set by user setting demand or industrial requirement.It in this way can be pre- by camera motion mode If condition, the camera motion mode for meeting camera motion mode preset condition is found from more than two camera motion modes.
In one case, camera motion mode preset condition may include: frequency of occurrence in determining camera motion mode Greater than the camera motion mode of default frequency of occurrence.Above-mentioned default frequency of occurrence can be configured according to user demand.It is optional , above-mentioned default frequency of occurrence is that linked character point is greater than linked character point to the half of the sum of collection or default frequency of occurrence To the half of the sum of collection.More particularly suitable camera motion mode can be found by camera motion mode in this way, thus convenient Later period mark uses camera motion mode.
Illustratively, linked character point can be 50 to the sum of collection.Wherein, linked character point each to concentration first is special Sign point set is 30 relative to the mode that the displacement of the corresponding camera lens of matched second feature point set has moved, linked character Point is to the side for concentrating focal length of each fisrt feature point set relative to the corresponding camera lens of matched second feature point set to move Formula, and it is 20 that the mode that has moved of the focal length of camera lens, which is zoom-up, default frequency of occurrence can be set to 25.
It can be seen that the frequency of occurrence for the mode that the displacement of camera lens has moved is greater than default frequency of occurrence, then by mirror The mode that the displacement of head has moved is determined as camera motion mode of this frame image relative to comparison frame image.
In other cases, the preset camera-motion-mode condition may include: the camera motion mode with the highest number of occurrences among the determined camera motion modes. In this way the most suitable camera motion mode can be found, which is convenient for later labeling.
Illustratively, the total number of associated feature point pair sets may again be 50: for 30 pair sets the lens displacement has moved, and for 20 pair sets the focal length of the lens has moved, the mode being zoom-up.
It can be seen that the lens-displacement mode has the highest number of occurrences, so it is determined as the camera motion mode of this frame image relative to the comparison frame image.
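As a minimal sketch (not part of the patent text), both preset conditions above reduce to counting how often each mode occurs among the per-pair-set motion modes. The function name, the mode strings, and the use of Python's collections.Counter are illustrative assumptions.

```python
from collections import Counter

def select_motion_mode(pair_set_modes, preset_count=None):
    """Pick one camera motion mode from the modes computed per
    associated feature point pair set.

    pair_set_modes: list of strings such as "displacement", "zoom_up",
    "zoom_out" (the names are illustrative).
    preset_count: threshold variant of the preset condition; if None,
    fall back to the highest-occurrence variant.
    """
    counts = Counter(pair_set_modes)
    if preset_count is not None:
        # Condition 1: a mode whose occurrence count exceeds the preset value.
        for mode, n in counts.items():
            if n > preset_count:
                return mode
        return None  # no mode satisfies the preset condition
    # Condition 2: the mode with the highest occurrence count.
    return counts.most_common(1)[0][0]

# Example from the text: 50 pair sets, 30 "displacement", 20 "zoom_up",
# preset count 25 -> "displacement" is selected by both conditions.
modes = ["displacement"] * 30 + ["zoom_up"] * 20
assert select_motion_mode(modes, preset_count=25) == "displacement"
assert select_motion_mode(modes) == "displacement"
```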
In the embodiment of the present invention, the feature points matched between the first feature point set in this frame image of the video data and the second feature point set in the comparison frame image are used as associated feature point pair sets; the geometric transformation mode of the associated feature point pair sets can be calculated automatically, and the camera motion mode of this frame image relative to the comparison frame image is finally obtained automatically. Compared with the prior art, a rough-cut editor does not need to browse hundreds of hours of video clips to determine the camera motion mode, which reduces the editor's workload and improves work efficiency. Moreover, the camera motion mode of this frame image relative to the comparison frame image is determined automatically, saving labor cost and time cost.
On the basis of Fig. 1 to Fig. 4, if the total number of associated feature point pair sets is small, the comparison frame image and this frame image may not actually be related, and the camera motion mode obtained for this frame image relative to the comparison frame image may therefore be inaccurate. To solve this problem, an embodiment of the present invention further provides an implementation in which, before step 140, the method further includes: judging whether the number of associated feature point pair sets is greater than a preset quantity, and if so, executing the step of calculating the geometric transformation mode of the associated feature point pair sets. Since the associated feature point pair sets are obtained by matching the feature point sets of this frame image with those of the comparison frame image, and the camera motion mode of this frame image relative to the comparison frame image is subsequently determined from them, a larger number of associated feature point pair sets means the two images are more similar, and the determination of the camera motion mode will be more accurate.
The preset quantity can be configured according to user needs; for example, the preset quantity is greater than or equal to 10. Illustratively, it may be 20 or 30. Any value that improves the accuracy of the camera motion mode of this frame image relative to the comparison frame image falls within the protection scope of the embodiments of the present invention, and the examples are not enumerated one by one here.
In the implementation of this embodiment, the more associated feature point pair sets there are, the more similar this frame image and the comparison frame image are, and the more accurate the determination of the camera motion mode of this frame image relative to the comparison frame image will be.
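A minimal sketch of this pre-check, assuming the associated feature point pair sets are held in a plain Python list and using 20 as one of the illustrative preset quantities mentioned above; match_blocks and compute_geometric_transformation are hypothetical placeholders for the surrounding steps.

```python
def enough_pairs(associated_pair_sets, preset_quantity=20):
    """Return True only when the number of associated feature point pair
    sets exceeds the preset quantity, i.e. when step 140 (computing the
    geometric transformation) is worth executing."""
    return len(associated_pair_sets) > preset_quantity

# pair_sets = match_blocks(this_frame, compare_frame)      # hypothetical matcher
# if enough_pairs(pair_sets):
#     compute_geometric_transformation(pair_sets)          # step 140 onwards
```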
Because the values in an affine transformation matrix may deviate from the real lens motion, the accuracy of the camera motion mode of this frame image 22 of the video source material 21 relative to the comparison frame image 23 is affected. To solve this problem, referring to Fig. 9, at least one of the following implementations can be used to judge whether the value of each element in the affine transformation matrix meets a preset legal condition. Before step 151, the method further includes:
In one implementation, step 1: for each first block in the associated feature point pair sets, an affine transformation matrix 24 can be calculated from the first feature point set in that first block relative to the second feature point set in the corresponding second block; all the affine transformation matrices are shown in Fig. 7.
Step 2: judge whether the value of each element in the calculated affine transformation matrix meets the preset legal condition. Specifically, judge whether the value of each element in the current affine transformation matrix meets the preset legal condition, where the preset legal condition limits the value range of each element;
Here, the current affine transformation matrix refers to the affine transformation matrix currently being judged as to whether the values of its elements meet the preset legal condition.
The preset legal condition is set according to user needs or industry requirements. The following examples of preset legal conditions are illustrative rather than limiting; any preset legal condition that allows the camera motion mode of this frame image relative to the comparison frame image to be determined more accurately falls within the protection scope of the embodiments of the present invention.
Illustratively, the preset legal condition may include, but is not limited to: ∈ < 1.0×10⁻¹⁰, ε < 1.0×10⁻¹⁰, and α×β > 0.
Step 3: if the value of each element in the current affine transformation matrix meets the preset legal condition, obtain the next affine transformation matrix as the current affine transformation matrix and continue executing the step of obtaining the value of the first element in the affine transformation matrix; and
if the value of any element in the current affine transformation matrix does not meet the preset legal condition, discard the current affine transformation matrix, obtain the next affine transformation matrix as the current one, and return to the step of judging whether the values of its elements meet the preset legal condition, until every first block in the associated feature point pair sets has been judged against the second feature point set in its corresponding second block. At that point, all the second blocks associated between this frame image and the comparison frame image have been judged, which facilitates determining the camera motion mode of this frame image relative to the comparison frame image. Judging the affine transformation matrices one by one in this way improves the accuracy of the judgment.
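The per-block estimation and the element-wise legality check could be sketched as follows. The patent does not name an estimation routine, so cv2.estimateAffine2D is an assumed choice, and reading ∈ and ε as the off-diagonal (shear) terms and α, β as the diagonal (scale) terms of the 2×3 matrix is likewise an assumption made only for illustration.

```python
import numpy as np
import cv2

def block_affine(src_pts, dst_pts):
    """Estimate a 2x3 affine matrix from the matched points of one first
    block and its corresponding second block.  cv2.estimateAffine2D is an
    assumed estimator; the patent only says the matrix is 'calculated'."""
    if len(src_pts) < 3:
        return None  # too few pairs to estimate an affine transform
    matrix, _inliers = cv2.estimateAffine2D(
        np.asarray(src_pts, dtype=np.float32),
        np.asarray(dst_pts, dtype=np.float32),
    )
    return matrix  # may also be None when estimation fails

def is_legal(matrix, eps_limit=1.0e-10):
    """Element-wise legality check.  Interpreting the example condition as
    'the off-diagonal terms stay below eps_limit and the two diagonal (scale)
    terms have the same sign' -- this mapping of symbols is an assumption."""
    if matrix is None:
        return False
    alpha, shear1, _tx = matrix[0]
    shear2, beta, _ty = matrix[1]
    return abs(shear1) < eps_limit and abs(shear2) < eps_limit and alpha * beta > 0

# matrices = [block_affine(src, dst) for src, dst in pair_sets]  # one per block
# legal_matrices = [m for m in matrices if is_legal(m)]          # discard illegal ones
```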
With reference to Fig. 4 and Fig. 10, an embodiment of the present invention also provides an implementation in which, before step 151, the method further includes:
Step 1501: judge whether the value of each element in the affine transformation matrix meets the preset legal condition, where the preset legal condition limits the value range of each element;
Step 1502: if the value of each element in the affine transformation matrix meets the preset legal condition, execute step 151, i.e. the step of obtaining the value of the first element in the affine transformation matrix.
Step 1503: if the value of any element in the affine transformation matrix does not meet the preset legal condition, the affine transformation matrix is illegal and is discarded. In this way all affine transformation matrices are judged in a single pass, which improves the efficiency of the judgment.
With reference to Fig. 1 and Fig. 11, an embodiment of the present invention also provides an implementation in which, after step 160, the method further includes:
Step 170: annotate, in this frame image, a label identifying the camera motion mode.
The label here may be a symbol or a character; any label that can identify the camera motion mode falls within the protection scope of the embodiments of the present invention, and the examples are not enumerated one by one here. In this way the camera motion mode can be labeled on this frame image automatically, which eliminates a large amount of manual labeling work, helps post-production editors quickly retrieve the required camera motion modes, greatly shortens the post-production time, and reduces production cost.
In the implementation of this embodiment, the images in the video source material are labeled, reducing or replacing the work of the rough-cut editor, saving manual time, accelerating the production schedule of video programs, and reducing the personnel cost brought by primary editing.
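Step 170 could, for example, be realized by rendering the label directly onto the frame with OpenCV; the position, font and color below are arbitrary illustrative choices, and storing the label as frame metadata would serve the same purpose.

```python
import cv2

def annotate_motion_mode(frame, motion_mode):
    """Draw a text label identifying the camera motion mode onto this
    frame image (in place), so that editors can later retrieve it."""
    cv2.putText(
        frame,                        # numpy image of shape (h, w, 3)
        motion_mode,                  # e.g. "displacement" or "zoom_up"
        (10, 30),                     # top-left position, arbitrary choice
        cv2.FONT_HERSHEY_SIMPLEX,
        1.0,                          # font scale
        (0, 255, 0),                  # color (BGR)
        2,                            # thickness
    )
    return frame
```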
With reference to Fig. 1 and Fig. 12, a specific implementation flow of the embodiment of the present invention is illustrated as follows:
First, prior to step 110, the method may further include:
Step 100: obtain the video source material, which includes all images, the total number of frames frames_len, the width w and height h of the images, and the image size SIZE = (w, h), i.e. the size of a single frame image.
Step 101: obtain the count cnt of a counter, a preset frame interval jump_frames, and preset block-partition parameters (wb, hb), where the counter count cnt = 0 is the initial value, wb is the number of horizontal divisions and hb is the number of vertical divisions; the counter counts up to the total number of frames frames_len.
Step 102: obtain the cnt-th frame image from the video source material as this frame image;
Step 103: obtain the (cnt + jump_frames)-th frame image from the video source material as the comparison frame image. In this way this frame image 21 and the comparison frame image can be obtained first, and then step 110 is executed, so that both this frame image and the comparison frame image can be processed.
Step 104: obtain the numpy matrix of this frame image and the numpy matrix of the comparison frame image, each belonging to ℝ^(h×w×3); then continue to execute steps 110 to 160 to determine the camera motion mode of this frame image relative to the comparison frame image, where ℝ denotes the vector space and 3 is the number of color channels.
Step 105: judge whether cnt + 3*jump_frames is less than or equal to the total number of frames frames_len;
Step 106: if cnt + 3*jump_frames is less than the total number of frames frames_len, set cnt = cnt + 2*jump_frames and return to step 102;
Step 107: if cnt + 3*jump_frames is equal to the total number of frames frames_len, output the camera motion modes of all images in the video source material.
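The frame-sampling loop of steps 100 to 107 could be sketched as follows. Reading frames with OpenCV's VideoCapture and the helper determine_motion_mode, which stands in for steps 110 to 160, are assumptions; the patent only fixes the counter arithmetic.

```python
import cv2

def label_source_material(path, jump_frames=10, wb=4, hb=4):
    """Walk the video source material as in steps 100-107: take the cnt-th
    frame as this frame image and the (cnt + jump_frames)-th frame as the
    comparison frame image, then advance cnt by 2 * jump_frames.
    determine_motion_mode is a hypothetical placeholder for steps 110-160
    (wb x hb block division, matching, affine estimation, mode selection)."""
    cap = cv2.VideoCapture(path)
    frames_len = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))       # step 100
    results = {}
    cnt = 0                                                    # step 101
    while cnt + 3 * jump_frames <= frames_len:                 # step 105
        cap.set(cv2.CAP_PROP_POS_FRAMES, cnt)                  # step 102
        ok1, this_frame = cap.read()
        cap.set(cv2.CAP_PROP_POS_FRAMES, cnt + jump_frames)    # step 103
        ok2, compare_frame = cap.read()
        if ok1 and ok2:
            # steps 110-160, not shown here
            results[cnt] = determine_motion_mode(this_frame, compare_frame, wb, hb)
        cnt += 2 * jump_frames                                 # step 106
    cap.release()
    return results                                             # step 107
```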
In the embodiment of the present invention, when the video data is video source material, the cnt-th frame image is obtained from the video source material as this frame image, and the (cnt + jump_frames)-th frame image is obtained as the comparison frame image. Using the feature points matched between the first feature point set in this frame image and the second feature point set in the comparison frame image as associated feature point pair sets, the geometric transformation mode of the associated feature point pair sets can be calculated automatically, and the camera motion mode of this frame image relative to the comparison frame image is finally obtained automatically. Compared with the prior art, a rough-cut editor does not need to browse hundreds of hours of video clips to determine the camera motion mode, which reduces the editor's workload and improves work efficiency. Moreover, the camera motion mode of this frame image relative to the comparison frame image is determined automatically, saving labor cost and time cost.
The following introduces the determining device of the camera motion mode provided by the embodiment of the present invention.
As shown in Fig. 13, Fig. 13 is a first structural schematic diagram of the determining device of the camera motion mode of the embodiment of the present invention. The determining device of the camera motion mode provided by the embodiment of the present invention includes:
a first obtaining module 31, configured to determine a first feature point set from this frame image of the video data;
a second obtaining module 32, configured to determine a second feature point set from the comparison frame image of the video data, the comparison frame image and this frame image belonging to different frames;
a matching module 33, configured to take the feature points matched between the first feature point set and the second feature point set as associated feature point pair sets;
a computing module 34, configured to calculate the geometric transformation mode of the associated feature point pair sets;
a third obtaining module 35, configured to determine the geometric transformation mode as the camera motion mode corresponding to the associated feature point pair sets;
a fourth obtaining module 36, configured to determine one of the camera motion modes as the camera motion mode of this frame image relative to the comparison frame image.
In the embodiment of the present invention, the feature points matched between the first feature point set in this frame image of the video data and the second feature point set in the comparison frame image are used as associated feature point pair sets; the geometric transformation mode of the associated feature point pair sets can be calculated automatically, and the camera motion mode of this frame image relative to the comparison frame image is finally obtained automatically. Compared with the prior art, a rough-cut editor does not need to browse hundreds of hours of video clips to determine the camera motion mode, which reduces the editor's workload and improves work efficiency. Moreover, the camera motion mode of this frame image relative to the comparison frame image is determined automatically, saving labor cost and time cost.
In a possible implementation, the first obtaining module is specifically configured to:
perform uniform block division on this frame image to obtain more than two first blocks;
perform feature extraction on the first blocks to obtain the feature point set in each first block;
take the feature point set in each first block as a first feature point set;
the second obtaining module is specifically configured to:
perform uniform block division on the comparison frame image to obtain the second block corresponding to each first block;
perform feature extraction on the second block corresponding to each first block to obtain the feature point set of that second block;
take the feature point set of the second block corresponding to each first block as a second feature point set;
the matching module is specifically configured to:
match the first feature point set in each first block with the second feature point set in the corresponding second block to obtain the feature points matched between them;
take the feature points matched between the first feature point set in each first block and the second feature point set in the corresponding second block as associated feature point pair sets.
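The block-wise extraction and matching performed by the first and second obtaining modules and the matching module can be sketched as follows. ORB features and brute-force Hamming matching are assumed choices: the patent does not fix a particular feature detector or matcher, only uniform block division and per-block matching.

```python
import cv2
import numpy as np

def split_blocks(image, wb, hb):
    """Uniformly divide an (h, w, 3) image into wb x hb blocks and yield
    each block together with its top-left (x, y) offset."""
    h, w = image.shape[:2]
    bh, bw = h // hb, w // wb
    for j in range(hb):
        for i in range(wb):
            yield (i * bw, j * bh), image[j * bh:(j + 1) * bh, i * bw:(i + 1) * bw]

def match_block_features(this_frame, compare_frame, wb=4, hb=4):
    """For every first block and its corresponding second block, extract
    features and keep the matched point pairs as the associated feature
    point pair set of that block."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    pair_sets = []
    for (off1, b1), (off2, b2) in zip(split_blocks(this_frame, wb, hb),
                                      split_blocks(compare_frame, wb, hb)):
        kp1, des1 = orb.detectAndCompute(cv2.cvtColor(b1, cv2.COLOR_BGR2GRAY), None)
        kp2, des2 = orb.detectAndCompute(cv2.cvtColor(b2, cv2.COLOR_BGR2GRAY), None)
        if des1 is None or des2 is None:
            continue
        matches = matcher.match(des1, des2)
        if not matches:
            continue
        # Coordinates are shifted back into the full-image frame of reference.
        src = np.float32([kp1[m.queryIdx].pt for m in matches]) + off1
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]) + off2
        pair_sets.append((src, dst))
    return pair_sets
```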
In a possible implementation, the computing module is specifically configured to:
calculate, for each first block in the associated feature point pair sets, the affine transformation matrix of the first feature point set in that first block relative to the second feature point set in the corresponding second block;
the third obtaining module is specifically configured to:
obtain the value of a first element in the affine transformation matrix, the value of the first element characterizing whether the focal length or the displacement of the lens has moved;
in the case where the value of the first element is outside a preset focal-length range, determine that the camera motion mode corresponding to the associated feature point pair set is the mode in which the focal length of the lens has moved;
in the case where the value of the first element is greater than all values in the preset focal-length range, the mode in which the focal length of the lens has moved is zoom-up;
in the case where the value of the first element is less than all values in the preset focal-length range, the mode in which the focal length of the lens has moved is zoom-out.
In a possible implementation, the device further includes a fifth obtaining module, configured to determine, in the case where the value of the first element is within the preset focal-length range, that the camera motion mode corresponding to the associated feature point pair set is the mode in which the displacement of the lens has moved.
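A sketch of how the third and fifth obtaining modules could be combined, assuming the "first element" is the scale term at position (0, 0) of the per-block affine matrix and assuming an illustrative preset focal-length range of (0.95, 1.05); neither the index nor the range values are specified in the patent.

```python
def mode_from_affine(matrix, focal_range=(0.95, 1.05)):
    """Classify a legal 2x3 affine matrix into one of the three camera
    motion modes described above, using its first element as the scale cue."""
    first_element = matrix[0][0]
    low, high = focal_range
    if first_element > high:
        return "zoom_up"        # focal length has moved: zoom-up
    if first_element < low:
        return "zoom_out"       # focal length has moved: zoom-out
    return "displacement"       # within the range: the lens has translated
```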
In a possible implementation, the device further includes a first judgment module, configured to judge, before the value of the first element in the affine transformation matrix is obtained, whether the value of each element in the affine transformation matrix meets a preset legal condition, the preset legal condition limiting the value range of each element;
if the value of each element in the affine transformation matrix meets the preset legal condition, the step of obtaining the value of the first element in the affine transformation matrix is executed.
In a possible implementation, the preset camera-motion-mode condition includes: the camera motion mode whose number of occurrences among the determined camera motion modes is greater than a preset number of occurrences.
In a possible implementation, the preset camera-motion-mode condition includes: the camera motion mode with the highest number of occurrences among the determined camera motion modes.
In a possible implementation, the device further includes a second judgment module, configured to judge, before the geometric transformation mode of the associated feature point pair sets is calculated, whether the number of associated feature point pair sets is greater than a preset quantity;
if the number of associated feature point pair sets is greater than the preset quantity, the step of calculating the geometric transformation mode of the associated feature point pair sets is executed.
In a possible implementation, in the case where more than two camera motion modes have been determined, the fourth obtaining module is specifically configured to:
in the case where one of the camera motion modes meets a preset camera-motion-mode condition, determine the camera motion mode that meets the preset camera-motion-mode condition as the camera motion mode of this frame image relative to the comparison frame image.
Referring to Fig. 14, Fig. 14 is a second structural schematic diagram of the determining device of the camera motion mode of the embodiment of the present invention. The determining device of the camera motion mode provided by the embodiment of the present invention further includes a labeling module 37, configured to, after one of the camera motion modes is determined as the camera motion mode of this frame image relative to the comparison frame image,
annotate, in this frame image, a label identifying the camera motion mode.
In the implementation of this embodiment, the images in the video source material are labeled, reducing or replacing the work of the rough-cut editor, saving manual time, accelerating the production schedule of video programs, and reducing the personnel cost brought by primary editing.
Referring to Fig. 15, Fig. 15 is a structural schematic diagram of the electronic device of the embodiment of the present invention. The embodiment of the present invention also provides an electronic device, including a processor 41, a communication interface 42, a memory 43 and a communication bus 44, where the processor 41, the communication interface 42 and the memory 43 communicate with each other through the communication bus 44;
the memory 43 is configured to store a computer program;
the processor 41 is configured to, when executing the program stored in the memory 43, realize the following steps:
determining a first feature point set from this frame image of video data;
determining a second feature point set from a comparison frame image of the video data, the comparison frame image and this frame image belonging to different frames;
taking the feature points matched between the first feature point set and the second feature point set as associated feature point pair sets;
calculating the geometric transformation mode of the associated feature point pair sets;
determining the geometric transformation mode as the camera motion mode corresponding to the associated feature point pair sets; and determining one of the camera motion modes as the camera motion mode of this frame image relative to the comparison frame image.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in the figure, but this does not mean there is only one bus or one type of bus.
The communication interface is used for communication between the above electronic device and other devices.
The memory may include a random access memory (RAM) or a non-volatile memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The method provided by the embodiment of the present invention can be applied to an electronic device. Specifically, the electronic device may be a desktop computer, a portable computer, an intelligent mobile terminal, a server, etc. Without limitation, any electronic device that can implement the present invention falls within the protection scope of the present invention.
In another embodiment provided by the present invention, a computer-readable storage medium is also provided, in which instructions are stored; when the instructions run on a computer, the computer executes the determination method of the camera motion mode described in any of the above embodiments.
In another embodiment provided by the present invention, a computer program product containing instructions is also provided; when it runs on a computer, the computer executes the determination method of the camera motion mode described in any of the above embodiments.
In another embodiment provided by the present invention, an application program is also provided; when it runs on a computer, the computer executes the determination method of the camera motion mode described in any of the above embodiments. In other words, the code corresponding to the determination method of the camera motion mode of the embodiment of the present invention can be packaged as an SDK (Software Development Kit) and later offered as a service that replaces the rough-cut editor in a service-oriented team.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be wholly or partly realized in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to the computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
It should be noted that in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
Each embodiment in this specification is described in a progressive manner; the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, for the embodiments of the device, electronic device, computer-readable storage medium, computer program product, and application program containing instructions, since they are basically similar to the method embodiments, the description is relatively simple, and the relevant parts may refer to the description of the method embodiments.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (19)

1. A determination method of a camera motion mode, characterized in that the method includes:
determining a first feature point set from this frame image of video data;
determining a second feature point set from a comparison frame image of the video data, the comparison frame image and this frame image belonging to different frames;
taking the feature points matched between the first feature point set and the second feature point set as associated feature point pair sets;
calculating the geometric transformation mode of the associated feature point pair sets;
determining the geometric transformation mode as the camera motion mode corresponding to the associated feature point pair sets;
determining one of the camera motion modes as the camera motion mode of this frame image relative to the comparison frame image.
2. The method according to claim 1, characterized in that determining a first feature point set from this frame image of the video data includes:
performing uniform block division on this frame image to obtain more than two first blocks;
performing feature extraction on the first blocks to obtain the feature point set in each first block;
taking the feature point set in each first block as a first feature point set;
determining a second feature point set from the comparison frame image of the video data includes:
performing uniform block division on the comparison frame image to obtain the second block corresponding to each first block;
performing feature extraction on the second block corresponding to each first block to obtain the feature point set of that second block;
taking the feature point set of the second block corresponding to each first block as a second feature point set;
taking the feature points matched between the first feature point set and the second feature point set as associated feature point pair sets includes:
matching the first feature point set in each first block with the second feature point set in the corresponding second block to obtain the feature points matched between them;
taking the feature points matched between the first feature point set in each first block and the second feature point set in the corresponding second block as associated feature point pair sets.
3. The method according to claim 2, characterized in that calculating the geometric transformation mode of the associated feature point pair sets includes:
calculating, for each first block in the associated feature point pair sets, the affine transformation matrix of the first feature point set in that first block relative to the second feature point set in the corresponding second block;
determining the geometric transformation mode as the camera motion mode corresponding to the associated feature point pair sets includes:
obtaining the value of a first element in the affine transformation matrix, the value of the first element characterizing whether the focal length or the displacement of the lens has moved;
in the case where the value of the first element is outside a preset focal-length range, determining that the camera motion mode corresponding to the associated feature point pair set is the mode in which the focal length of the lens has moved;
in the case where the value of the first element is greater than all values in the preset focal-length range, the mode in which the focal length of the lens has moved is zoom-up;
in the case where the value of the first element is less than all values in the preset focal-length range, the mode in which the focal length of the lens has moved is zoom-out.
4. The method according to claim 3, characterized in that the method further includes:
in the case where the value of the first element is within the preset focal-length range, determining that the camera motion mode corresponding to the associated feature point pair set is the mode in which the displacement of the lens has moved.
5. The method according to claim 3 or 4, characterized in that, before obtaining the value of the first element in the affine transformation matrix, the method further includes:
judging whether the value of each element in the affine transformation matrix meets a preset legal condition, the preset legal condition limiting the value range of each element;
if the value of each element in the affine transformation matrix meets the preset legal condition, executing the step of obtaining the value of the first element in the affine transformation matrix.
6. The method according to any one of claims 1 to 4, characterized in that, before calculating the geometric transformation mode of the associated feature point pair sets, the method further includes:
judging whether the number of associated feature point pair sets is greater than a preset quantity;
if the number of associated feature point pair sets is greater than the preset quantity, executing the step of calculating the geometric transformation mode of the associated feature point pair sets.
7. The method according to any one of claims 1 to 4, characterized in that, in the case where more than two camera motion modes have been determined, determining one of the camera motion modes as the camera motion mode of this frame image relative to the comparison frame image includes:
in the case where one of the camera motion modes meets a preset camera-motion-mode condition, determining the camera motion mode that meets the preset camera-motion-mode condition as the camera motion mode of this frame image relative to the comparison frame image.
8. The method according to claim 7, characterized in that the preset camera-motion-mode condition includes: the camera motion mode whose number of occurrences among the determined camera motion modes is greater than a preset number of occurrences.
9. The method according to claim 7, characterized in that the preset camera-motion-mode condition includes: the camera motion mode with the highest number of occurrences among the determined camera motion modes.
10. The method according to any one of claims 1 to 4, characterized in that, after determining one of the camera motion modes as the camera motion mode of this frame image relative to the comparison frame image, the method further includes:
annotating, in this frame image, a label identifying the camera motion mode.
11. A determining device of a camera motion mode, characterized in that the device includes:
a first obtaining module, configured to determine a first feature point set from this frame image of video data;
a second obtaining module, configured to determine a second feature point set from a comparison frame image of the video data, the comparison frame image and this frame image belonging to different frames;
a matching module, configured to take the feature points matched between the first feature point set and the second feature point set as associated feature point pair sets;
a computing module, configured to calculate the geometric transformation mode of the associated feature point pair sets;
a third obtaining module, configured to determine the geometric transformation mode as the camera motion mode corresponding to the associated feature point pair sets;
a fourth obtaining module, configured to determine one of the camera motion modes as the camera motion mode of this frame image relative to the comparison frame image.
12. The device according to claim 11, characterized in that the first obtaining module is specifically configured to:
perform uniform block division on this frame image to obtain more than two first blocks;
perform feature extraction on the first blocks to obtain the feature point set in each first block;
take the feature point set in each first block as a first feature point set;
the second obtaining module is specifically configured to:
perform uniform block division on the comparison frame image to obtain the second block corresponding to each first block;
perform feature extraction on the second block corresponding to each first block to obtain the feature point set of that second block;
take the feature point set of the second block corresponding to each first block as a second feature point set;
the matching module is specifically configured to:
match the first feature point set in each first block with the second feature point set in the corresponding second block to obtain the feature points matched between them;
take the feature points matched between the first feature point set in each first block and the second feature point set in the corresponding second block as associated feature point pair sets.
13. The device according to claim 12, characterized in that the computing module is specifically configured to:
calculate, for each first block in the associated feature point pair sets, the affine transformation matrix of the first feature point set in that first block relative to the second feature point set in the corresponding second block;
the third obtaining module is specifically configured to:
obtain the value of a first element in the affine transformation matrix, the value of the first element characterizing whether the focal length or the displacement of the lens has moved;
in the case where the value of the first element is outside a preset focal-length range, determine that the camera motion mode corresponding to the associated feature point pair set is the mode in which the focal length of the lens has moved;
in the case where the value of the first element is greater than all values in the preset focal-length range, the mode in which the focal length of the lens has moved is zoom-up;
in the case where the value of the first element is less than all values in the preset focal-length range, the mode in which the focal length of the lens has moved is zoom-out.
14. The device according to claim 13, characterized in that the device further includes a fifth obtaining module, configured to determine, in the case where the value of the first element is within the preset focal-length range, that the camera motion mode corresponding to the associated feature point pair set is the mode in which the displacement of the lens has moved.
15. The device according to claim 13 or 14, characterized in that the device further includes a first judgment module, configured to judge, before the value of the first element in the affine transformation matrix is obtained, whether the value of each element in the affine transformation matrix meets a preset legal condition, the preset legal condition limiting the value range of each element;
if the value of each element in the affine transformation matrix meets the preset legal condition, the step of obtaining the value of the first element in the affine transformation matrix is executed.
16. The device according to any one of claims 11 to 14, characterized in that the device further includes a second judgment module, configured to judge, before the geometric transformation mode of the associated feature point pair sets is calculated, whether the number of associated feature point pair sets is greater than a preset quantity;
if the number of associated feature point pair sets is greater than the preset quantity, the step of calculating the geometric transformation mode of the associated feature point pair sets is executed.
17. The device according to any one of claims 11 to 14, characterized in that, in the case where more than two camera motion modes have been determined, the fourth obtaining module is specifically configured to:
in the case where one of the camera motion modes meets a preset camera-motion-mode condition, determine the camera motion mode that meets the preset camera-motion-mode condition as the camera motion mode of this frame image relative to the comparison frame image.
18. The device according to any one of claims 11 to 14, characterized in that the device further includes a labeling module, configured to, after one of the camera motion modes is determined as the camera motion mode of this frame image relative to the comparison frame image,
annotate, in this frame image, a label identifying the camera motion mode.
19. An electronic device, characterized by including a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory is configured to store a computer program;
the processor is configured to, when executing the program stored in the memory, realize the method steps of any one of claims 1 to 10.
GR01 Patent grant