CN103327356A - Video matching method and device

Video matching method and device

Info

Publication number
CN103327356A
Authority
CN
China
Prior art keywords
frame
video
characteristic information
key
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310268664.1A
Other languages
Chinese (zh)
Other versions
CN103327356B (en)
Inventor
孙茂杰 (Sun Maojie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
Original Assignee
TCL Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by TCL Corp
Priority to CN201310268664.1A
Publication of CN103327356A
Application granted
Publication of CN103327356B
Legal status: Expired - Fee Related

Abstract

The invention relates to the field of video technology and provides a video matching method and device. The method comprises: extracting all key frames of a target video and storing the feature information of each target key frame; and comparing the feature information of the target key frames with the pre-generated feature information of the source-video key frames, to obtain the degree of match between the target video and the source video. Because the teaching source video is analyzed in advance, the application software only needs to analyze the learner's target video while the learner practices; analyzing two video streams at once is avoided, cutting CPU computation roughly in half.

Description

Video matching method and device
Technical field
The invention belongs to the field of video technology, and in particular relates to a video matching method and device.
Background art
At present, video-learning applications, such as golf-instruction software, generally play an instructional golf video while capturing, through a camera, video of the learner imitating the movements shown in the instruction video, and then guide the learner by comparing the video frames of the instruction video and of the learner frame by frame. However, directly comparing two streams of video frame data in this way consumes a large amount of central processing unit (CPU) resources; on the entry-level CPUs used in consumer electronics such as television (TV) sets, the comparison is slow, which makes the TV or other consumer device slow to respond.
Summary of the invention
The embodiments of the invention provide a video matching method and device, intended to solve the prior-art problem that frame-by-frame comparison of the instruction video and the learner's video frames consumes a large amount of CPU computation.
In one aspect, a video matching method is provided, the method comprising:
extracting all key frames of a target video, and storing the feature information of each target key frame;
comparing the feature information of the target key frames with the pre-generated feature information of the source-video key frames, to obtain the degree of match between the target video and the source video.
Further, extracting the key frames of the target video specifically comprises:
reading the current video frame of the target video, and taking the current video frame as the first key frame;
obtaining the feature information of the first key frame;
reading the next video frame;
obtaining the feature information of the next video frame;
comparing the feature information of the two video frames to obtain a degree of match;
if the degree of match is below a preset matching threshold, taking the next video frame that was read as the second key frame; otherwise, continuing to read the next video frame until all video frames in the target video have been read.
Further, obtaining the feature information of a video frame comprises:
performing background erasure on a key frame, leaving a human-body model;
scanning the human-body model horizontally and vertically to obtain the rectangular region containing the body;
dividing the rectangular region into n equal parts from top to bottom to form contour lines;
obtaining the feature information of each contour line, and taking the feature information of all the contour lines as the feature information of the key frame.
Further, obtaining the feature information of a video frame comprises:
performing background erasure on a key frame, leaving a human-body model;
using the geometric relationships of the geodesic distances between the vertices of the human-body model to identify the five feature points located at the four limbs and the top of the head;
generating five bone center lines from the five feature points;
determining the positions of the joints from the five bone center lines, and taking the positions of the joints as the feature information of the key frame.
In another aspect, a video matching device is provided, the device comprising:
a feature acquisition unit, configured to extract all key frames of a target video and store the feature information of each target key frame;
a matching-degree acquisition unit, configured to compare the feature information of the target key frames with the pre-generated feature information of the source-video key frames, to obtain the degree of match between the target video and the source video.
Further, the feature acquisition unit comprises:
a video-frame reading module, configured to read the current video frame and the next video frame of the target video, and to take the current video frame as the first key frame;
a feature-information acquisition module, configured to obtain the feature information of the first key frame and of the next video frame;
a matching-degree acquisition module, configured to compare the feature information of the two video frames to obtain a degree of match; if the degree of match is below a preset matching threshold, the next video frame that was read is taken as the second key frame; otherwise, the next video frame continues to be read until all video frames in the target video have been read.
Further, the feature-information acquisition module comprises:
a first background-erasure submodule, configured to perform background erasure on a key frame, leaving a human-body model;
a rectangular-region acquisition submodule, configured to scan the human-body model horizontally and vertically to obtain the rectangular region containing the body;
a contour-division submodule, configured to divide the rectangular region into n equal parts from top to bottom to form contour lines;
a contour-information acquisition submodule, configured to obtain the feature information of each contour line, and to take the feature information of all the contour lines as the feature information of the key frame.
Further, the feature-information acquisition module comprises:
a second background-erasure submodule, configured to perform background erasure on a key frame, leaving a human-body model;
a feature-point acquisition submodule, configured to use the geometric relationships of the geodesic distances between the vertices of the human-body model to identify the five feature points located at the four limbs and the top of the head;
a bone-center-line generation submodule, configured to generate five bone center lines from the five feature points;
a joint-position acquisition submodule, configured to determine the positions of the joints from the five bone center lines, and to take the positions of the joints as the feature information of the key frame.
In the embodiments of the invention, the teaching source video is analyzed in advance, so that while the learner practices, the application software only needs to analyze the learner's target video; analyzing two video streams at once is avoided, cutting CPU computation roughly in half. In addition, matching between the target video and the source video does not compare every pixel of the video frames but only the feature information of the two frames, so matching is faster. Furthermore, a frame interval i can be set for the source video according to the speed of the human motion in it: for a yoga video, for example, the motion is relatively slow, so i can take a relatively large value, extracting feature information from one frame per second or even less often rather than from every frame; within a given accuracy range, this saves feature-extraction time many times over. Finally, not every pair of extracted video frames is compared; instead, the frames in which the human figure changes most are extracted as key frames, and only key frames are compared, which within a given accuracy range likewise saves feature-extraction time many times over.
Brief description of the drawings
Fig. 1 is a flowchart of the video matching method provided by embodiment one of the invention;
Fig. 2 is a flowchart of extracting all key frames of the target video, provided by embodiment one of the invention;
Fig. 3 is a flowchart of one method of extracting the feature information of a key frame, provided by embodiment one of the invention;
Fig. 4 is a flowchart of another method of extracting the feature information of a key frame, provided by embodiment one of the invention;
Fig. 5 is a schematic diagram of the human-body model left after background erasure is performed on a video frame, provided by embodiment one of the invention;
Fig. 6 is a schematic diagram of the rectangular region containing the body, provided by embodiment one of the invention;
Fig. 7 is a schematic diagram of the contour lines formed by dividing the rectangular region containing the body into equal parts from top to bottom, provided by embodiment one of the invention;
Fig. 8 is a structural block diagram of the video matching device provided by embodiment two of the invention.
Detailed description of the embodiments
To make the purpose, technical solution and advantages of the invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
In the embodiments of the invention, the key frames of the target video are extracted first, and the feature information of each target key frame is recorded; the feature information of the target key frames is then compared with the pre-generated feature information of the source-video key frames, to determine how well the target video frames match the source video frames.
The implementation of the invention is described in detail below with reference to specific embodiments:
Embodiment one
Fig. 1 shows the flow of the video matching method provided by embodiment one of the invention, detailed as follows:
In step S101, all key frames of the target video are extracted, and the feature information of each target key frame is stored.
Here, the target video is the video of the learner imitating the movements in the instruction video. As shown in Fig. 2, extracting all key frames of the target video specifically comprises:
Step S201: read the current video frame of the target video, and take the current video frame as the first key frame;
Step S202: obtain the feature information of the first key frame;
Step S203: read the next video frame;
Step S204: obtain the feature information of the next video frame;
Step S205: compare the feature information of the two video frames to obtain a degree of match;
Step S206: if the degree of match is below a preset matching threshold, take the next video frame that was read as the second key frame; otherwise, continue to read the next video frame until all video frames in the target video have been read.
In this embodiment, the first video frame of the target video is taken as a key frame, the next video frame is then read, and the feature information of the two frames is compared to obtain a degree of match (if the feature information of the two frames is identical, the degree of match is 1). If the degree of match is below a preset matching threshold, the human figure is considered to have changed substantially between the two frames, and the next video frame that was read is taken as a key frame; otherwise, the next video frame continues to be read, until all video frames in the target video have been read. It should be noted that video typically runs at 30 frames per second and the frames need not be read one by one: reading one frame per second can be set, in which case the "next frame" refers to the video frame one second later, or reading one frame every two seconds can equally be set; no limit is imposed here. Within a given accuracy range, this method likewise saves feature-extraction time many times over.
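As an illustration only, the sampling-and-thresholding loop described above might look like the following minimal sketch in Python with OpenCV. The helper names extract_features and match_degree are assumptions standing in for whichever of the two feature-extraction methods below is used; the patent itself does not prescribe an implementation.

```python
import cv2

def extract_key_frames(path, extract_features, match_degree,
                       sample_period_s=1.0, match_threshold=0.8):
    """Sample the target video coarsely; keep a frame as a key frame whenever
    its features match the previous key frame below the threshold, i.e. the
    human figure has changed markedly (steps S201-S206)."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(round(fps * sample_period_s)))  # e.g. one frame per second
    key_frames, prev_feat, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break                                     # all frames have been read
        if idx % step == 0:
            feat = extract_features(frame)
            if prev_feat is None or match_degree(prev_feat, feat) < match_threshold:
                key_frames.append((idx / fps, feat))  # record occurrence time + features
                prev_feat = feat
        idx += 1
    cap.release()
    return key_frames
```

Under these assumptions, sampling one frame per second at 30 fps reduces the number of feature extractions roughly thirtyfold, matching the saving described above.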
The feature information of a key frame can be extracted by the following steps, the flow of which is shown in Fig. 3:
Step S211: perform background erasure on a key frame, leaving a human-body model, as shown in Fig. 5.
Step S212: scan horizontally and vertically to obtain the rectangular region containing the body, and compute the width-to-height ratio of the rectangular region.
The rectangular region containing the body is also called the region of interest, shown in Fig. 6. The width-to-height ratio of the region of interest is recorded; later, when comparing with a source video frame, if the width-to-height ratio of the region of interest of the source video frame to be compared falls within a preset error range, the comparison proceeds; otherwise, the target frame and the source frame are asserted not to match. Here, the source video frames are the frames of the instruction video.
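By way of a hedged sketch, the silhouette, its bounding rectangle and the aspect-ratio gate could be obtained as follows; the use of OpenCV's MOG2 background subtractor is an assumption, since the patent does not specify how background erasure is performed, and body_roi and aspect_gate are hypothetical names.

```python
import cv2
import numpy as np

def body_roi(frame, back_sub):
    """Erase the background and bound the remaining human silhouette."""
    mask = back_sub.apply(frame)                      # foreground (body) mask
    mask = cv2.medianBlur(mask, 5)                    # suppress speckle noise
    ys, xs = np.nonzero(mask > 127)
    if xs.size == 0:
        return None, None                             # no body found in this frame
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    roi = mask[y0:y1 + 1, x0:x1 + 1] > 127            # boolean silhouette, cropped
    aspect = (x1 - x0 + 1) / (y1 - y0 + 1)            # width-to-height ratio
    return roi, aspect

def aspect_gate(aspect_target, aspect_source, tol=0.1):
    """Skip the full comparison when the region-of-interest proportions
    already disagree beyond the preset error range."""
    return abs(aspect_target - aspect_source) <= tol

# usage: back_sub = cv2.createBackgroundSubtractorMOG2()
```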
Step S213: divide the region of interest into n equal parts from top to bottom to form contour lines.
Step S214: obtain the feature information of each contour line, and take the feature information of all the contour lines as the feature information of the key frame.
As shown in Fig. 7, with n = 8, each contour line is cut by the silhouette into alternating black (background) and white (body) segments. The proportions of the segments into which each contour line is cut are recorded, along with the colour of the first segment (0 denotes background black, 1 denotes body white); the information of the highest contour line in the figure, for example, might be (0, 0.5, 0.2, 0.3). The feature information of each key frame can be obtained in this way. In addition to the contour-line information of each key frame, the feature information of the key frames also includes the number of key frames and the time at which each key frame occurs.
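The run-length encoding of the scan lines can be sketched as follows; this is a minimal illustration under the assumption that the silhouette arrives as the boolean roi array from the previous sketch, and contour_features is a hypothetical name.

```python
import numpy as np

def contour_features(roi, n=8):
    """Encode each of n horizontal scan lines as (first colour, segment
    proportions), where 0 = background black and 1 = body white."""
    h, w = roi.shape
    feats = []
    for k in range(n):
        row = roi[min(h - 1, int((k + 0.5) * h / n))]     # mid-line of band k
        # indices where the line flips between background and body
        changes = np.flatnonzero(np.diff(row.astype(np.int8))) + 1
        bounds = np.concatenate(([0], changes, [w]))
        lengths = np.diff(bounds) / w                     # proportions, summing to 1
        feats.append((int(row[0]), lengths))
    return feats

# e.g. the top line of Fig. 7 might yield (0, [0.5, 0.2, 0.3])
```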
In addition, the feature information of a key frame can also be obtained by the following steps, the flow of which is shown in Fig. 4:
Step S221: perform background erasure on a key frame, leaving a human-body model, as shown in Fig. 5.
Step S222: use the geometric relationships of the geodesic distances between the vertices of the human-body model to identify the five feature points located at the four limbs and the top of the head.
In this embodiment, an arbitrary point is taken on the human-body model, and the vertex v at the maximum geodesic distance from it is found; v is taken as one of the end feature points and added to the end feature point set V. The point on the human-body model with the maximum sum of geodesic distances to the points in V is then searched for and taken as a new end feature point, until the geodesic distance between a new end feature point and the end feature points already in V is less than a preset threshold, which is an empirical value. Here, the geodesic distance is the length of the shortest path connecting two points (or two sets) over the surface; the feature points in the end feature point set V computed by this method are the five feature points to be identified.
Because the human-body model is symmetric, the distances from the head end point to the two upper-limb end points are equal, as are the distances to the two lower-limb end points. The embodiments of the invention can exploit this property to automatically pick out the head end feature point Vhead from the five end feature points.
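The greedy farthest-point search just described might be sketched as follows, assuming the human-body model is available as a vertex array and an edge list, and approximating geodesic distances by shortest paths over the mesh graph (an assumption; the patent does not specify how the geodesics are computed). end_feature_points is a hypothetical name.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def end_feature_points(vertices, edges, stop_threshold):
    """Greedy farthest-point search under (approximate) geodesic distance:
    repeatedly add the vertex farthest, in summed distance, from the set V."""
    n = len(vertices)
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    graph = csr_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))

    def geo(src):
        # shortest-path distances over the mesh graph, treated as undirected
        return dijkstra(graph, directed=False, indices=src)

    V = [int(np.argmax(geo(0)))]      # vertex farthest from an arbitrary start point
    sums = geo(V[0])
    while len(V) < 5:
        cand = int(np.argmax(sums))   # farthest, in summed distance, from V
        if geo(cand)[V].min() < stop_threshold:
            break                     # too close to an existing end point: stop
        V.append(cand)
        sums = sums + geo(cand)
    return V                          # indices of the limb tips and the crown
```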
Step S223: generate five bone center lines from the five feature points.
In this embodiment, taking the head end feature point Vhead as the starting point, the geodesic-distance level curves of N levels are determined in turn, with successive levels differing by a distance d. Once the level spacing d is fixed, the geodesic-distance level curves of all N levels over the whole human-body model can be obtained, starting from Vhead. As the number of levels increases, when a level first contains three geodesic-distance level curves (the closed curves formed by the geodesic level curves are small ellipses), the two closed curves with the smaller perimeters can be determined to be the boundary between the arms and the trunk; when a level contains four geodesic-distance level curves, the two closed curves with the larger perimeters can be determined to be the boundary between the legs and the trunk. The four limbs and the trunk can be distinguished accordingly, as sketched below.
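Counting the closed level curves at each level can be approximated by counting connected components of the vertex band between consecutive levels; the sketch below continues the assumed mesh-graph representation from the previous sketch, and level_curve_count is a hypothetical name.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, dijkstra

def level_curve_count(graph, head_index, level, d):
    """Approximate the number of closed geodesic level curves at a given level
    by counting connected components of the band of vertices whose geodesic
    distance from the head point lies in [level*d, (level+1)*d)."""
    dist = dijkstra(graph, directed=False, indices=head_index)
    band = np.flatnonzero((dist >= level * d) & (dist < (level + 1) * d))
    if band.size == 0:
        return 0
    count, _ = connected_components(graph[band][:, band], directed=False)
    return count  # 3 -> arms split off the trunk; 4 -> legs split off
```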
After the extent of the limbs has been determined, geodesic-distance level curves are computed separately for each limb, starting from the limb's end feature point and incrementing by d per level. The cross-sections of the geodesic level curves on the limbs are close to perpendicular to the limbs, so their profiles better match cross-sections in the medical sense; where the bone center line bends little, the near-circularity of the cross-section can therefore be used to determine joint positions more accurately. Finally, connecting the centers of adjacent geodesic level curves generates the five bone center lines, after which the positions of the joints can be determined on the center lines.
Step S224: determine the positions of the joints from the five bone center lines, and take the positions of the joints as the feature information of the key frame.
In this embodiment, once the bone center lines have been determined, the positions of the joints can be found by locating the minima of the angles between segments of the center line, and the positions of the joints are taken as the feature information of the video frame. Suppose the center of the i-th geodesic-distance level curve is C_i; the angle between the segment C_(i-t)C_i and the segment C_i C_(i+t) is the level-curve center angle. The size of t is determined by the number of level-curve levels into which the bone center line is divided; in general, the more levels, the larger t. The role of t is to reduce the influence of local fluctuations in the data on the angle calculation.
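A hedged sketch of the bend-angle test follows; the threshold value and the convention of reporting joints wherever the angle dips well below a straight 180 degrees are assumptions for illustration, and joint_positions is a hypothetical name.

```python
import numpy as np

def joint_positions(centers, t=3, angle_threshold_deg=150.0):
    """Report joints at the centers C_i where the angle between segments
    C_(i-t)C_i and C_i C_(i+t) drops below a threshold (a sharp bend)."""
    centers = np.asarray(centers, dtype=float)
    joints = []
    for i in range(t, len(centers) - t):
        a = centers[i - t] - centers[i]
        b = centers[i + t] - centers[i]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_threshold_deg:   # a straight center line gives ~180 deg
            joints.append(i)
    return centers[joints] if joints else np.empty((0, centers.shape[1]))
```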
In step S102, the feature information of the target key frames is compared with the pre-generated feature information of the source-video key frames, to obtain the degree of match between the target video and the source video.
In this embodiment, the feature information of the source-video key frames is extracted by the same method as the feature information of the target-video key frames. The feature information of corresponding key frames is compared; when the difference between the two exceeds a preset difference threshold, the learner's movement is inconsistent with the movement in the instruction video. In addition, a corresponding learning score can be given to the learner according to the difference value.
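For concreteness, a minimal comparison-and-scoring sketch is given below; the L1 gap between contour-line proportions and the 100-point scale are assumptions, since the patent fixes neither a distance metric nor a grading scheme, and match_degree and score_learner are hypothetical names.

```python
import numpy as np

def match_degree(feat_a, feat_b):
    """Compare two key frames' contour-line features; 1.0 means identical."""
    diffs = []
    for (c0, p0), (c1, p1) in zip(feat_a, feat_b):
        if c0 != c1 or len(p0) != len(p1):
            diffs.append(1.0)                          # structurally different lines
        else:
            diffs.append(0.5 * np.abs(p0 - p1).sum())  # L1 gap in proportions, in [0, 1]
    return 1.0 - float(np.mean(diffs))

def score_learner(target_keys, source_keys, diff_threshold=0.3):
    """Pair target and source key frames in time order and grade the learner
    by how many pairs differ beyond the preset difference threshold."""
    degrees = [match_degree(a, b)
               for (_, a), (_, b) in zip(target_keys, source_keys)]
    mismatches = sum(1 for deg in degrees if (1.0 - deg) > diff_threshold)
    return 100.0 * (1.0 - mismatches / max(1, len(degrees)))
```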
In summary, this embodiment analyzes the teaching source video in advance, so that while the learner practices, the application software only needs to analyze the learner's target video; analyzing two video streams at once is avoided, cutting CPU computation roughly in half. In addition, matching between the target video and the source video does not compare every pixel of the video frames but only the feature information of the two frames, so matching is faster. Furthermore, a frame interval i can be set for the source video according to the speed of the human motion in it: for a yoga video, for example, the motion is relatively slow, so i can take a relatively large value, extracting feature information from one frame per second or even less often rather than from every frame; within a given accuracy range, this saves feature-extraction time many times over. Finally, in this embodiment not every pair of extracted video frames is compared; instead, the frames in which the human figure changes most are detected from the changes in the frames' feature information and extracted as key frames, and only key frames are compared, which within a given accuracy range likewise saves feature-extraction time many times over.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods in the embodiments above can be carried out by instructing the relevant hardware through a program, and the corresponding program can be stored in a computer-readable storage medium such as a ROM/RAM, magnetic disk or optical disc.
Embodiment two
Fig. 8 shows a structural block diagram of the video matching device provided by embodiment two of the invention; for convenience of explanation, only the parts relevant to the embodiment are shown. The video matching device can be a software unit, a hardware unit, or a unit combining software and hardware built into a computer, television set or mobile terminal, and comprises: a feature acquisition unit 51 and a matching-degree acquisition unit 52.
The feature acquisition unit 51 is configured to extract all key frames of the target video and store the feature information of each target key frame;
the matching-degree acquisition unit 52 is configured to compare the feature information of each target key frame stored by the feature acquisition unit 51 with the pre-generated feature information of the source-video key frames, to obtain the degree of match between the target video and the source video.
Specifically, the feature acquisition unit 51 comprises:
a video-frame reading module, configured to read the current video frame and the next video frame of the target video, and to take the current video frame as the first key frame;
a feature-information acquisition module, configured to obtain the feature information of the current video frame and of the next video frame read by the video-frame reading module;
a matching-degree acquisition module, configured to compare the feature information of the two video frames obtained by the feature-information acquisition module to obtain a degree of match; if the degree of match is below a preset matching threshold, the next video frame that was read is taken as the second key frame; otherwise, the next video frame continues to be read.
In this embodiment, the teaching source video is analyzed in advance through the feature acquisition unit 51 and the matching-degree acquisition unit 52; while the learner practices, the application software only needs to analyze the learner's target video, avoiding simultaneous analysis of two video streams and cutting CPU computation roughly in half.
Specifically, as one implementation, the feature-information acquisition module comprises:
a first background-erasure submodule, configured to perform background erasure on a key frame, leaving a human-body model;
a rectangular-region acquisition submodule, configured to scan horizontally and vertically the human-body model left by the first background-erasure submodule, to obtain the rectangular region containing the body;
a contour-division submodule, configured to divide the rectangular region obtained by the rectangular-region acquisition submodule into n equal parts from top to bottom to form contour lines;
a contour-information acquisition submodule, configured to obtain the feature information of each contour line formed by the contour-division submodule, and to take the feature information of all the contour lines as the feature information of the key frame.
In this implementation, the feature-information acquisition module matches video frames according to the contour-line feature information of the key frames it obtains; while the learner practices, the application software only needs to analyze the contour-line feature information of the key frames of the learner's target video, avoiding simultaneous analysis of two video streams and cutting CPU computation roughly in half.
As another implementation, the feature-information acquisition module comprises:
a second background-erasure submodule, configured to perform background erasure on a key frame, leaving a human-body model;
a feature-point acquisition submodule, configured to use the geometric relationships of the geodesic distances between the vertices of the human-body model left by the second background-erasure submodule to identify the five feature points located at the four limbs and the top of the head;
a bone-center-line generation submodule, configured to generate five bone center lines from the five feature points identified by the feature-point acquisition submodule;
a joint-position acquisition submodule, configured to determine the positions of the joints from the five bone center lines generated by the bone-center-line generation submodule, and to take the positions of the joints as the feature information of the key frame.
In this implementation, the feature-information acquisition module determines the positions of the joints of the body from the five feature points identified in the key frame; while the learner practices, the application software only needs to analyze the feature information of the key frames of the learner's target video, avoiding simultaneous analysis of two video streams and cutting CPU computation roughly in half.
This embodiment analyzes the teaching source video in advance, so that while the learner practices, the application software only needs to analyze the learner's target video; analyzing two video streams at once is avoided, cutting CPU computation roughly in half. In addition, matching between the target video and the source video does not compare every pixel of the video frames but only the feature information of the two frames, so matching is faster. Furthermore, a frame interval i can be set for the source video according to the speed of the human motion in it: for a yoga video, for example, the motion is relatively slow, so i can take a relatively large value, extracting feature information from one frame per second or even less often rather than from every frame; within a given accuracy range, this saves feature-extraction time many times over. Finally, in this embodiment not every pair of extracted video frames is compared; instead, the frames in which the human figure changes most are detected from the changes in the frames' feature information and extracted as key frames, and only key frames are compared, which within a given accuracy range likewise saves feature-extraction time many times over.
The video matching device provided by the embodiments of the invention can be applied in the corresponding method embodiment one above; for details, refer to the description of embodiment one, which is not repeated here.
It should be noted that, in the system embodiments above, the units included are divided merely according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; moreover, the specific names of the functional units serve only to distinguish them from one another and do not limit the protection scope of the invention.
The above are only preferred embodiments of the invention and are not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Claims (8)

1. A video matching method, characterized in that the method comprises:
extracting all key frames of a target video, and storing the feature information of each target key frame;
comparing the feature information of the target key frames with the pre-generated feature information of the source-video key frames, to obtain the degree of match between the target video and the source video.
2. The method of claim 1, characterized in that extracting all key frames of the target video specifically comprises:
reading the current video frame of the target video, and taking the current video frame as the first key frame;
obtaining the feature information of the first key frame;
reading the next video frame;
obtaining the feature information of the next video frame;
comparing the feature information of the two video frames to obtain a degree of match;
if the degree of match is below a preset matching threshold, taking the next video frame that was read as the second key frame; otherwise, continuing to read the next video frame until all video frames in the target video have been read.
3. The method of claim 2, characterized in that obtaining the feature information of a key frame comprises:
performing background erasure on a key frame, leaving a human-body model;
scanning the human-body model horizontally and vertically to obtain the rectangular region containing the body;
dividing the rectangular region into n equal parts from top to bottom to form contour lines;
obtaining the feature information of each contour line, and taking the feature information of all the contour lines as the feature information of the key frame.
4. The method of claim 2, characterized in that obtaining the feature information of a key frame comprises:
performing background erasure on a key frame, leaving a human-body model;
using the geometric relationships of the geodesic distances between the vertices of the human-body model to identify the five feature points located at the four limbs and the top of the head;
generating five bone center lines from the five feature points;
determining the positions of the joints from the five bone center lines, and taking the positions of the joints as the feature information of the key frame.
5. A video matching device, characterized in that the device comprises:
a feature acquisition unit, configured to extract all key frames of a target video and store the feature information of each target key frame;
a matching-degree acquisition unit, configured to compare the feature information of the target key frames with the pre-generated feature information of the source-video key frames, to obtain the degree of match between the target video and the source video.
6. The device of claim 5, characterized in that the feature acquisition unit comprises:
a video-frame reading module, configured to read the current video frame and the next video frame of the target video, and to take the current video frame as the first key frame;
a feature-information acquisition module, configured to obtain the feature information of the first key frame and of the next video frame;
a matching-degree acquisition module, configured to compare the feature information of the two video frames to obtain a degree of match; if the degree of match is below a preset matching threshold, the next video frame that was read is taken as the second key frame; otherwise, the next video frame continues to be read until all video frames in the target video have been read.
7. The device of claim 6, characterized in that the feature-information acquisition module comprises:
a first background-erasure submodule, configured to perform background erasure on a key frame, leaving a human-body model;
a rectangular-region acquisition submodule, configured to scan the human-body model horizontally and vertically to obtain the rectangular region containing the body;
a contour-division submodule, configured to divide the rectangular region into n equal parts from top to bottom to form contour lines;
a contour-information acquisition submodule, configured to obtain the feature information of each contour line, and to take the feature information of all the contour lines as the feature information of the key frame.
8. The device of claim 6, characterized in that the feature-information acquisition module comprises:
a second background-erasure submodule, configured to perform background erasure on a key frame, leaving a human-body model;
a feature-point acquisition submodule, configured to use the geometric relationships of the geodesic distances between the vertices of the human-body model to identify the five feature points located at the four limbs and the top of the head;
a bone-center-line generation submodule, configured to generate five bone center lines from the five feature points;
a joint-position acquisition submodule, configured to determine the positions of the joints from the five bone center lines, and to take the positions of the joints as the feature information of the key frame.
CN201310268664.1A 2013-06-28 2013-06-28 Video matching method and device Expired - Fee Related CN103327356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310268664.1A CN103327356B (en) 2013-06-28 2013-06-28 Video matching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310268664.1A CN103327356B (en) 2013-06-28 2013-06-28 Video matching method and device

Publications (2)

Publication Number Publication Date
CN103327356A true CN103327356A (en) 2013-09-25
CN103327356B CN103327356B (en) 2016-02-24

Family

ID=49195846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310268664.1A Expired - Fee Related CN103327356B (en) 2013-06-28 2013-06-28 A kind of video matching method, device

Country Status (1)

Country Link
CN (1) CN103327356B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101394522A (en) * 2007-09-19 2009-03-25 中国科学院计算技术研究所 Detection method and system for video copy
US20100036781A1 (en) * 2008-08-07 2010-02-11 Electronics And Telecommunications Research Institute Apparatus and method providing retrieval of illegal motion picture data
CN101374234A (en) * 2008-09-25 2009-02-25 清华大学 Method and apparatus for monitoring video copy base on content
US20120177296A1 (en) * 2011-01-07 2012-07-12 Alcatel-Lucent Usa Inc. Method and apparatus for comparing videos

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104038848A (en) * 2014-05-30 2014-09-10 无锡天脉聚源传媒科技有限公司 Video processing method and video processing device
WO2016107226A1 (en) * 2014-12-29 2016-07-07 深圳Tcl数字技术有限公司 Image processing method and apparatus
CN105809653A (en) * 2014-12-29 2016-07-27 深圳Tcl数字技术有限公司 Image processing method and device
CN105809653B (en) * 2014-12-29 2019-01-01 深圳Tcl数字技术有限公司 Image processing method and device
CN109801193A (en) * 2017-11-17 2019-05-24 深圳市鹰硕音频科技有限公司 It is a kind of to follow tutoring system with Speech Assessment function
CN109801193B (en) * 2017-11-17 2020-09-15 深圳市鹰硕教育服务股份有限公司 Follow-up teaching system with voice evaluation function
CN113678137A (en) * 2019-08-18 2021-11-19 聚好看科技股份有限公司 Display device
CN113678137B (en) * 2019-08-18 2024-03-12 聚好看科技股份有限公司 Display apparatus
CN113537162A (en) * 2021-09-15 2021-10-22 北京拓课网络科技有限公司 Video processing method and device and electronic equipment
CN115979350A (en) * 2023-03-20 2023-04-18 北京航天华腾科技有限公司 Data acquisition system of ocean monitoring equipment

Also Published As

Publication number Publication date
CN103327356B (en) 2016-02-24

Similar Documents

Publication Publication Date Title
Zhou et al. Matnet: Motion-attentive transition network for zero-shot video object segmentation
CN103327356A (en) Video matching method and device
CN108121986B (en) Object detection method and device, computer device and computer readable storage medium
Kliper-Gross et al. Motion interchange patterns for action recognition in unconstrained videos
CN102207950B (en) Electronic installation and image processing method
CN111310731A (en) Video recommendation method, device and equipment based on artificial intelligence and storage medium
WO2017092679A1 (en) Eyeball tracking method and apparatus, and device
CN112784763B (en) Expression recognition method and system based on local and overall feature adaptive fusion
CN104715256A (en) Auxiliary calligraphy exercising system and evaluation method based on image method
CN106874826A (en) Face key point-tracking method and device
CN106469298A (en) Age recognition methodss based on facial image and device
CN103198292A (en) Face feature vector construction
CN100561505C (en) A kind of image detecting method and device
CN103729614A (en) People recognition method and device based on video images
CN107766864B (en) Method and device for extracting features and method and device for object recognition
CN106407978B (en) Method for detecting salient object in unconstrained video by combining similarity degree
CN112949440A (en) Method for extracting gait features of pedestrian, gait recognition method and system
CN106156777A (en) Textual image detection method and device
Li et al. Primary video object segmentation via complementary cnns and neighborhood reversible flow
CN110852257A (en) Method and device for detecting key points of human face and storage medium
CN112419326B (en) Image segmentation data processing method, device, equipment and storage medium
KR20190080388A (en) Photo Horizon Correction Method based on convolutional neural network and residual network structure
Yu et al. Key point detection by max pooling for tracking
CN114565755B (en) Image segmentation method, device, equipment and storage medium
CN111311602A (en) Lip image segmentation device and method for traditional Chinese medicine facial diagnosis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160224