CN103327356B - Video matching method and device - Google Patents

Video matching method and device

Info

Publication number
CN103327356B
CN103327356B CN201310268664.1A CN103327356A
Authority
CN
China
Prior art keywords
frame
video
characteristic information
key
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310268664.1A
Other languages
Chinese (zh)
Other versions
CN103327356A (en)
Inventor
孙茂杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
Original Assignee
TCL Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Corp filed Critical TCL Corp
Priority to CN201310268664.1A priority Critical patent/CN103327356B/en
Publication of CN103327356A publication Critical patent/CN103327356A/en
Application granted granted Critical
Publication of CN103327356B publication Critical patent/CN103327356B/en

Abstract

The present invention is applicable to the field of video technology and provides a video matching method and device. The method comprises: extracting all key frames of a target video, and saving the characteristic information of each target key frame; comparing the characteristic information of the target key frames with the characteristic information of pre-generated source video key frames, to obtain the matching degree between the target video and the source video. By analyzing the teaching source video in advance, the application software only needs to analyze the learner's target video during learning, which avoids analyzing two video streams at the same time and cuts CPU computation roughly in half.

Description

Video matching method and device
Technical field
The invention belongs to the field of video technology, and in particular relates to a video matching method and device.
Background technology
At present, video learning applications developed by research staff, such as golf training software, generally play an instructional golf video while a camera simultaneously captures the video of a learner imitating the actions in the instructional film; the learner is then guided by comparing the video frame images of the instructional film and of the learner frame by frame. However, such direct comparison of two streams of video frame data consumes a large amount of central processing unit (CPU) resources. On the entry-level CPUs used in consumer electronic devices such as television (TV) sets, the comparison runs slowly, making the TV or other consumer device respond sluggishly.
Summary of the invention
Embodiments of the present invention provide a video matching method and device, intended to solve the prior-art problem that comparing the video frame images of the instructional film and of the learner frame by frame consumes a large amount of CPU computation.
In one aspect, a video matching method is provided, the method comprising:
extracting all key frames of a target video, and saving the characteristic information of each target key frame;
comparing the characteristic information of the target key frames with the characteristic information of pre-generated source video key frames, to obtain the matching degree between the target video and the source video.
Further, extracting the key frames of the target video specifically comprises:
reading the current video frame of the target video, and taking the current video frame as the first key frame;
obtaining the characteristic information of the first key frame;
reading the next video frame;
obtaining the characteristic information of the next video frame;
comparing the characteristic information of the two video frames to obtain a matching degree;
if the matching degree is lower than a preset matching threshold, taking the next video frame just read as the second key frame; otherwise, continuing to read the next video frame, until all video frames in the target video have been read.
Further, obtaining the characteristic information of a video frame comprises:
performing background erasure on a key frame, leaving the human body model;
scanning the human body model horizontally and vertically to obtain the rectangular area where the human body is located;
dividing the rectangular area into n equal parts from top to bottom to form contour lines;
obtaining the characteristic information of each contour line, and taking the characteristic information of all the contour lines as the characteristic information of this key frame.
Further, obtaining the characteristic information of a video frame comprises:
performing background erasure on a key frame, leaving the human body model;
using the geometric relationship of geodesic distances between the vertices of the human body model to identify the 5 feature points of the human body located at the four limbs and the crown of the head;
generating 5 bone center lines from the 5 feature points;
determining the positions of the joints from the 5 bone center lines, and taking the position of each joint as the characteristic information of this key frame.
In another aspect, a video matching device is provided, the device comprising:
a characteristic acquisition unit, configured to extract all key frames of the target video and save the characteristic information of each target key frame;
a matching degree acquiring unit, configured to compare the characteristic information of the target key frames with the characteristic information of pre-generated source video key frames, to obtain the matching degree between the target video and the source video.
Further, the characteristic acquisition unit comprises:
a video frame reading module, configured to read the current video frame and the next video frame of the target video, and take the current video frame as the first key frame;
a characteristic information acquisition module, configured to obtain the characteristic information of the first key frame and of the next video frame;
a matching degree acquisition module, configured to compare the characteristic information of the two video frames to obtain a matching degree; if the matching degree is lower than a preset matching threshold, take the next video frame just read as the second key frame; otherwise continue to read the next video frame, until all video frames in the target video have been read.
Further, the characteristic information acquisition module comprises:
a first background erasure submodule, configured to perform background erasure on a key frame, leaving the human body model;
a rectangular area acquisition submodule, configured to scan the human body model horizontally and vertically to obtain the rectangular area where the human body is located;
a contour division submodule, configured to divide the rectangular area into n equal parts from top to bottom to form contour lines;
a contour information acquisition submodule, configured to obtain the characteristic information of each contour line, and take the characteristic information of all the contour lines as the characteristic information of this key frame.
Further, the characteristic information acquisition module comprises:
a second background erasure submodule, configured to perform background erasure on a key frame, leaving the human body model;
a feature point acquisition submodule, configured to use the geometric relationship of geodesic distances between the vertices of the human body model to identify the 5 feature points of the human body located at the four limbs and the crown of the head;
a bone center line generation submodule, configured to generate 5 bone center lines from the 5 feature points;
a joint position acquisition submodule, configured to determine the positions of the joints from the 5 bone center lines, and take the position of each joint as the characteristic information of this key frame.
In the embodiments of the present invention, the teaching source video is analyzed in advance, so that during learning the application software only needs to analyze the learner's target video; this avoids analyzing two video streams at the same time and cuts CPU computation roughly in half. In addition, the matching between the target video and the source video does not compare every pixel in the video frames but only the characteristic information of the two frames, so matching is faster. Further, for the source video a frame interval i is set according to the speed of the human motion in it; for a yoga video, for example, the motion is relatively slow, so i can take a relatively large value, sampling one frame per second or even less often for feature extraction instead of extracting features from every frame, which, within a certain accuracy range, saves feature-extraction time many times over. Further, not every pair of extracted video frames is compared: the frames in which the human figure changes the most are extracted as key frames, and only the key frames are compared, which, within a certain accuracy range, also saves feature-extraction time many times over.
Brief description of the drawings
Fig. 1 is a flowchart of the video matching method provided by Embodiment One of the present invention;
Fig. 2 is a flowchart of extracting all key frames of the target video, provided by Embodiment One;
Fig. 3 is a flowchart of one method of extracting the characteristic information of a key frame, provided by Embodiment One;
Fig. 4 is a flowchart of another method of extracting the characteristic information of a key frame, provided by Embodiment One;
Fig. 5 is a schematic diagram of the human body model left after background erasure is performed on a video frame, provided by Embodiment One;
Fig. 6 is a schematic diagram of the rectangular area where the human body is located, provided by Embodiment One;
Fig. 7 is a schematic diagram of the contour lines obtained by dividing the rectangular area where the human body is located into equal parts from top to bottom, provided by Embodiment One;
Fig. 8 is a structural block diagram of the video matching device provided by Embodiment Two of the present invention.
Detailed description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and are not intended to limit it.
In the embodiments of the present invention, the key frames of the target video are first extracted and the characteristic information of each target key frame is recorded; the characteristic information of the target key frames is then compared with the characteristic information of pre-generated source video key frames, to obtain the match between the target video frames and the source video frames.
The implementation of the present invention is described in detail below in conjunction with specific embodiments:
Embodiment One
Fig. 1 shows the implementation flow of the video matching method provided by Embodiment One of the present invention, detailed as follows:
In step S101, all key frames of the target video are extracted, and the characteristic information of each target key frame is saved.
Here, the target video is the video of the learner imitating the actions in the instructional film; the flow of extracting all key frames of the target video is shown in Figure 2 and specifically comprises:
Step S201, reading the current video frame of the target video, and taking the current video frame as the first key frame;
Step S202, obtaining the characteristic information of the first key frame;
Step S203, reading the next video frame;
Step S204, obtaining the characteristic information of the next video frame;
Step S205, comparing the characteristic information of the two video frames to obtain a matching degree;
Step S206, if the matching degree is lower than a preset matching threshold, taking the next video frame just read as the second key frame; otherwise, continuing to read the next video frame, until all video frames in the target video have been read.
In the present embodiment, the first video frame appearing in the target video is taken as a key frame; the next video frame is then read and the characteristic information in the two frames is compared to obtain a matching degree (if the characteristic information of the two frames is identical, the matching degree is 1). If the matching degree is lower than the preset matching threshold, the human figure in the two frames is considered to have changed significantly, and the video frame just read is taken as a key frame; otherwise reading continues with the next video frame, until all video frames in the target video have been read. It should be noted that a video typically has 30 frames per second and the frames do not need to be read one by one: reading one frame per second can be set, in which case the "next frame" refers to the video frame of the next second; of course, reading one frame every two seconds can also be set, which is not limited here. Within a certain accuracy range, this method also saves feature-extraction time many times over.
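To make the selection loop above concrete, the following is a minimal Python sketch (an illustration, not the patent's implementation) of steps S201 to S206, sampling one frame per second. The grayscale-histogram feature and the match_degree definition are stand-in assumptions; the patent's contour-line or joint-position extractors, described below, would take their place.

import cv2
import numpy as np

def compute_features(frame):
    # Stand-in feature: normalised grayscale histogram (the patent's two
    # extractors -- contour-line ratios or joint positions -- would go here).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [32], [0, 256]).ravel()
    return hist / (hist.sum() + 1e-12)

def match_degree(f1, f2):
    # 1.0 when the two feature vectors are identical, lower as they diverge.
    return 1.0 - 0.5 * float(np.abs(f1 - f2).sum())

def extract_key_frames(path, match_threshold=0.8, sample_interval_s=1.0):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * sample_interval_s))  # e.g. read one frame per second
    key_frames, last_feats, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # all video frames have been read
        if idx % step == 0:
            feats = compute_features(frame)
            # keep the frame only when it no longer matches the last key frame
            if last_feats is None or match_degree(feats, last_feats) < match_threshold:
                key_frames.append((idx / fps, feats))  # time of appearance + features
                last_feats = feats
        idx += 1
    cap.release()
    return key_frames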
Here, the extraction of the characteristic information of a key frame can be realized by the following steps, with the specific flow shown in Figure 3:
Step S211, performing background erasure on a key frame, leaving the human body model, as shown in Figure 5.
Step S212, scanning horizontally and vertically to obtain the rectangular area where the human body is located, and calculating the width-to-height ratio of the rectangular area.
The rectangular area where the human body is located is also called the region of interest, as shown in Figure 6. The width-to-height ratio of the region of interest is recorded; later, when comparing with a source video frame, if the width-to-height ratio of the region of interest of the source video frame to be compared falls within a preset error range, the comparison proceeds; otherwise the target video frame is asserted not to match the source video frame. Here, a source video frame is a video frame of the instructional film.
Step S213, dividing the region of interest into n equal parts from top to bottom to form contour lines.
Step S214, obtaining the characteristic information of each contour line, and taking the characteristic information of all the contour lines as the characteristic information of this key frame.
As shown in Figure 7, n=8. Each contour line is cut into alternating black (background) and white (human body) segments. The proportions of the segments into which each contour line is cut are recorded, noting the colour of the first segment (0 denotes background black, 1 denotes human-body white); for example, the information of the topmost contour line in the figure might be (0, 0.5, 0.2, 0.3). The characteristic information of each key frame can thus be obtained. Besides the contour line information of each key frame, the characteristic information also includes the number of key frames and the time at which each key frame appears.
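The following Python sketch illustrates steps S211 to S214 under the assumption that background erasure has already produced a binary foreground mask; the function name and the two-decimal rounding are illustrative choices, not specified by the patent.

import numpy as np

def contour_features(mask, n=8):
    """mask: 2-D boolean array, True where the human body is.
    Returns the width-to-height ratio of the region of interest and,
    per contour line, the colour of the first segment (0 = background,
    1 = body) followed by the segment-length ratios, e.g. (0, 0.5, 0.2, 0.3)."""
    ys, xs = np.nonzero(mask)
    top, bottom = ys.min(), ys.max()   # rectangular area where the body is located
    left, right = xs.min(), xs.max()
    aspect_ratio = (right - left + 1) / (bottom - top + 1)

    features = []
    for k in range(n):
        y = top + (bottom - top) * k // max(n - 1, 1)  # n contour lines, top to bottom
        row = mask[y, left:right + 1]
        # run-length encode the row into alternating background/body segments
        changes = np.flatnonzero(np.diff(row.astype(np.int8))) + 1
        runs = np.diff(np.concatenate(([0], changes, [row.size])))
        features.append((int(row[0]), *np.round(runs / row.size, 2)))
    return aspect_ratio, features

The aspect ratio serves as the cheap pre-check described above: if it falls outside the preset error range of the source frame's ratio, the contour comparison can be skipped entirely.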
In addition, the characteristic information of a key frame can also be obtained by the following steps, shown in Figure 4, comprising:
Step S221, performing background erasure on a key frame, leaving the human body model, as shown in Figure 5.
Step S222, using the geometric relationship of geodesic distances between the vertices of the human body model to identify the 5 feature points of the human body located at the four limbs and the crown of the head.
In the present embodiment, an arbitrary point on the human body model is taken, and the vertex v with the maximum geodesic distance to this point is found; vertex v is taken as one of the end feature points and added to the end feature point set V. The point in the human body model with the maximum sum of geodesic distances to the points in set V is then searched for as a new end feature point, until the geodesic distance from a new end feature point to an end feature point already in set V is smaller than a preset threshold, which is an empirical value. Here, the geodesic distance is the length of the shortest path spatially connecting two points (or two sets), and the feature points in the end feature point set V computed by the above method are the 5 identified feature points.
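Below is a sketch of the iterative farthest-point search just described. Approximating the geodesic distance by shortest-path distance over the model's vertex-adjacency graph, and the min_gap value standing in for the empirical threshold, are assumptions made for illustration.

import numpy as np
from scipy.sparse.csgraph import dijkstra

def end_feature_points(adjacency, start=0, min_gap=50.0):
    """adjacency: sparse vertex-adjacency matrix of the human body model,
    with edge lengths as weights. Returns the end feature point set V:
    up to 5 points located at the four limbs and the crown of the head."""
    d0 = dijkstra(adjacency, indices=start)
    V = [int(np.argmax(d0))]  # vertex v farthest from an arbitrary starting point
    while len(V) < 5:
        dists = dijkstra(adjacency, indices=V)  # geodesic distances from each point in V
        candidate = int(np.argmax(dists.sum(axis=0)))  # maximise sum of distances to V
        if dists[:, candidate].min() < min_gap:
            break  # too close to an existing end feature point: stop searching
        V.append(candidate)
    return V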
Because the human body model is symmetric, the distances from the head end point to the 2 upper-limb end points are equal, and likewise the distances to the 2 lower-limb end points are equal. The embodiments of the present invention can use this property to automatically extract the head end feature point Vhead from the 5 end feature points.
Step S223, generating 5 bone center lines from the 5 feature points.
In the present embodiment, starting from the head end feature point Vhead, geodesic iso-distance curves of N levels are first determined in turn, with a distance difference of d between adjacent levels. Once the level spacing d is obtained, the geodesic iso-distance curve function of all N levels over the whole human body model can be computed, starting from Vhead. As the number of levels increases, the first time the geodesic iso-distance curves of some level number 3 (each closed curve formed by the geodesic iso-distance curves is roughly elliptical), the two closed curves with the smaller perimeters can be determined to be the boundary lines between the arms and the trunk. When the geodesic iso-distance curves of some level number 4, the two closed curves with the larger perimeters can be determined to be the boundary lines between the legs and the trunk; the four limbs and the trunk can be distinguished accordingly.
After the extent of the limbs is determined, geodesic iso-distance curves are computed over each limb separately, starting from that limb's end feature point and incrementing by d per level. The cross-sections of these geodesic iso-distance curves on the limbs are then closer to perpendicular to the bone center line, and the cross-section profiles better match the cross-sections referred to in medicine; therefore, when the bone curvature is very small, the near-circularity of the cross-sections can be used to judge joint positions more accurately. Finally, connecting the centers of adjacent geodesic iso-distance curves generates the 5 bone center lines, after which the joint positions can be determined on the center lines.
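As an illustration of the layer-counting idea, the sketch below buckets body pixels by geodesic distance from Vhead into layers of width d and counts the closed curves (connected components) per layer. Working on a 2-D distance map and using scipy's component labelling are simplifying assumptions; the patent operates on the model's iso-distance curves directly.

import numpy as np
from scipy import ndimage

def limb_boundary_levels(geodesic_dist, d):
    """geodesic_dist: 2-D array of geodesic distances from Vhead over the
    body silhouette (NaN outside the body). Returns the first level whose
    iso-distance band splits into 3 curves (arm/trunk boundary) and the
    first with 4 curves (leg/trunk boundary)."""
    layers = np.floor(geodesic_dist / d)
    arm_level = leg_level = None
    for k in range(int(np.nanmax(layers)) + 1):
        band = (layers == k)
        _, n_curves = ndimage.label(band)  # count closed curves in this layer
        if arm_level is None and n_curves == 3:
            arm_level = k  # the two smaller-perimeter curves bound the arms
        if leg_level is None and n_curves == 4:
            leg_level = k  # the two larger-perimeter curves bound the legs
    return arm_level, leg_level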
Step S224, determining the positions of the joints from the 5 bone center lines, and taking the position of each joint as the characteristic information of this key frame.
In the present embodiment, after the bone center lines are determined, the joint positions can be determined by minimizing the angle over each section of the center line, taking the position of each joint as the characteristic information of this video frame. Suppose the center of the i-th geodesic iso-distance curve is C_i; the angle at C_i between segment C_{i-t}C_i and segment C_{i+t}C_i is the center angle of the geodesic iso-distance curve. The size of t is determined by the number of geodesic iso-distance curve levels into which the bone center line is divided; in general, the more levels, the larger t. The role of t is to reduce the influence of local data fluctuations on the angle computation.
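A minimal sketch of the angle-minimisation step: at each curve center C_i along one bone center line, compute the angle between segments C_{i-t}C_i and C_{i+t}C_i, and place the joint where this angle is smallest (a straight line gives roughly 180 degrees). The value of t here is illustrative.

import numpy as np

def joint_position(centers, t=3):
    """centers: (N, 2) array of geodesic iso-distance curve centers
    C_0..C_{N-1} along one bone center line. Returns the index of the
    center where the line bends most, i.e. the joint position."""
    centers = np.asarray(centers, dtype=float)
    angles = []
    for i in range(t, len(centers) - t):
        u = centers[i - t] - centers[i]   # segment C_{i-t}C_i
        v = centers[i + t] - centers[i]   # segment C_{i+t}C_i
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        angles.append(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return t + int(np.argmin(angles))     # joint at the minimum center angle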
In step S102, the characteristic information of the target key frames is compared with the characteristic information of the pre-generated source video key frames, to obtain the matching degree between the target video and the source video.
In the present embodiment, the characteristic information of the source video key frames is extracted by the same extraction method as the characteristic information of the target video key frames. The characteristic information of both sets of key frames is compared; when the difference between them exceeds a preset difference threshold, the learner's action is inconsistent with the action in the instructional film. In addition, a corresponding learning mark can be given to the learner according to the difference value.
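As a rough sketch of step S102, assuming both videos' key frames have been reduced to fixed-length feature vectors by the same extractor; the mean-absolute-difference measure and the 0-100 scoring scale are assumptions, not specified by the patent.

import numpy as np

def match_and_score(target_keys, source_keys, diff_threshold=0.2):
    """target_keys / source_keys: lists of (time, feature_vector) pairs
    produced by the same extraction method. Returns a learning mark and
    the number of key frames whose actions are judged inconsistent."""
    pairs = zip(target_keys, source_keys)  # compare corresponding key frames
    diffs = [float(np.mean(np.abs(np.asarray(tf) - np.asarray(sf))))
             for (_, tf), (_, sf) in pairs]
    inconsistent = sum(d > diff_threshold for d in diffs)  # actions that do not match
    score = 100.0 * max(0.0, 1.0 - float(np.mean(diffs)))  # mark from the difference value
    return score, inconsistent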
The present embodiment analyzes the teaching source video in advance; during learning, the application software only needs to analyze the learner's target video, avoiding analyzing two video streams at the same time and cutting CPU computation roughly in half. In addition, the matching between the target video and the source video does not compare every pixel in the video frames but only the characteristic information of the two frames, so matching is faster. Moreover, for the source video a frame interval i is set according to the speed of the human motion in it; for a yoga video, for example, the motion is relatively slow, so i can take a relatively large value, sampling one frame per second or even less often for feature extraction rather than extracting features from every frame, which, within a certain accuracy range, saves feature-extraction time many times over. Also, in the present embodiment not every pair of extracted video frames is compared: the frames in which the human figure changes the most are detected from the changes in the frames' characteristic information and taken as key frames, and only the key frames are compared, which, within a certain accuracy range, also saves feature-extraction time many times over.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be completed by hardware instructed by a program, and the corresponding program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
Embodiment Two
Fig. 8 shows the detailed structural block diagram of the video matching device provided by Embodiment Two of the present invention; for convenience of explanation, only the parts relevant to the embodiment are shown. The video matching device can be a software unit, a hardware unit, or a unit combining software and hardware built into a computer, television set or mobile terminal, and comprises: a characteristic acquisition unit 51 and a matching degree acquiring unit 52.
The characteristic acquisition unit 51 is configured to extract all key frames of the target video and save the characteristic information of each target key frame;
the matching degree acquiring unit 52 is configured to compare the characteristic information of each target key frame saved by the characteristic acquisition unit 51 with the characteristic information of the pre-generated source video key frames, to obtain the matching degree between the target video and the source video.
Specifically, the characteristic acquisition unit 51 comprises:
a video frame reading module, configured to read the current video frame and the next video frame of the target video, and take the current video frame as the first key frame;
a characteristic information acquisition module, configured to obtain the characteristic information of the current video frame and of the next video frame read by the video frame reading module;
a matching degree acquisition module, configured to compare the characteristic information of the two video frames obtained by the characteristic information acquisition module to obtain a matching degree; if the matching degree is lower than a preset matching threshold, take the next video frame just read as the second key frame; otherwise continue to read the next video frame.
In this embodiment, the characteristic acquisition unit 51 and the matching degree acquiring unit 52 analyze the teaching source video in advance, so that during learning the application software only needs to analyze the learner's target video, avoiding analyzing two video streams at the same time and cutting CPU computation roughly in half.
Specifically, as one implementation, the characteristic information acquisition module comprises:
a first background erasure submodule, configured to perform background erasure on a key frame, leaving the human body model;
a rectangular area acquisition submodule, configured to scan horizontally and vertically the human body model left after the first background erasure submodule performs background erasure on a key frame, to obtain the rectangular area where the human body is located;
a contour division submodule, configured to divide the rectangular area where the human body is located, obtained by the rectangular area acquisition submodule, into n equal parts from top to bottom to form contour lines;
a contour information acquisition submodule, configured to obtain the characteristic information of each contour line obtained by the contour division submodule, and take the characteristic information of all the contour lines as the characteristic information of this key frame.
In this implementation, the characteristic information acquisition module performs video frame matching according to the obtained characteristic information of each contour line of the key frames; during learning, the application software only needs to analyze the contour line characteristic information of the key frames of the learner's target video, avoiding analyzing two video streams at the same time and cutting CPU computation roughly in half.
Alternatively, the characteristic information acquisition module comprises:
a second background erasure submodule, configured to perform background erasure on a key frame, leaving the human body model;
a feature point acquisition submodule, configured to use the geometric relationship of geodesic distances between the vertices of the human body model left by the second background erasure submodule, to identify the 5 feature points of the human body located at the four limbs and the crown of the head;
a bone center line generation submodule, configured to generate 5 bone center lines from the 5 feature points identified by the feature point acquisition submodule;
a joint position acquisition submodule, configured to determine the positions of the joints from the 5 bone center lines generated by the bone center line generation submodule, and take the position of each joint as the characteristic information of this key frame.
In this implementation, the characteristic information acquisition module determines the positions of the joints of the human body from the 5 feature points identified from the key frame; during learning, the application software only needs to analyze the characteristic information in the key frames of the learner's target video, avoiding analyzing two video streams at the same time and cutting CPU computation roughly in half.
The present embodiment analyzes the teaching source video in advance; during learning, the application software only needs to analyze the learner's target video, avoiding analyzing two video streams at the same time and cutting CPU computation roughly in half. In addition, the matching between the target video and the source video does not compare every pixel in the video frames but only the characteristic information of the two frames, so matching is faster. Moreover, for the source video a frame interval i is set according to the speed of the human motion in it; for a yoga video, for example, the motion is relatively slow, so i can take a relatively large value, sampling one frame per second or even less often for feature extraction rather than extracting features from every frame, which, within a certain accuracy range, saves feature-extraction time many times over. Also, the present embodiment does not compare every pair of extracted video frames: the frames in which the human figure changes the most are detected from the changes in the frames' characteristic information and taken as key frames, and only the key frames are compared, which, within a certain accuracy range, also saves feature-extraction time many times over.
The video matching device provided by the embodiments of the present invention can be applied in the corresponding method Embodiment One above; for details, see the description of Embodiment One, which is not repeated here.
It should be noted that the units included in the above system embodiment are divided according to functional logic, but the division is not limited to the above, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of mutual distinction and do not limit the protection scope of the present invention.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (4)

1. A video matching method, characterized in that the method comprises:
extracting all key frames of a target video, and saving the characteristic information of each target key frame;
comparing the characteristic information of the target key frames with the characteristic information of pre-generated source video key frames, to obtain the matching degree between the target video and the source video;
wherein obtaining the characteristic information of a key frame comprises:
performing background erasure on a key frame, leaving the human body model;
scanning the human body model horizontally and vertically to obtain the rectangular area where the human body is located;
dividing the rectangular area into n equal parts from top to bottom to form contour lines;
obtaining the characteristic information of each contour line, and taking the characteristic information of all the contour lines as the characteristic information of this key frame;
or,
obtaining the characteristic information of a key frame comprises:
performing background erasure on a key frame, leaving the human body model;
using the geometric relationship of geodesic distances between the vertices of the human body model to identify the 5 feature points of the human body located at the four limbs and the crown of the head;
generating 5 bone center lines from the 5 feature points;
determining the positions of the joints from the 5 bone center lines, and taking the position of each joint as the characteristic information of this key frame.
2. The method of claim 1, characterized in that extracting all key frames of the target video specifically comprises:
reading the current video frame of the target video, and taking the current video frame as the first key frame;
obtaining the characteristic information of the first key frame;
reading the next video frame;
obtaining the characteristic information of the next video frame;
comparing the characteristic information of the two video frames to obtain a matching degree;
if the matching degree is lower than a preset matching threshold, taking the next video frame just read as the second key frame; otherwise, continuing to read the next video frame, until all video frames in the target video have been read.
3. A video matching device, characterized in that the device comprises:
a characteristic acquisition unit, configured to extract all key frames of the target video and save the characteristic information of each target key frame;
a matching degree acquiring unit, configured to compare the characteristic information of the target key frames with the characteristic information of pre-generated source video key frames, to obtain the matching degree between the target video and the source video;
wherein a characteristic information acquisition module comprises:
a first background erasure submodule, configured to perform background erasure on a key frame, leaving the human body model;
a rectangular area acquisition submodule, configured to scan the human body model horizontally and vertically to obtain the rectangular area where the human body is located;
a contour division submodule, configured to divide the rectangular area into n equal parts from top to bottom to form contour lines;
a contour information acquisition submodule, configured to obtain the characteristic information of each contour line, and take the characteristic information of all the contour lines as the characteristic information of this key frame;
or,
the characteristic information acquisition module comprises:
a second background erasure submodule, configured to perform background erasure on a key frame, leaving the human body model;
a feature point acquisition submodule, configured to use the geometric relationship of geodesic distances between the vertices of the human body model to identify the 5 feature points of the human body located at the four limbs and the crown of the head;
a bone center line generation submodule, configured to generate 5 bone center lines from the 5 feature points;
a joint position acquisition submodule, configured to determine the positions of the joints from the 5 bone center lines, and take the position of each joint as the characteristic information of this key frame.
4. The device of claim 3, characterized in that the characteristic acquisition unit comprises:
a video frame reading module, configured to read the current video frame and the next video frame of the target video, and take the current video frame as the first key frame;
a characteristic information acquisition module, configured to obtain the characteristic information of the first key frame and of the next video frame;
a matching degree acquisition module, configured to compare the characteristic information of the two video frames to obtain a matching degree; if the matching degree is lower than a preset matching threshold, take the next video frame just read as the second key frame; otherwise continue to read the next video frame, until all video frames in the target video have been read.
CN201310268664.1A 2013-06-28 2013-06-28 Video matching method and device Expired - Fee Related CN103327356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310268664.1A CN103327356B (en) 2013-06-28 2013-06-28 Video matching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310268664.1A CN103327356B (en) 2013-06-28 2013-06-28 Video matching method and device

Publications (2)

Publication Number Publication Date
CN103327356A CN103327356A (en) 2013-09-25
CN103327356B true CN103327356B (en) 2016-02-24

Family

ID=49195846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310268664.1A Expired - Fee Related CN103327356B (en) 2013-06-28 2013-06-28 Video matching method and device

Country Status (1)

Country Link
CN (1) CN103327356B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104038848A (en) * 2014-05-30 2014-09-10 无锡天脉聚源传媒科技有限公司 Video processing method and video processing device
CN105809653B (en) * 2014-12-29 2019-01-01 深圳Tcl数字技术有限公司 Image processing method and device
CN109801193B (en) * 2017-11-17 2020-09-15 深圳市鹰硕教育服务股份有限公司 Follow-up teaching system with voice evaluation function
CN113678137B (en) * 2019-08-18 2024-03-12 聚好看科技股份有限公司 Display apparatus
CN113537162B (en) * 2021-09-15 2022-01-28 北京拓课网络科技有限公司 Video processing method and device and electronic equipment
CN115979350A (en) * 2023-03-20 2023-04-18 北京航天华腾科技有限公司 Data acquisition system of ocean monitoring equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101374234A (en) * 2008-09-25 2009-02-25 清华大学 Method and apparatus for monitoring video copy base on content
CN101394522A (en) * 2007-09-19 2009-03-25 中国科学院计算技术研究所 Detection method and system for video copy

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100986223B1 (en) * 2008-08-07 2010-10-08 한국전자통신연구원 Apparatus and method providing retrieval of illegal movies
US8731292B2 (en) * 2011-01-07 2014-05-20 Alcatel Lucent Method and apparatus for comparing videos

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101394522A (en) * 2007-09-19 2009-03-25 中国科学院计算技术研究所 Detection method and system for video copy
CN101374234A (en) * 2008-09-25 2009-02-25 清华大学 Method and apparatus for monitoring video copy base on content

Also Published As

Publication number Publication date
CN103327356A (en) 2013-09-25

Similar Documents

Publication Publication Date Title
CN103327356B (en) Video matching method and device
Li et al. Flow guided recurrent neural encoder for video salient object detection
Tang et al. Weakly supervised salient object detection with spatiotemporal cascade neural networks
Ji et al. Skeleton embedded motion body partition for human action recognition using depth sequences
Kliper-Gross et al. Motion interchange patterns for action recognition in unconstrained videos
Tang et al. Facial landmark detection by semi-supervised deep learning
CN106874826A (en) Face key point-tracking method and device
CN106469298A (en) Age recognition methodss based on facial image and device
CN101751668B (en) Method and device for detecting crowd density
CN102385695A (en) Human body three-dimensional posture identifying method and device
CN103729614A (en) People recognition method and device based on video images
CN104331151A (en) Optical flow-based gesture motion direction recognition method
Xu et al. Video salient object detection via robust seeds extraction and multi-graphs manifold propagation
CN106203277A (en) Fixed lens real-time monitor video feature extracting method based on SIFT feature cluster
CN109034099A (en) A kind of expression recognition method and device
Liu et al. Feedback-driven loss function for small object detection
CN101470809A (en) Moving object detection method based on expansion mixed gauss model
Huynh-The et al. Hierarchical topic modeling with pose-transition feature for action recognition using 3D skeleton data
Zhou et al. Classroom learning status assessment based on deep learning
Khurana et al. Deep learning approaches for human activity recognition in video surveillance-a survey
Li et al. Person re-identification with part prediction alignment
Zhao et al. Generalized symmetric pair model for action classification in still images
CN103714556A (en) Moving target tracking method based on pyramid appearance model
CN112257665A (en) Image content recognition method, image recognition model training method, and medium
CN103020614A (en) Human movement identification method based on spatio-temporal interest point detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160224