CN102693739A - Method and system for video clip generation - Google Patents
- Publication number
- CN102693739A (publication) · CN2011100722771A / CN201110072277A (application)
- Authority
- CN
- China
- Prior art keywords
- interception
- video
- instruction
- intercepting
- capture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method and system for video clip generation. The method comprises the following steps: playing a video file; capturing facial expression data of a user in real time and comparing the captured facial expression data with typical expression element data in a facial feature database; and intercepting the video according to the comparison result to obtain a video clip. Because the facial expression data of the user are collected in real time while the video file is played, and the video is intercepted whenever the captured data match typical expression element data, the video clip corresponding to the typical expression element data can be intercepted accurately without affecting the playing effect.
Description
[ technical field ]
The present invention relates to video processing technologies, and in particular, to a method and a system for generating video clips.
[ background of the invention ]
In the conventional video clip generation method, a user specifies a time slice, such as a start time point and an end time point, and video editing software then intercepts the original video according to the specified time slice to generate the clip.
However, the conventional video clip generation method requires the user to specify the time slice manually. When the user wants to intercept a favorite clip, it is difficult to locate the corresponding position, since a video is often long; intercepting several favorite clips from one video means searching through the whole video repeatedly, which is cumbersome. Alternatively, the user can stop watching and intercept a favorite clip the moment it appears; this does locate the position, but it interrupts playback and is inconvenient for viewing.
[ summary of the invention ]
Therefore, there is a need for a method for generating video clips, which can accurately capture the required video clips without affecting the playing effect.
A video clip generation method, comprising the steps of:
playing the video file;
capturing facial expression data of a user in real time, and comparing the captured facial expression data with typical expression element data in a facial feature database;
and intercepting the video according to the comparison result to obtain a video clip.
Preferably, the step of intercepting the video according to the comparison result to obtain a video clip is as follows: judging whether the facial expression data are typical expression element data; if so, generating an interception starting instruction, acquiring an interception type, and carrying out the interception operation on the video according to the interception type and the interception starting instruction; if not, further judging whether an interception operation is in progress; if an interception operation is in progress, generating an interception ending instruction, stopping the interception operation according to the interception ending instruction, and intercepting the video according to the interception starting instruction and the interception ending instruction to obtain a video clip; otherwise, ending the process.
Preferably, the intercepting type is a screenshot, and the intercepting the video according to the intercepting type, the intercepting start instruction and the intercepting end instruction specifically includes: and grabbing a picture and storing the picture under the set path.
Preferably, the capturing type is continuous shooting, and capturing the video according to the capturing type, the capturing start instruction, and the capturing end instruction specifically includes: continuously grabbing a series of pictures at predetermined time intervals, starting from a point a preset time before the time point of receiving the interception starting instruction.
Preferably, the intercepting type is animation, and the intercepting the video according to the intercepting type, the intercepting start instruction and the intercepting end instruction specifically comprises: and starting to intercept the video from the time point of receiving the interception starting instruction to the forward preset time, stopping intercepting until receiving an interception ending instruction, generating an animation, and storing the animation in a set path.
Preferably, the intercepting type is intercepting, and the intercepting the video according to the intercepting type, the intercepting start instruction, and the intercepting end instruction specifically includes: and starting to intercept the video from the time point of receiving the interception starting instruction to the forward preset time, stopping intercepting until receiving an interception ending instruction, obtaining a video clip, and storing the video clip under a set path.
In addition, it is necessary to provide a video clip generating system, which can accurately intercept the required video clip without affecting the playing effect.
A video clip generation system, comprising:
the video playing module is used for playing the video file;
a facial feature database for storing typical expression element data;
the facial expression capturing module is used for capturing facial expression data of the user in real time;
a comparison module for comparing the captured facial expression data with typical expression element data in the facial feature database;
and the intercepting module is used for intercepting the video according to the comparison result to obtain a video clip.
Preferably, the system further comprises an instruction generation module. The instruction generation module is configured to generate an interception start instruction and send it to the interception module when the comparison module determines that the facial expression data are typical expression element data, and to generate an interception end instruction and send it to the interception module when the comparison module determines that the facial expression data are not typical expression element data;
the intercepting module is further used for acquiring an intercepting type, and intercepting the video according to the intercepting type, the intercepting starting instruction and the intercepting ending instruction to obtain a video segment.
Preferably, the interception type is a screenshot, and the interception module is further configured to grab a picture and store it under the set path.
Preferably, the interception type is continuous shooting, and the interception module is further configured to continuously grab a series of pictures at predetermined time intervals, starting from a point a predetermined time before the time point when the interception start instruction is received.
Preferably, the interception type is animation, and the interception module is further configured to start intercepting the video from a time point when the interception start instruction is received to a predetermined time, stop intercepting until the interception end instruction is received, generate animation, and store the animation in a set path.
Preferably, the interception type is interception, and the interception module is further configured to start intercepting the video from a time point when the interception start instruction is received to a predetermined time, stop intercepting until the interception end instruction is received, obtain a video segment, and store the video segment in a set path.
According to the above video clip generation method and system, facial expression data of the user are collected in real time while the video file is played, the collected data are compared with the typical expression element data in the facial feature database, and the video is intercepted according to the comparison result to obtain the video clip corresponding to the typical expression element data. The required video clip can therefore be intercepted accurately without affecting the playing effect.
[ description of the drawings ]
FIG. 1 is a flowchart illustrating a method for generating video clips according to an embodiment;
FIG. 2 is a flowchart of a video segment generation method according to a second embodiment;
FIG. 3 is a flowchart of a video clip generation method according to a third embodiment;
FIG. 4 is a flowchart of a video segment generation method according to a fourth embodiment;
FIG. 5 is a flowchart of a video segment generation method according to a fifth embodiment;
FIG. 6 is a flowchart of a video segment generation method according to a sixth embodiment;
FIG. 7 is a diagram showing the structure of a video clip generation system according to an embodiment;
FIG. 8 is a schematic structural diagram of a video clip generation system in another embodiment.
[ detailed description of the embodiments ]
The following detailed description of the embodiments refers to the accompanying drawings.
As shown in fig. 1, in a first embodiment, a method for generating a video segment includes the following steps:
step S100, playing the video file.
The player is used to play video files, and the format of the video files can be MOV, music CD, MID, MP3, MP4, RAM, RA, MPG, VCD, DAT, SVCD or DVD, etc., but is not limited to these formats.
After step S100, it is determined whether to turn on the facial expression data automatic capturing function; if so, the function is turned on, and otherwise the process ends. In this way, the facial expression data automatic capturing function can be turned on according to the needs of the user. After the function is started, the facial expression data of the user can be captured in real time.
In addition, when the facial expression data automatic capturing function is started, the preset facial expression of the user to be captured, such as laughing, crying, frightening and the like, can be acquired.
Meanwhile, the user can select the preset time, which can be a first preset time, a second preset time or a third preset time, so that the corresponding preset time can be calculated forward for capture according to the selection of the user. The first predetermined time, the second predetermined time, and the third predetermined time all refer to preset times calculated from a time point of intercepting the start instruction, and the preset times may be set by a user or a system, such as 20 seconds, 30 seconds, and the like. The first predetermined time, the second predetermined time, and the third predetermined time may be the same or different.
Step S110, capturing facial expression data of the user in real time, and comparing the captured facial expression data with typical expression element data in a facial feature database.
Facial expression data of the user are captured in real time with a video input device (such as a camera). Specifically, the head of a person is located first; head features such as the eyes and mouth are then identified and compared with the facial features in the facial feature database to confirm that the region is a face, which completes the capture of facial expression data. The captured facial expression data are then compared with the typical expression element data in the facial feature database. Typical expression element data may be, for example, data of a smiling face or of a crying face. For smiling-face data, the degree to which the mouth of the captured expression curves upward and the degree to which the eyes curve downward are evaluated to judge whether the face is smiling.
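The expression comparison described above can be sketched as a pure function: measured curvature features are matched against a template of typical expression element data within a tolerance. The feature names, template values, and tolerance below are illustrative assumptions, not values given in the patent.

```python
# Hypothetical sketch of the expression-comparison step; the feature names,
# template values, and tolerance are assumptions for illustration only.
SMILE_TEMPLATE = {"mouth_up_curve": 0.6, "eye_down_curve": 0.4}

def matches_expression(features, template, tolerance=0.2):
    """Return True if every measured feature lies within `tolerance`
    of the corresponding template value."""
    return all(
        abs(features.get(name, 0.0) - value) <= tolerance
        for name, value in template.items()
    )

captured = {"mouth_up_curve": 0.55, "eye_down_curve": 0.45}
print(matches_expression(captured, SMILE_TEMPLATE))  # → True
```

A real implementation would extract such features with a face-detection library rather than receive them ready-made; the matching logic, however, stays this simple.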
And step S120, intercepting the video according to the comparison result to obtain a video clip.
If the comparison result is that the captured facial expression data are typical expression element data, the video is intercepted to obtain a video clip, and the video clip is stored under the set path; if the comparison result is that the captured facial expression data are not typical expression element data, video interception is not performed, or an ongoing interception is stopped.
As shown in fig. 2, in the second embodiment, a method for generating a video segment includes the following steps:
and step S200, playing the video file.
Step S210, capturing facial expression data of the user in real time, and comparing the captured facial expression data with typical expression element data in the facial feature database. Facial expression data of a user is captured by a video input tool such as a camera.
In step S220, it is determined whether the facial expression data is typical expression element data, if so, step S230 is executed, and if not, step S240 is executed.
And step S230, generating an interception starting instruction, acquiring an interception type, and intercepting the video according to the interception type and the interception starting instruction.
And when the facial expression data are typical expression element data, generating an interception starting instruction, acquiring an interception type, and carrying out interception operation on the video file. The interception types include, but are not limited to, a screenshot, a continuous shot, an animation, a video, and the like. The interception type is set by the user in advance.
Step S240, determining whether the intercepting operation is being performed, if so, performing step S250, otherwise, ending.
And step S250, generating an interception ending instruction, and stopping the video interception operation according to the interception ending instruction. And when the facial expression data are not the typical expression element data, generating an interception ending instruction, and stopping the interception operation according to the interception ending instruction.
And step S260, intercepting the video according to the interception type, the interception start instruction and the interception end instruction to obtain a video segment, and storing the video segment in a set path.
A section of video is intercepted, taking the time point of receiving the interception start instruction as the starting point and the time point of receiving the interception end instruction as the end point, to obtain a video segment, which is stored under the set path. The starting point of interception may also be pushed forward by a certain time from the time point of receiving the interception start instruction. This look-back time is set by the user or the system, because the video within it may be closely related to the video being intercepted.
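The interception boundaries described above, a clip running from a possibly pushed-forward starting point to the end-instruction time, can be sketched as a small helper. The function name and the clamping of the start to time zero are assumptions for illustration.

```python
def clip_bounds(start_instruction_t, end_instruction_t, lookback=0.0):
    """Compute a clip's (start, end) in seconds from the instruction
    timestamps, pushing the start back by `lookback` seconds and
    clamping it so it never falls before the beginning of the video."""
    start = max(0.0, start_instruction_t - lookback)
    if end_instruction_t < start:
        raise ValueError("end instruction precedes clip start")
    return start, end_instruction_t

print(clip_bounds(95.0, 130.0, lookback=20.0))  # → (75.0, 130.0)
```

With a look-back of 20 seconds, a smile detected at 95 s yields a clip starting at 75 s, so the lead-up to the triggering moment is preserved.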
In a third embodiment, as shown in fig. 3, when the type of capture is a screenshot, a method for generating a video clip includes the following steps:
step S300, playing the video file.
Step S310, capturing facial expression data of the user in real time, and comparing the captured facial expression data with typical expression element data in a facial feature database. Facial expression data of a user is captured by a video input tool such as a camera.
In step S320, it is determined whether the facial expression data is typical expression element data, if yes, step S330 is performed, otherwise, step S340 is performed.
And step S330, generating an interception starting instruction, acquiring the interception type as a screenshot, capturing a picture, and storing the picture in a set path.
Step S340, determining whether a screenshot operation is being performed; if so, executing step S350, otherwise ending.
In step S350, an interception end instruction is generated.
In a fourth embodiment, as shown in fig. 4, when the capture type is continuous shooting, a method for generating a video segment includes the following steps:
step S400, playing the video file.
Step S410, capturing facial expression data of the user in real time, and comparing the captured facial expression data with typical expression element data in the facial feature database. Facial expression data of a user is captured by a video input tool such as a camera.
In step S420, it is determined whether the facial expression data is typical expression element data, if so, step S430 is performed, otherwise, step S440 is performed.
And step S430, generating an interception starting instruction, and acquiring the interception type as continuous shooting.
Step S440, determining whether the continuous shooting operation is being performed, if yes, executing step S450, otherwise, ending.
Step S450, generating an interception ending instruction.
Step S460, starting from a point a first predetermined time before the time point when the interception start instruction is received, continuously capturing a series of pictures at predetermined time intervals and storing them under the set path.
When the interception type is continuous shooting, a series of pictures is captured continuously at a predetermined time interval, starting a first predetermined time before the time point of receiving the interception start instruction; alternatively, a specified number of pictures is captured between that starting point and the time point of receiving the interception end instruction, and the captured pictures form one continuous-shooting set. Because the facial expression data match the typical expression element data, the video content at the time point of receiving the interception start instruction is usually closely connected with the content of the first predetermined time before it; that earlier content is therefore captured together and stored under the set path. In this embodiment, the preset time selected by the user is the first predetermined time, and the interception operation starts from the point obtained by pushing the time point of the interception start instruction forward by the first predetermined time.
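The continuous-shooting timing, a series of grabs at a fixed interval beginning a first predetermined time before the trigger, can be sketched as the list of timestamps at which frames would be grabbed. The function name and clamping behavior are illustrative assumptions.

```python
def burst_timestamps(start_instruction_t, lookback, interval, end_t):
    """Timestamps (seconds) at which frames would be grabbed for a
    continuous-shooting set: every `interval` seconds from
    (start_instruction_t - lookback), clamped to 0, up to end_t."""
    t = max(0.0, start_instruction_t - lookback)
    stamps = []
    while t <= end_t:
        stamps.append(t)
        t += interval
    return stamps

# Trigger at 60 s, 20 s look-back, one grab every 5 s until 75 s:
print(burst_timestamps(60.0, 20.0, 5.0, 75.0))
# → [40.0, 45.0, 50.0, 55.0, 60.0, 65.0, 70.0, 75.0]
```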
In the fifth embodiment, as shown in fig. 5, when the cut type is animation, a video clip generating method includes the following steps:
step S500, playing the video file.
Step S510, capturing facial expression data of the user in real time, and comparing the captured facial expression data with typical expression element data in the facial feature database. Facial expression data of a user is captured by a video input tool such as a camera.
In step S520, it is determined whether the facial expression data is typical expression element data, if yes, step S530 is performed, otherwise, step S540 is performed.
Step S530, generating an interception starting instruction, and acquiring that the interception type is animation.
And step S540, judging whether animation operation is performed, if so, executing step S550, and if not, ending.
In step S550, an interception end instruction is generated.
And step S560, starting to intercept the video from the time point when the interception start instruction is received to the second preset time, stopping interception until the interception end instruction is received, generating animation, and storing the animation in the set path.
From the point a second predetermined time before the time point of receiving the interception start instruction, to the time point of receiving the interception end instruction, the video in that period is intercepted and a GIF animation (or an animation in another format) is generated. In this embodiment, the preset time selected by the user is the second predetermined time, and the interception operation starts from the point obtained by pushing the time point of the interception start instruction forward by the second predetermined time.
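Generating an animation from such a time span amounts to selecting frames between the two boundary times. The sketch below only maps the time range to frame indices at a given frame rate; the actual decoding and GIF encoding (e.g., with OpenCV or imageio) is omitted, and the sampling `step` is an assumed knob for keeping the animation light.

```python
def frame_indices(start_t, end_t, fps, step=1):
    """Frame indices covering [start_t, end_t] seconds at `fps` frames
    per second, keeping every `step`-th frame."""
    first = int(start_t * fps)
    last = int(end_t * fps)
    return list(range(first, last + 1, step))

# A 10-second span at 25 fps, keeping every 25th frame (about 1 fps):
print(frame_indices(100.0, 110.0, fps=25, step=25))
```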
In the sixth embodiment, as shown in fig. 6, when the type of interception is interception, a method for generating a video segment includes the following steps:
step S600, playing the video file.
Step S610, capturing facial expression data of the user in real time, and comparing the captured facial expression data with typical expression element data in the facial feature database. Facial expression data of a user is captured by a video input tool such as a camera.
In step S620, it is determined whether the facial expression data is typical expression element data, if so, step S630 is executed, otherwise, step S640 is executed.
Step S630, an interception start instruction is generated, and the type of the interception is acquired as interception.
Step S640, determining whether the intercepting operation is being performed, if yes, performing step S650, otherwise, ending.
In step S650, an interception end instruction is generated.
And step S660, starting to intercept the video from the time point when the interception starting instruction is received and pushing forward for a third preset time until the interception ending instruction is received, stopping intercepting to obtain a video clip, and storing the video clip under a set path.
From the point a third predetermined time before the time point of receiving the interception start instruction, until the interception end instruction is received, the video in that period is intercepted and stored under the set path. In this embodiment, the preset time selected by the user is the third predetermined time, and the interception operation starts from the point obtained by pushing the time point of the interception start instruction forward by the third predetermined time.
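In practice the final video-segment interception is often delegated to an external tool such as ffmpeg, which the patent does not specify. The sketch below only builds the command line, using ffmpeg's standard `-ss` (seek), `-t` (duration), and `-c copy` (stream copy, no re-encoding) options, and does not execute it.

```python
def ffmpeg_clip_cmd(src, dst, start, end):
    """Build an ffmpeg command that copies the [start, end] span of
    `src` into `dst` without re-encoding. Times are in seconds."""
    if end <= start:
        raise ValueError("end must be after start")
    return [
        "ffmpeg",
        "-ss", f"{start:.3f}",       # seek to the clip start
        "-i", src,
        "-t", f"{end - start:.3f}",  # clip duration
        "-c", "copy",                # stream copy: fast, lossless
        dst,
    ]

print(" ".join(ffmpeg_clip_cmd("movie.mp4", "clip.mp4", 75.0, 130.0)))
```

Stream copy cuts on keyframe boundaries, so the actual clip may start slightly before the requested time; re-encoding would be needed for frame-exact cuts.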
In one embodiment, as shown in FIG. 7, a video clip generation system includes a video playback module 700, a facial feature database 710, a facial expression capture module 720, a comparison module 730, and an intercept module 740. Wherein,
the video playing module 700 is used for playing video files. The format of the video file may be MOV, music CD, MID, MP3, MP4, RAM, RA, MPG, VCD, DAT, SVCD, or DVD, etc., but is not limited to these formats.
The facial feature database 710 is used to store typical emoticon data. The facial feature database 710 includes typical emoticon data (e.g., data of a smiling face or a crying face), head feature data (e.g., features such as eyes and mouth), facial feature data, and the like.
The facial expression capture module 720 is used to capture facial expression data of the user in real time. The facial expression capture module 720 uses a video input device, such as a camera, to capture the facial expression data. It locates the head of a person, identifies head features such as the eyes and mouth, and compares these features with the facial features in the facial feature database 710 to confirm that the region is a face, thereby completing the capture of facial expression data.
The comparison module 730 is configured to compare the captured facial expression data with the typical expression element data in the facial feature database 710 and determine whether the facial expression data are typical expression element data. Taking a smiling face as an example, the comparison module 730 compares the degree to which the mouth curves upward and the eyes curve downward in the captured facial expression data with the smiling-face data stored in the facial feature database 710, thereby determining whether the user is smiling.
The capture module 740 is configured to capture the video according to the comparison result to obtain a video segment. If the comparison result indicates that the captured facial expression data is typical expression element data, the capture module 740 captures the video to obtain video segments; as a result of the comparison, if the captured facial expression data is not typical expression element data, the capture module 740 does not capture the video or stops capturing the video.
As shown in fig. 8, in another embodiment, the video clip generation system further includes an instruction generation module 750, a determination module 760 and an intercept switch 770, in addition to the video playing module 700, the facial feature database 710, the facial expression capture module 720, the comparison module 730 and the intercept module 740.
The determination module 760 determines whether to turn on the facial expression data automatic capturing function; if so, the intercept switch 770 turns the function on and the facial expression capture module 720 captures the facial expression data of the user in real time, otherwise the process ends.
In addition, when the facial expression data automatic capturing function is turned on, the facial expression capture module 720 may obtain the preset facial expressions of the user to be captured, such as "laughing", "crying", "frightened", and the like.
The instruction generation module 750 is configured to generate an interception start instruction and send it to the interception module 740 when the comparison module 730 determines that the facial expression data are typical expression element data, and to generate an interception end instruction and send it to the interception module 740 when the comparison module 730 determines that the facial expression data are not typical expression element data.
The capture module 740 is further configured to obtain a capture type, and capture the video according to the capture type, the capture start instruction, and the capture end instruction to obtain a video segment. The interception types include screenshots, continuous shots, animations, videos and the like, but are not limited to the screenshots, and the interception types are set by the user in advance.
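The cooperation between the comparison result and the start/end instructions can be sketched as a minimal state machine: a start instruction fires on the first matching expression, and an end instruction fires when the expression stops matching during a capture. The class and method names are illustrative assumptions, not part of the patent.

```python
class InterceptionController:
    """Minimal state machine for the instruction-generation logic of
    the instruction generation module (a hypothetical sketch)."""

    def __init__(self):
        self.capturing = False

    def on_frame(self, is_typical_expression):
        """Process one comparison result; return 'start', 'end', or None."""
        if is_typical_expression and not self.capturing:
            self.capturing = True
            return "start"       # expression appeared: begin interception
        if not is_typical_expression and self.capturing:
            self.capturing = False
            return "end"         # expression gone mid-capture: stop
        return None              # no state change

ctrl = InterceptionController()
events = [ctrl.on_frame(x) for x in [False, True, True, False, False]]
print(events)  # → [None, 'start', None, 'end', None]
```

Note that sustained matches produce no extra instructions, mirroring the patent's "further judging whether the interception operation is in progress" step.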
In a preferred embodiment, when the type of capture is screenshot, the capture module 740 is configured to capture a picture and store the picture in the set path.
In a preferred embodiment, when the capture type is continuous shooting, the capture module 740 is configured to continuously grab a series of pictures at predetermined time intervals, starting a first predetermined time before the time point when the capture start instruction is received; alternatively, it captures a specified number of pictures between that starting point and the time point when the capture end instruction is received, and forms the captured pictures into one continuous-shooting set. Because the facial expression data match the typical expression element data, the video content at the time point of receiving the interception start instruction is usually closely connected with the content of the first predetermined time before it; that earlier content is therefore captured together and stored under the set path.
In a preferred embodiment, when the capture type is animation, the capture module 740 is further configured to start capturing the video from the time point when the capture start instruction is received, forward by a second predetermined time, until the capture end instruction is received, stop capturing, generate animation, and store the animation in the set path. A GIF animation or other format animation is generated.
In a preferred embodiment, when the interception type is interception, the interception module 740 is configured to start to intercept the video from a time point when the interception start instruction is received to a third predetermined time, stop to intercept the video until the interception end instruction is received, obtain a video segment, and store the video segment in a set path.
The first predetermined time, the second predetermined time, and the third predetermined time all refer to preset times calculated from a time point of intercepting the start instruction, the preset times may be set by a user or a system, such as 20 seconds, 30 seconds, and the like, and the lengths of the first predetermined time, the second predetermined time, and the third predetermined time may be the same or different. When the facial expression data automatic capturing function is started, the first preset time, the second preset time or the third preset time selected by the user can be obtained, and the corresponding preset time is calculated forward according to the selection of the user for capturing.
According to the video clip generation method and system, the video file is played, meanwhile, facial expression data of the user are collected in real time, the collected facial expression data are compared with typical expression element data in the facial feature database, the video is intercepted according to the comparison result, the video clip is obtained, and the video clip corresponding to the typical expression element data is obtained, so that the required video clip can be intercepted accurately, and the playing effect is not influenced.
In addition, according to the interception starting instruction and the interception ending instruction, the required video clip is intercepted more accurately without influencing the playing effect; different interception types are adopted, and different requirements of users are met.
The above-mentioned embodiments merely express several implementations of the present invention, and although their description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (12)
1. A video clip generation method, comprising the steps of:
playing the video file;
capturing facial expression data of a user in real time, and comparing the captured facial expression data with typical expression element data in a facial feature database;
and intercepting the video according to the comparison result to obtain a video clip.
2. The method of claim 1, wherein the step of intercepting the video according to the comparison result to obtain the video clip comprises: judging whether the facial expression data are typical expression element data; if so, generating an interception starting instruction, acquiring an interception type, and carrying out the interception operation on the video according to the interception type and the interception starting instruction; if not, further judging whether an interception operation is in progress; if an interception operation is in progress, generating an interception ending instruction, stopping the interception operation according to the interception ending instruction, and intercepting the video according to the interception starting instruction and the interception ending instruction to obtain a video segment; otherwise, ending the process.
3. The video clip generation method according to claim 2, wherein the interception type is screenshot, and intercepting the video according to the interception type, the interception start instruction, and the interception end instruction specifically comprises: capturing a picture and saving the picture under a set path.
4. The video clip generation method according to claim 2, wherein the interception type is continuous shooting, and intercepting the video according to the interception type, the interception start instruction, and the interception end instruction specifically comprises: continuously capturing a series of pictures at preset time intervals, starting from a preset time before the time point at which the interception start instruction is received.
5. The video clip generation method according to claim 2, wherein the interception type is animation, and intercepting the video according to the interception type, the interception start instruction, and the interception end instruction specifically comprises: starting to intercept the video from a preset time before the time point at which the interception start instruction is received, stopping interception when the interception end instruction is received, generating an animation, and saving the animation under a set path.
6. The video clip generation method according to claim 2, wherein the interception type is video clipping, and intercepting the video according to the interception type, the interception start instruction, and the interception end instruction specifically comprises: starting to intercept the video from a preset time before the time point at which the interception start instruction is received, stopping interception when the interception end instruction is received, obtaining a video clip, and saving the video clip under a set path.
7. A video clip generation system, comprising:
the video playing module is used for playing the video file;
a facial feature database for storing typical expression element data;
the facial expression capturing module is used for capturing facial expression data of the user in real time;
a comparison module for comparing the captured facial expression data with typical expression element data in the facial feature database;
and the intercepting module is used for intercepting the video according to the comparison result to obtain a video clip.
8. The video clip generation system according to claim 7, further comprising an instruction generation module, configured to generate an interception start instruction and send it to the interception module when the comparison module determines that the facial expression data matches typical expression element data, and further configured to generate an interception end instruction and send it to the interception module when the comparison module determines that the facial expression data does not match typical expression element data;
the interception module being further configured to acquire an interception type and intercept the video according to the interception type, the interception start instruction, and the interception end instruction to obtain a video clip.
9. The video clip generation system according to claim 8, wherein the interception type is screenshot, and the interception module is further configured to capture a picture and save the picture under a set path.
10. The video clip generation system according to claim 8, wherein the interception type is continuous shooting, and the interception module is further configured to continuously capture a series of pictures at preset time intervals, starting from a preset time before the time point at which the interception start instruction is received.
11. The video clip generation system according to claim 8, wherein the interception type is animation, and the interception module is further configured to start intercepting the video from a preset time before the time point at which the interception start instruction is received, stop interception when the interception end instruction is received, generate an animation, and save the animation under a set path.
12. The video clip generation system according to claim 8, wherein the interception type is video clipping, and the interception module is further configured to start intercepting the video from the time point at which the interception start instruction is received, stop interception when the interception end instruction is received, obtain a video clip, and save the video clip under a set path.
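The four interception types in claims 3–6 and 9–12 differ only in what the interception module does at the start instruction, during playback, and at the end instruction. A minimal sketch of that dispatch follows; the `CaptureSession` class, its method names, and the every-second-frame burst interval are hypothetical illustrations, not the patent's implementation — a real player would pull frames from its decode pipeline and write image, GIF, or video files under the set path.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureSession:
    # One of: "screenshot", "burst" (continuous shooting), "animation", "clip"
    capture_type: str
    frames: list = field(default_factory=list)

    def on_start(self, frame):
        """Interception start instruction: screenshot returns one frame
        immediately; the other types begin buffering frames."""
        if self.capture_type == "screenshot":
            return [frame]            # single picture, saved at once
        self.frames = [frame]
        return None

    def on_frame(self, frame):
        """Frames arriving while an interception operation is in progress."""
        if self.capture_type != "screenshot":
            self.frames.append(frame)

    def on_end(self):
        """Interception end instruction: burst keeps frames at a preset
        interval (every 2nd frame here, as a placeholder); animation and
        clip keep the whole buffered range."""
        if self.capture_type == "burst":
            return self.frames[::2]
        return self.frames
```

For example, a `"burst"` session started at frame 0 and ended after frames 1–3 keeps `[0, 2]`, while a `"clip"` session over the same range keeps every frame.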
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011100722771A CN102693739A (en) | 2011-03-24 | 2011-03-24 | Method and system for video clip generation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102693739A true CN102693739A (en) | 2012-09-26 |
Family
ID=46859119
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011100722771A Pending CN102693739A (en) | 2011-03-24 | 2011-03-24 | Method and system for video clip generation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102693739A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0778547A1 (en) * | 1995-05-11 | 1997-06-11 | Sega Enterprises, Ltd. | Image processing apparatus and image processing method |
CN1942970A (en) * | 2004-04-15 | 2007-04-04 | 皇家飞利浦电子股份有限公司 | Method of generating a content item having a specific emotional influence on a user |
CN101420579A (en) * | 2007-10-22 | 2009-04-29 | 皇家飞利浦电子股份有限公司 | Method, apparatus and system for detecting exciting part |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102905102A (en) * | 2012-09-27 | 2013-01-30 | 安科智慧城市技术(中国)有限公司 | Screen capturing video player and screen capturing method |
CN103873955A (en) * | 2012-12-18 | 2014-06-18 | 联想(北京)有限公司 | Video acquisition method and device |
CN104349099A (en) * | 2013-07-25 | 2015-02-11 | 联想(北京)有限公司 | Image storage method and device |
CN104349099B (en) * | 2013-07-25 | 2018-04-27 | 联想(北京)有限公司 | The method and apparatus for storing image |
CN103826160A (en) * | 2014-01-09 | 2014-05-28 | 广州三星通信技术研究有限公司 | Method and device for obtaining video information, and method and device for playing video |
CN104951479A (en) * | 2014-03-31 | 2015-09-30 | 小米科技有限责任公司 | Video content detecting method and device |
CN104049758A (en) * | 2014-06-24 | 2014-09-17 | 联想(北京)有限公司 | Information processing method and electronic device |
CN104049758B (en) * | 2014-06-24 | 2017-12-29 | 联想(北京)有限公司 | The method and electronic equipment of a kind of information processing |
CN105979140A (en) * | 2016-06-03 | 2016-09-28 | 北京奇虎科技有限公司 | Image generation device and image generation method |
CN106791535A (en) * | 2016-11-28 | 2017-05-31 | 合网络技术(北京)有限公司 | Video recording method and device |
CN106878809A (en) * | 2017-02-15 | 2017-06-20 | 腾讯科技(深圳)有限公司 | A kind of video collection method, player method, device, terminal and system |
CN106878809B (en) * | 2017-02-15 | 2019-06-28 | 腾讯科技(深圳)有限公司 | A kind of video collection method, playback method, device, terminal and system |
CN107277393A (en) * | 2017-07-11 | 2017-10-20 | 河海大学常州校区 | A kind of method for generating sportsman by the video of terminal |
CN107370887B (en) * | 2017-08-30 | 2020-03-10 | 维沃移动通信有限公司 | Expression generation method and mobile terminal |
CN107370887A (en) * | 2017-08-30 | 2017-11-21 | 维沃移动通信有限公司 | A kind of expression generation method and mobile terminal |
CN107948732A (en) * | 2017-12-04 | 2018-04-20 | 京东方科技集团股份有限公司 | Playback method, video play device and the system of video |
US10560749B2 (en) | 2017-12-04 | 2020-02-11 | Boe Technology Group Co., Ltd. | Video playing method, video playing device, video playing system, apparatus and computer-readable storage medium |
CN107968961A (en) * | 2017-12-05 | 2018-04-27 | 吕庆祥 | Method and device based on feeling curve editing video |
CN109963180A (en) * | 2017-12-25 | 2019-07-02 | 上海全土豆文化传播有限公司 | Video information statistical method and device |
CN109076263A (en) * | 2017-12-29 | 2018-12-21 | 深圳市大疆创新科技有限公司 | Video data handling procedure, equipment, system and storage medium |
WO2019127332A1 (en) * | 2017-12-29 | 2019-07-04 | 深圳市大疆创新科技有限公司 | Video data processing method, device and system, and storage medium |
US11270736B2 (en) | 2017-12-29 | 2022-03-08 | SZ DJI Technology Co., Ltd. | Video data processing method, device, system, and storage medium |
WO2020143156A1 (en) * | 2019-01-11 | 2020-07-16 | 平安科技(深圳)有限公司 | Hotspot video annotation processing method and apparatus, computer device and storage medium |
CN109982109B (en) * | 2019-04-03 | 2021-08-03 | 睿魔智能科技(深圳)有限公司 | Short video generation method and device, server and storage medium |
CN109982109A (en) * | 2019-04-03 | 2019-07-05 | 睿魔智能科技(深圳)有限公司 | The generation method and device of short-sighted frequency, server and storage medium |
CN112258778A (en) * | 2020-10-12 | 2021-01-22 | 南京云思创智信息科技有限公司 | Micro-expression real-time alarm video recording method |
CN112258778B (en) * | 2020-10-12 | 2022-09-06 | 南京云思创智信息科技有限公司 | Micro-expression real-time alarm video recording method |
CN112235637A (en) * | 2020-10-15 | 2021-01-15 | 惠州Tcl移动通信有限公司 | GIF generation method, device, storage medium and mobile terminal |
CN112380954A (en) * | 2020-11-10 | 2021-02-19 | 四川长虹电器股份有限公司 | Video classification intercepting system and method based on image recognition |
CN113965804A (en) * | 2021-09-28 | 2022-01-21 | 展讯半导体(南京)有限公司 | Video processing method and related product |
CN113965804B (en) * | 2021-09-28 | 2025-03-21 | 展讯半导体(南京)有限公司 | Video processing method and related products |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102693739A (en) | Method and system for video clip generation | |
US9269399B2 (en) | Capture, syncing and playback of audio data and image data | |
US20070113182A1 (en) | Replay of media stream from a prior change location | |
JP5135024B2 (en) | Apparatus, method, and program for notifying content scene appearance | |
CN106998494B (en) | Video recording method and related device | |
EP3105928A1 (en) | Delivering modified content meeting time constraints | |
TW200402654A (en) | A system and method for providing user control over repeating objects embedded in a stream | |
CN112118395B (en) | Video processing method, terminal and computer readable storage medium | |
WO2004021221A2 (en) | System and method for indexing a video sequence | |
CN108174133B (en) | Court trial video display method and device, electronic equipment and storage medium | |
JP2012074773A (en) | Editing device, control method, and program | |
US11503375B2 (en) | Systems and methods for displaying subjects of a video portion of content | |
WO2013045123A1 (en) | Personalised augmented a/v stream creation | |
CN104918101B (en) | A kind of method, playback terminal and the system of automatic recording program | |
CN108153882A (en) | A kind of data processing method and device | |
CN108090424A (en) | Online teaching investigation method and equipment | |
JP2009111938A (en) | Device, method and program for editing information, and record medium recorded with the program thereon | |
JP2015061194A (en) | Information processing unit, information processing method, and program | |
CN105893816A (en) | Video playing control method and apparatus | |
US20250014341A1 (en) | Systems and methods for displaying subjects of an audio portion of content and displaying autocomplete suggestions for a search related to a subject of the audio portion | |
CN101835018A (en) | Video recording and playback device | |
WO2017026387A1 (en) | Video-processing device, video-processing method, and recording medium | |
KR20160016746A (en) | Determining start and end points of a video clip based on a single click | |
JP2008017235A (en) | Apparatus and method for adding information of degree of importance based on video operation history | |
JP2009239322A (en) | Video reproducing device, video reproducing method, and video reproducing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20120926 |