CN113761275A - Video preview moving picture generation method, device and equipment and readable storage medium - Google Patents
Info
- Publication number
- CN113761275A (application number CN202011295278.8A)
- Authority
- CN
- China
- Prior art keywords
- image frame
- image frames
- target image
- frames
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F16/738 — Information retrieval of video data; Querying; Presentation of query results
- G06F16/7837 — Retrieval characterised by using metadata automatically derived from the content, using objects detected or recognised in the video content
- G06F16/784 — Retrieval characterised by using metadata automatically derived from the content, using objects detected or recognised in the video content, the detected or recognised objects being people
- H04N5/2621 — Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
Abstract
The embodiments of the present disclosure provide a video preview moving picture generation method, apparatus, and device, and a readable storage medium. The method comprises the following steps: acquiring all image frames to be selected corresponding to a video to be processed; screening all the image frames to be selected to obtain a plurality of target image frames, wherein the similarity of any two target image frames is lower than a preset threshold value; and generating a preview moving picture according to the plurality of target image frames. Because the similarity between any two target image frames is lower than the preset threshold value, a preview moving picture generated from the target image frames can express the content of the video to be processed more accurately. This solves the technical problem that moving pictures produced by existing methods cannot capture changes in the video. Further, the quality of the generated preview moving picture is improved, and the user can understand the video content more intuitively.
Description
Technical Field
The embodiments of the present disclosure relate to the field of computer technology, and in particular to a method, apparatus, and device for generating a video preview moving picture, and a readable storage medium.
Background
Video content is used ever more widely. When a user faces a large amount of video content, a preview moving picture can be generated from each video and played, so that the user can quickly understand the video content from the preview moving picture alone.
To create a preview moving picture, the prior art generally extracts a plurality of image frames from a video file and generates the preview moving picture from those frames.
In implementing the present disclosure, the inventors found at least the following problem in the prior art: because the preview moving picture is created directly from the extracted image frames, its quality may be poor. For example, for a video in which part of the content is identical, a preview moving picture produced this way cannot reflect changes in the video content and contains a large number of identical image frames.
Disclosure of Invention
The embodiments of the present disclosure provide a video preview moving picture generation method, apparatus, and device, and a readable storage medium, to solve the technical problems that a preview moving picture generated by the existing generation method contains a large number of identical image frames and is of poor quality.
In a first aspect, an embodiment of the present disclosure provides a method for generating a video preview motion picture, including:
acquiring all image frames to be selected corresponding to a video to be processed;
screening all the image frames to be selected to obtain a plurality of target image frames, wherein the similarity of any two target image frames is lower than a preset threshold value;
and generating a preview moving picture according to the plurality of target image frames.
In a second aspect, an embodiment of the present disclosure provides a video preview motion picture generating apparatus, including:
the acquisition module is used for acquiring all image frames to be selected corresponding to the video to be processed;
the screening module is used for screening all the image frames to be selected to obtain a plurality of target image frames, wherein the similarity of any two target image frames is lower than a preset threshold value;
and the generating module is used for generating a preview moving picture according to the plurality of target image frames.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a memory and a processor;
the memory is used for storing instructions executable by the processor;
wherein the processor is configured to call program instructions in the memory to execute the video preview motion picture generating method according to the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium in which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, the video preview moving picture generation method according to the first aspect is implemented.
According to the video preview moving picture generation method, apparatus, device, and readable storage medium provided by the embodiments of the present disclosure, after all the image frames to be selected corresponding to the video to be processed are obtained, the image frames to be selected are first screened to obtain a plurality of target image frames. Because the similarity between any two target image frames is lower than the preset threshold value, a preview moving picture generated from the target image frames can express the content of the video to be processed more accurately. This solves the technical problem that moving pictures produced by existing methods cannot capture changes in the video. Further, the quality of the generated preview moving picture is improved, and the user can understand the video content more intuitively.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a schematic diagram of an application scenario upon which the present disclosure is based;
fig. 2 is a schematic flowchart of a method for generating a video preview motion picture according to a first embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a method for generating a video preview motion picture according to a second embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a method for generating a video preview motion picture according to a third embodiment of the present disclosure;
fig. 5 is a schematic flow chart of a method for generating a video preview motion picture according to a fourth embodiment of the present disclosure;
fig. 6 is a schematic diagram of target image frame selection provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating a selection of another target image frame according to an embodiment of the disclosure;
fig. 8 is a schematic structural diagram of a video preview motion picture generating apparatus according to a fifth embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an electronic device provided for a sixth embodiment of the present disclosure.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
In order to solve the technical problems that a preview moving picture generated by the existing generation method contains a large number of identical image frames and is of poor quality, the present disclosure provides a video preview moving picture generation method, apparatus, device, and readable storage medium.
It should be noted that the present disclosure provides a method, an apparatus, a device and a readable storage medium for generating a video preview moving picture, which can be applied in various scenes for generating a preview moving picture according to a video.
To let a user understand video content more intuitively, a preview moving picture can be generated from the video content, so that the user can view the preview moving picture directly instead of clicking into the video. Existing moving picture generation methods generally extract frames from the video at random and generate a preview moving picture from the extracted image frames. However, a moving picture produced this way may be of poor quality. When a video contains a large amount of identical content, the generated moving picture often contains many identical image frames. Conversely, when the video content changes quickly, the method may fail to capture enough of the changed image frames, so the preview moving picture cannot accurately express the content of the video.
To improve the quality of the generated preview moving picture and avoid too many useless image frames in it, the inventors found through research that the image frames corresponding to the video can be screened in advance so that the screened image frames contain no identical frames, and the preview moving picture can then be created from the screened image frames.
The inventors further found through research that, after all image frames to be selected corresponding to the video to be processed are obtained, the image frames to be selected can first be screened to obtain a plurality of target image frames. Because the similarity between any two target image frames is lower than the preset threshold value, a preview moving picture generated from the target image frames can express the content of the video to be processed more accurately.
Fig. 1 is a schematic diagram of an application scenario on which the present disclosure is based. As shown in fig. 1, the application scenario of the present disclosure includes at least a memory 1, a processor 2, and a display screen 3. Specifically, the processor 2 may obtain a video to be processed from the memory 1 and determine all image frames to be selected from the video to be processed. All the image frames to be selected are screened to obtain a plurality of target image frames, wherein the similarity of any two target image frames is lower than a preset threshold value. A preview moving picture is generated from the plurality of target image frames and transmitted to the display screen 3 for display.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of a method for generating a video preview moving picture according to a first embodiment of the present disclosure. As shown in fig. 2, the method includes the following steps.
The execution subject of this embodiment is a video preview moving picture generating apparatus, which may be coupled to a server or to a terminal device. When the apparatus is coupled to a server, it can perform the preview moving picture generation operation according to a generation instruction sent by a user's terminal device. When the apparatus is coupled to a terminal device, it can generate a preview moving picture for a video stored in the terminal device, either in response to a user's trigger operation or automatically.
Step 101, acquiring all image frames to be selected corresponding to a video to be processed.
In this embodiment, in order to generate a preview animation corresponding to a video to be processed, a plurality of image frames to be selected corresponding to the video to be processed need to be acquired first. The multiple image frames to be selected may be all image frames corresponding to the video to be processed, or image frames obtained after all image frames are primarily screened.
When the video preview motion picture generating device is coupled to the server, the video preview motion picture generating device may acquire a video to be processed from a preset database, and further acquire all image frames to be selected corresponding to the video to be processed. When the video preview motion picture generating device is coupled to the terminal device, the video to be processed may be obtained from a preset memory, and further, all the image frames to be selected corresponding to the video to be processed may be obtained.
Step 102, screening all the image frames to be selected to obtain a plurality of target image frames, wherein the similarity of any two target image frames is lower than a preset threshold value.
In the present embodiment, in order to improve the quality of the generated preview moving picture and avoid the existence of too many useless image frames in the moving picture, all the candidate image frames corresponding to the video to be processed may be subjected to a filtering operation to obtain a plurality of target image frames.
It should be noted that the similarity between any two target image frames is lower than a preset threshold.
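The patent does not specify how the similarity between two image frames is to be computed. As one minimal, hypothetical illustration (not part of the disclosure), a normalized histogram intersection over per-bin pixel counts could play this role; real systems might instead use perceptual hashes or feature embeddings:

```python
def histogram_similarity(h1, h2):
    # Normalized histogram intersection: 1.0 for identical histograms,
    # 0.0 for histograms with no overlapping mass.
    total = sum(h1)
    return sum(min(a, b) for a, b in zip(h1, h2)) / total

# Two frames whose pixel-intensity histograms mostly overlap score high.
print(histogram_similarity([6, 2, 0, 0], [5, 2, 1, 0]))  # 7/8 = 0.875
```

Whichever metric is chosen, the screening step only requires that it be comparable against the preset threshold (e.g. 90%).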
Step 103, generating a preview moving picture according to the plurality of target image frames.
In the present embodiment, a preview moving picture can be generated from the plurality of target image frames. Specifically, any moving picture generation method may be adopted to generate the preview moving picture, which is not limited in this disclosure.
In the video preview moving picture generation method provided by this embodiment, after all the image frames to be selected corresponding to the video to be processed are obtained, the image frames to be selected are first screened to obtain a plurality of target image frames. Because the similarity between any two target image frames is lower than the preset threshold value, a preview moving picture generated from the target image frames can express the content of the video to be processed more accurately. This solves the technical problem that moving pictures produced by existing methods cannot capture changes in the video. Further, the quality of the generated preview moving picture is improved, and the user can understand the video content more intuitively.
Fig. 3 is a schematic flow chart of a method for generating a video preview motion picture according to a second embodiment of the present disclosure, where on the basis of the first embodiment, step 101 specifically includes:
step 201, extracting all image frames corresponding to the video to be processed, and arranging the image frames according to the time sequence of the image frames in the video to be processed.
Step 202, sequentially adopting a plurality of preset identification rules to perform an identification operation on each image frame to obtain an identification result.
Step 203, determining weight information corresponding to each image frame according to the identification result.
Step 204, taking a plurality of image frames whose weight information is not zero as the image frames to be selected.
In this embodiment, the image frames to be selected are obtained by preliminarily screening all image frames corresponding to the video to be processed. Specifically, all image frames corresponding to the video to be processed may be extracted and arranged according to their time order in the video, so as to ensure that the preview moving picture subsequently generated from the image frames to be selected can accurately express the content of the video to be processed.
For each image frame, a plurality of preset identification rules can be sequentially adopted to perform identification operation on the image frame, so as to obtain an identification result. The identification rule is specifically used to identify whether a valid identification target is included in the image frame. Each recognition rule is associated with a different weight, so that after the recognition result is obtained, the weight information corresponding to each image frame can be determined according to the recognition result.
To ensure that the subsequently generated preview moving picture includes valid identification targets, after the weight information corresponding to each image frame is determined, the image frames whose weight is not zero can be taken from all the image frames as the image frames to be selected.
Further, on the basis of the first embodiment, the step 202 specifically includes:
sequentially adopting a plurality of preset recognition models to perform recognition operation on the image frame so as to determine whether the image frame comprises a preset recognition object or not and obtain a recognition result;
correspondingly, step 203 specifically includes:
and for each recognition model, if the recognition result is that the image frame comprises a recognition object corresponding to the recognition model, updating the weight of the image frame according to the weight corresponding to the recognition model.
In this embodiment, the preset recognition rule may be a plurality of preset recognition models. The identification model includes, but is not limited to, a character identification model, a building identification model, an animal identification model, and the like.
Correspondingly, after a plurality of image frames are acquired, a plurality of recognition models can be sequentially adopted to perform recognition operation on the image frames, whether preset recognition objects are included in the image frames is determined, and a recognition result is obtained.
Since each recognition rule has a preset weight value, after the recognition result is obtained, the weight information corresponding to each image frame can be determined according to the recognition result. Specifically, for each recognition model, if the recognition result is that the image frame includes a recognition object corresponding to the recognition model, the weight of the image frame is updated according to the weight corresponding to the recognition model. The updating operation may specifically be to add the weight corresponding to the recognition model to the weight corresponding to the image frame.
For example, in practical applications, suppose the image frame corresponds to a weight of 2, the person recognition model corresponds to a weight of 5, the building recognition model corresponds to a weight of 1, and the animal recognition model corresponds to a weight of 3. If the recognition result indicates that the image frame includes a person and a building, the weight of the image frame is updated according to the weight 5 of the person recognition model and the weight 1 of the building recognition model. That is, the updated weight of the image frame is 2 + 5 + 1 = 8.
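The weight-update rule can be sketched as follows. The detector interface and the set-membership "models" below are assumptions for illustration only (the patent does not define a model API); the example reproduces the arithmetic from the text, 2 + 5 + 1 = 8:

```python
def frame_weight(frame, models, base_weight=2):
    # Add each model's weight when that model detects its object in the frame.
    weight = base_weight
    for detect, model_weight in models:
        if detect(frame):
            weight += model_weight
    return weight

# Hypothetical detectors standing in for person/building/animal recognition
# models; a frame is represented as the set of objects it contains.
models = [
    (lambda f: "person" in f, 5),
    (lambda f: "building" in f, 1),
    (lambda f: "animal" in f, 3),
]
print(frame_weight({"person", "building"}, models))  # 2 + 5 + 1 = 8
```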
In the video preview moving picture generation method provided by this embodiment, before the preview moving picture is generated, the preset identification rules are first applied to the image frames corresponding to the video to be processed. This ensures that the image frames to be selected used for generating the preview moving picture contain valid identification targets, and therefore that the generated preview moving picture expresses the content of the video to be processed more effectively.
Fig. 4 is a schematic flow chart of a method for generating a video preview motion picture according to a third embodiment of the present disclosure, where on the basis of any of the foregoing embodiments, as shown in fig. 4, step 102 specifically includes:
and if not, obtaining a plurality of target image frames.
In this embodiment, in order to improve the quality of the generated preview moving picture and avoid the existence of too many useless image frames in the moving picture, all the image frames to be selected corresponding to the video to be processed may be subjected to a filtering operation to obtain a plurality of target image frames.
Specifically, according to the time order of the image frames to be selected in the video to be processed, the first image frame to be selected may be taken as the current image frame, and the similarity between the current image frame and each subsequent image frame to be selected may be calculated in turn. If the similarity between a subsequent image frame to be selected and the current image frame exceeds the preset threshold value, that image frame is probably identical to the current image frame. The preset threshold may be 90%, or a value set by the user according to actual needs, which the present disclosure does not limit. In that case, to improve the quality of the preview moving picture, the weight of that image frame to be selected may be set to 0, so that no further data processing is performed on it. This reduces the amount of calculation of the video preview moving picture generating apparatus while improving the quality of the preview moving picture. The current image frame is then taken as a target image frame.
After the screening based on the current image frame is completed, it can be judged whether any image frame to be selected with a weight other than 0 follows the current image frame. If so, that image frame to be selected is taken as the new current image frame, and the step of sequentially calculating the similarity between the current image frame and each subsequent image frame to be selected is executed again, until no image frame to be selected follows the current image frame, at which point a plurality of target image frames are obtained. Otherwise, if no image frame to be selected with a weight other than 0 remains, the screening of the image frames to be selected is completed and a plurality of target image frames are obtained.
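The forward screening pass described above can be sketched as follows. This is an illustrative reading of the procedure, with frames reduced to scalars and a made-up similarity function so the example stays self-contained:

```python
def screen_candidates(frames, weights, similarity, threshold=0.9):
    # Walk candidates in time order; each surviving frame zeroes the weight
    # of every later candidate that is too similar to it, then becomes a
    # target frame. Zero-weight frames are skipped entirely.
    weights = list(weights)
    targets = []
    for i, current in enumerate(frames):
        if weights[i] == 0:
            continue
        for j in range(i + 1, len(frames)):
            if weights[j] != 0 and similarity(current, frames[j]) > threshold:
                weights[j] = 0  # near-duplicate: exclude from later processing
        targets.append(current)
    return targets

# Toy frames as scalars, similarity = 1 - |a - b|; 0.05 and 0.52 are
# near-duplicates of 0.0 and 0.5 and get filtered out.
frames = [0.00, 0.05, 0.50, 0.52, 1.00]
targets = screen_candidates(frames, [1] * len(frames),
                            lambda a, b: 1 - abs(a - b))
print(targets)  # [0.0, 0.5, 1.0]
```

By construction, every retained target was checked against all later survivors, so any two targets are below the threshold in similarity.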
In the video preview moving picture generation method provided by this embodiment, a plurality of target image frames are obtained by screening all the image frames to be selected corresponding to the video to be processed. Because the similarity between any two target image frames is lower than the preset threshold value, a preview moving picture generated from the plurality of target image frames can express the content of the video to be processed more accurately. This solves the technical problem that moving pictures produced by existing methods cannot capture changes in the video.
Fig. 5 is a schematic flow chart of a method for generating a video preview motion picture according to a fourth embodiment of the present disclosure, where on the basis of any of the foregoing embodiments, as shown in fig. 5, step 103 specifically includes:
step 401, obtaining a preset number of target image frames from the multiple target image frames.
Step 402, generating a preview moving picture according to the preset number of target image frames.
In this embodiment, the user can customize the number of image frames in the preview moving picture according to individual needs. Specifically, a preset number of target image frames may be acquired from the plurality of target image frames, and a preview moving picture may then be generated from the preset number of target image frames.
Further, on the basis of any of the above embodiments, step 401 specifically includes:
and determining the target image frame with the largest weight in the plurality of target image frames.
And judging whether the number of the target image frame with the maximum weight and all image frames behind the target image frame with the maximum weight meet the preset number or not.
And if so, acquiring a preset number of target image frames backwards from the target image frame with the maximum weight.
If not, acquiring all target image frames behind the target image frame with the maximum weight, determining the difference value between the number of all target image frames behind the target image frame with the maximum weight and the preset number, acquiring part of target image frames before the target image frame with the maximum weight according to the difference value, and acquiring the preset number of target image frames.
Fig. 6 is a schematic diagram of target image frame selection provided by an embodiment of the present disclosure, and fig. 7 is a schematic diagram of another target image frame selection provided by an embodiment of the present disclosure. As shown in fig. 6, the target image frame with the largest weight among the plurality of target image frames may be determined first. Taking that frame as the starting point, it is determined whether the number of frames from it to the end of the sequence meets the preset number. If so, the preset number of target image frames are acquired starting from the target image frame with the largest weight. If not, as shown in fig. 7, all target image frames after the target image frame with the largest weight are acquired, the difference between their number and the preset number is determined, and that many target image frames before the target image frame with the largest weight are acquired in addition, so as to obtain the preset number of target image frames.
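A sketch of this selection logic, with hypothetical frame labels standing in for real image frames (the function and label names are illustrative, not from the patent):

```python
def select_preset(targets, weights, count):
    # Start at the max-weight target frame; if fewer than `count` frames
    # remain from there to the end, pad with the frames immediately
    # before it (the fig. 6 vs. fig. 7 cases).
    peak = max(range(len(targets)), key=lambda i: weights[i])
    if len(targets) - peak >= count:
        return targets[peak:peak + count]
    deficit = count - (len(targets) - peak)
    return targets[peak - deficit:]

frames = ["f0", "f1", "f2", "f3", "f4", "f5", "f6"]
# Enough frames after the peak (fig. 6 case):
print(select_preset(frames, [1, 9, 1, 1, 1, 1, 1], 3))  # ['f1', 'f2', 'f3']
# Too few after the peak, so pad backwards (fig. 7 case):
print(select_preset(frames, [1, 1, 1, 1, 1, 9, 1], 4))  # ['f3', 'f4', 'f5', 'f6']
```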
Further, on the basis of any of the above embodiments, step 402 specifically includes:
reversing the order of the preset number of target image frames to obtain the preset number of target image frames in reverse order;
appending the reverse-ordered target image frames to the preset number of target image frames to obtain an image frame list;
and generating a preview moving picture according to the image frames in the image frame list.
In this embodiment, after the preset number of target image frames are acquired, their order may be reversed to obtain the preset number of target image frames in reverse order.
The reverse-ordered target image frames are then appended to the preset number of target image frames to obtain an image frame list, so that a preview moving picture can subsequently be generated from the image frames in the image frame list.
In the method for generating a video preview moving picture provided in this embodiment, the preset number of target image frames and their reverse-ordered copy are concatenated into an image frame list, and the preview moving picture is generated from the image frames in that list. The resulting preview moving picture plays forward and then returns to its initial state before replaying, which further improves the quality of the generated preview moving picture.
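The forward-then-reverse ("ping-pong") frame list described above can be sketched as follows. This is a minimal illustration; the GIF-encoder call mentioned in the comment (`imageio.mimsave`) is an example assumption, not something named in the disclosure:

```python
def build_frame_list(frames):
    """Concatenate the selected frames with their reversed copy so that
    the resulting preview plays forward and then back to the start."""
    reversed_frames = list(reversed(frames))  # reverse-ordered copy
    return frames + reversed_frames           # image frame list

# The list can then be handed to any GIF encoder, e.g. (hypothetically):
# imageio.mimsave("preview.gif", build_frame_list(frames), duration=0.1)
```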
Fig. 8 is a schematic structural diagram of a video preview moving picture generating apparatus according to a fifth embodiment of the present disclosure. As shown in fig. 8, the apparatus includes an obtaining module 51, a screening module 52 and a generating module 53. The obtaining module 51 is configured to obtain all image frames to be selected corresponding to a video to be processed. The screening module 52 is configured to perform a screening operation on all the image frames to be selected to obtain a plurality of target image frames, where the similarity between any two target image frames is lower than a preset threshold. The generating module 53 is configured to generate a preview moving picture according to the plurality of target image frames.
In the video preview moving picture generating apparatus provided in this embodiment, after all the image frames to be selected corresponding to the video to be processed are obtained, a screening operation is first performed on them to obtain a plurality of target image frames. Because the similarity between any two target image frames is lower than the preset threshold, a preview moving picture produced from the target image frames expresses the content of the video to be processed more accurately. This solves the technical problem that moving pictures produced by existing methods cannot capture the changes in a video, improves the quality of the generated preview moving picture, and lets the user grasp the video content more intuitively.
Further, on the basis of the fifth embodiment, the obtaining module 51 is configured to: extract all image frames corresponding to the video to be processed and arrange them according to their time order in the video to be processed; for each image frame, sequentially apply a plurality of preset identification rules to perform an identification operation on the image frame and obtain an identification result; determine the weight information corresponding to each image frame according to the identification result; and take the image frames whose weight information is not zero as the image frames to be selected.
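The weighting-and-filtering pipeline above can be sketched as follows (illustrative only; `frame_weight` stands in for whatever weighting function the preset identification rules produce, and is not defined by the disclosure):

```python
def candidate_frames(frames, frame_weight):
    """Assign each extracted frame a weight and keep only the frames
    whose weight is nonzero as the candidate (to-be-selected) frames.

    `frames` are assumed to already be in temporal order;
    `frame_weight` is a caller-supplied weighting function.
    """
    weighted = [(frame, frame_weight(frame)) for frame in frames]
    # Frames recognized by no rule carry zero weight and are dropped.
    return [(frame, w) for frame, w in weighted if w != 0]
```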
Further, on the basis of the fifth embodiment, the obtaining module 51 is configured to: sequentially apply a plurality of preset recognition models to perform a recognition operation on the image frame, so as to determine whether the image frame includes a preset recognition object and obtain a recognition result. Determining the weight information corresponding to each image frame according to the recognition result includes: for each recognition model, if the recognition result is that the image frame includes the recognition object corresponding to that recognition model, updating the weight of the image frame according to the weight corresponding to that recognition model.
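The per-model weight update can be illustrated as below. This is a minimal sketch: the specific models, their weights, and the predicate representation are all hypothetical, not taken from the disclosure:

```python
# Hypothetical recognition models and their associated weights.
# Each "model" here is simply a predicate over a frame dictionary.
RECOGNITION_MODELS = [
    (lambda frame: frame.get("has_face", False), 3.0),    # face detector
    (lambda frame: frame.get("has_text", False), 1.0),    # text detector
    (lambda frame: frame.get("has_motion", False), 2.0),  # motion detector
]

def frame_weight(frame):
    """Sum the weights of every recognition model whose recognition
    object appears in the frame; a frame recognized by no model keeps
    weight 0 and is later excluded from the frames to be selected."""
    weight = 0.0
    for recognizes, model_weight in RECOGNITION_MODELS:
        if recognizes(frame):
            weight += model_weight
    return weight
```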
Further, on the basis of any of the above embodiments, the screening module 52 is configured to: take the first image frame to be selected among all the image frames to be selected, in the time order of the image frames in the video to be processed, as the current image frame, and sequentially calculate the similarity between the current image frame and each image frame to be selected after it. If the similarity between the current image frame and any image frame to be selected is greater than the preset threshold, the weight of that image frame to be selected is updated to zero, and the current image frame is taken as a target image frame. It is then determined whether, among the image frames to be selected whose current weight is not zero, any image frame to be selected exists after the current image frame. If so, that image frame to be selected is taken as the new current image frame and the step of sequentially calculating similarities is executed again, until no image frame to be selected remains after the current image frame, thereby obtaining the plurality of target image frames.
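The screening pass described above can be sketched as a greedy deduplication loop (illustrative only; `similarity` is a placeholder for whatever frame-similarity measure an implementation uses, and this sketch treats every surviving frame as a target frame):

```python
def screen_frames(candidates, similarity, threshold):
    """Greedy screening: each kept frame zeroes the weight of every
    later candidate that is too similar to it, so any two surviving
    target frames have similarity below the threshold."""
    weights = [1.0] * len(candidates)  # nonzero weight = still a candidate
    targets = []
    for i in range(len(candidates)):
        if weights[i] > 0:
            targets.append(candidates[i])
            # Suppress later candidates that exceed the similarity threshold.
            for j in range(i + 1, len(candidates)):
                if weights[j] > 0 and similarity(candidates[i], candidates[j]) > threshold:
                    weights[j] = 0.0
    return targets
```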
Further, on the basis of any of the above embodiments, the generating module 53 is configured to: acquire a preset number of target image frames from the plurality of target image frames, and generate a preview moving picture according to the preset number of target image frames.
Further, on the basis of any of the above embodiments, the generating module 53 is configured to: determine the target image frame with the largest weight among the plurality of target image frames; determine whether that frame together with all target image frames after it amounts to the preset number; if so, acquire the preset number of target image frames backward starting from the target image frame with the largest weight; if not, acquire all target image frames after the target image frame with the largest weight, determine the difference between their number and the preset number, and acquire, according to the difference, part of the target image frames before the target image frame with the largest weight, so as to obtain the preset number of target image frames.
Further, on the basis of any of the above embodiments, the generating module 53 is configured to: reverse the order of the preset number of target image frames to obtain the preset number of target image frames in reverse order; append the reverse-ordered target image frames to the preset number of target image frames to obtain an image frame list; and generate a preview moving picture according to the image frames in the image frame list.
Fig. 9 is a schematic structural diagram of an electronic device according to a sixth embodiment of the present disclosure, as shown in fig. 9, the electronic device may be a mobile phone, a computer, a messaging device, a game console, a tablet device, a medical device, a personal digital assistant, or the like.
Apparatus 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the apparatus 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front-facing camera and/or a rear-facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, audio component 610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the apparatus 600. For example, the sensor component 614 may detect the open/closed state of the device 600 and the relative positioning of components, such as the display and keypad of the device 600; it may also detect a change in position of the device 600 or of a component of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in temperature of the device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the apparatus 600 and other devices in a wired or wireless manner. The apparatus 600 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the apparatus 600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium is also provided, wherein instructions in the storage medium, when executed by a processor of a terminal device, enable the terminal device to perform the video preview moving picture generating method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (16)
1. A method for generating a video preview moving picture, comprising:
acquiring all image frames to be selected corresponding to a video to be processed;
screening all the image frames to be selected to obtain a plurality of target image frames, wherein the similarity of any two target image frames is lower than a preset threshold value;
and generating a preview moving picture according to the plurality of target image frames.
2. The method according to claim 1, wherein the obtaining all the candidate image frames corresponding to the video to be processed comprises:
extracting all image frames corresponding to the video to be processed, and arranging the image frames according to the time sequence of the image frames in the video to be processed;
for each image frame, sequentially adopting a plurality of preset identification rules to carry out identification operation on the image frame to obtain an identification result;
determining weight information corresponding to each image frame according to the identification result;
and taking a plurality of image frames with weight information not being zero as the image frames to be selected.
3. The method according to claim 2, wherein the sequentially performing recognition operations on the image frames by using a plurality of preset recognition rules to obtain recognition results comprises:
sequentially adopting a plurality of preset recognition models to perform recognition operation on the image frame so as to determine whether the image frame comprises a preset recognition object or not and obtain a recognition result;
the determining the weight information corresponding to each image frame according to the identification result comprises the following steps:
and for each recognition model, if the recognition result is that the image frame comprises a recognition object corresponding to the recognition model, updating the weight of the image frame according to the weight corresponding to the recognition model.
4. The method according to any one of claims 1 to 3, wherein the performing a screening operation on all the candidate image frames to obtain a plurality of target image frames comprises:
acquiring a first image frame to be selected from all image frames to be selected as a current image frame according to the time sequence of the image frames to be selected in a video to be processed, and sequentially calculating the similarity between the current image frame and each image frame to be selected behind the current image frame;
if the similarity between the current image frame and any image frame to be selected is greater than a preset threshold value, updating the weight of that image frame to be selected to zero, and taking the current image frame as a target image frame;
judging whether an image frame to be selected exists after the current image frame among the image frames to be selected whose current weight is not zero;
if so, taking the image frame to be selected after the current image frame as the current image frame, and executing the step of sequentially calculating the similarity between the current image frame and each image frame to be selected after the current image frame, until no image frame to be selected remains after the current image frame, thereby obtaining the plurality of target image frames.
5. The method according to any of claims 1-3, wherein said generating a preview motion picture from said plurality of target image frames comprises:
acquiring a preset number of target image frames from the plurality of target image frames;
and generating a preview moving picture according to the preset number of target image frames.
6. The method according to claim 5, wherein said acquiring a preset number of target image frames in said plurality of target image frames comprises:
determining a target image frame with the largest weight in the plurality of target image frames;
judging whether the number of the target image frame with the maximum weight together with all target image frames after it meets the preset number;
if yes, starting from the target image frame with the maximum weight, and backwards acquiring a preset number of target image frames;
if not, acquiring all target image frames after the target image frame with the maximum weight, determining the difference between their number and the preset number, and acquiring, according to the difference, part of the target image frames before the target image frame with the maximum weight, so as to obtain the preset number of target image frames.
7. The method according to claim 5, wherein the generating a preview motion picture according to the preset number of target image frames comprises:
reversing the order of the preset number of target image frames to obtain the preset number of target image frames in reverse order;
appending the reverse-ordered target image frames to the preset number of target image frames to obtain an image frame list;
and generating a preview moving picture according to the image frames in the image frame list.
8. A video preview moving picture generating apparatus, comprising:
the acquisition module is used for acquiring all image frames to be selected corresponding to the video to be processed;
the screening module is used for screening all the image frames to be selected to obtain a plurality of target image frames, wherein the similarity of any two target image frames is lower than a preset threshold value;
and the generating module is used for generating a preview moving picture according to the plurality of target image frames.
9. The apparatus of claim 8, wherein the obtaining module is configured to:
extracting all image frames corresponding to the video to be processed, and arranging the image frames according to the time sequence of the image frames in the video to be processed;
for each image frame, sequentially adopting a plurality of preset identification rules to carry out identification operation on the image frame to obtain an identification result;
determining weight information corresponding to each image frame according to the identification result;
and taking a plurality of image frames with weight information not being zero as the image frames to be selected.
10. The apparatus of claim 9, wherein the obtaining module is configured to:
sequentially adopting a plurality of preset recognition models to perform recognition operation on the image frame so as to determine whether the image frame comprises a preset recognition object or not and obtain a recognition result;
the determining the weight information corresponding to each image frame according to the identification result comprises the following steps:
and for each recognition model, if the recognition result is that the image frame comprises a recognition object corresponding to the recognition model, updating the weight of the image frame according to the weight corresponding to the recognition model.
11. The apparatus of any one of claims 8-10, wherein the screening module is configured to:
acquiring a first image frame to be selected from all image frames to be selected as a current image frame according to the time sequence of the image frames to be selected in a video to be processed, and sequentially calculating the similarity between the current image frame and each image frame to be selected behind the current image frame;
if the similarity between the current image frame and any image frame to be selected is greater than a preset threshold value, updating the weight of that image frame to be selected to zero, and taking the current image frame as a target image frame;
judging whether an image frame to be selected exists after the current image frame among the image frames to be selected whose current weight is not zero;
if so, taking the image frame to be selected after the current image frame as the current image frame, and executing the step of sequentially calculating the similarity between the current image frame and each image frame to be selected after the current image frame, until no image frame to be selected remains after the current image frame, thereby obtaining the plurality of target image frames.
12. The apparatus of any one of claims 8-10, wherein the generating module is configured to:
acquiring a preset number of target image frames from the plurality of target image frames;
and generating a preview moving picture according to the preset number of target image frames.
13. The apparatus of claim 12, wherein the generating module is configured to:
determining a target image frame with the largest weight in the plurality of target image frames;
judging whether the number of the target image frame with the maximum weight together with all target image frames after it meets the preset number;
if yes, starting from the target image frame with the maximum weight, and backwards acquiring a preset number of target image frames;
if not, acquiring all target image frames after the target image frame with the maximum weight, determining the difference between their number and the preset number, and acquiring, according to the difference, part of the target image frames before the target image frame with the maximum weight, so as to obtain the preset number of target image frames.
14. The apparatus of claim 12, wherein the generating module is configured to:
reversing the order of the preset number of target image frames to obtain the preset number of target image frames in reverse order;
appending the reverse-ordered target image frames to the preset number of target image frames to obtain an image frame list;
and generating a preview moving picture according to the image frames in the image frame list.
15. An electronic device, comprising: a memory and a processor;
the memory is configured to store instructions executable by the processor;
wherein the processor is configured to call the program instructions in the memory to execute the video preview moving picture generating method according to any one of claims 1-7.
16. A computer-readable storage medium having stored thereon computer-executable instructions for implementing the video preview motion picture generating method according to any one of claims 1 to 7 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011295278.8A CN113761275A (en) | 2020-11-18 | 2020-11-18 | Video preview moving picture generation method, device and equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113761275A true CN113761275A (en) | 2021-12-07 |
Family
ID=78786149
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011295278.8A Pending CN113761275A (en) | 2020-11-18 | 2020-11-18 | Video preview moving picture generation method, device and equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113761275A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160078297A1 (en) * | 2014-09-17 | 2016-03-17 | Xiaomi Inc. | Method and device for video browsing |
US20180089203A1 (en) * | 2016-09-23 | 2018-03-29 | Adobe Systems Incorporated | Providing relevant video scenes in response to a video search query |
CN110290320A (en) * | 2019-06-27 | 2019-09-27 | Oppo广东移动通信有限公司 | Video preview drawing generating method and device, electronic equipment, computer readable storage medium |
KR20190119229A (en) * | 2018-04-04 | 2019-10-22 | 한국과학기술연구원 | Method for generatinig video synopsis by identifying target object using plurality of image devices and system for performing the same |
WO2020052084A1 (en) * | 2018-09-13 | 2020-03-19 | 北京字节跳动网络技术有限公司 | Video cover selection method, device and computer-readable storage medium |
CN111182359A (en) * | 2019-12-30 | 2020-05-19 | 咪咕视讯科技有限公司 | Video preview method, video frame extraction method, video processing device and storage medium |
WO2020151300A1 (en) * | 2019-01-25 | 2020-07-30 | 平安科技(深圳)有限公司 | Deep residual network-based gender recognition method and apparatus, medium, and device |
2020-11-18: CN application CN202011295278.8A filed; published as CN113761275A (status: active, pending).
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114627036A (en) * | 2022-03-14 | 2022-06-14 | 北京有竹居网络技术有限公司 | Multimedia resource processing method and device, readable medium and electronic equipment |
CN114627036B (en) * | 2022-03-14 | 2023-10-27 | 北京有竹居网络技术有限公司 | Processing method and device of multimedia resources, readable medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106557768B (en) | Method and device for recognizing characters in picture | |
CN105843615B (en) | Notification message processing method and device | |
EP3125135A1 (en) | Picture processing method and device | |
US10509540B2 (en) | Method and device for displaying a message | |
CN108985176B (en) | Image generation method and device | |
EP3316527A1 (en) | Method and device for managing notification messages | |
CN105631803B (en) | The method and apparatus of filter processing | |
CN108038102B (en) | Method and device for recommending expression image, terminal and storage medium | |
CN106372204A (en) | Push message processing method and device | |
CN106534951B (en) | Video segmentation method and device | |
CN111523346B (en) | Image recognition method and device, electronic equipment and storage medium | |
CN107341509B (en) | Convolutional neural network training method and device and readable storage medium | |
CN108320208B (en) | Vehicle recommendation method and device | |
EP3040912A1 (en) | Method and device for classifying pictures | |
CN110796094A (en) | Control method and device based on image recognition, electronic equipment and storage medium | |
CN113676671B (en) | Video editing method, device, electronic equipment and storage medium | |
CN109685041B (en) | Image analysis method and device, electronic equipment and storage medium | |
CN106331328B (en) | Information prompting method and device | |
US20220222831A1 (en) | Method for processing images and electronic device therefor | |
CN106354504A (en) | Message display method and device thereof | |
CN105323152A (en) | Message processing method, device and equipment | |
CN104850643B (en) | Picture comparison method and device | |
CN113032627A (en) | Video classification method and device, storage medium and terminal equipment | |
CN112948704A (en) | Model training method and device for information recommendation, electronic equipment and medium | |
CN108984098B (en) | Information display control method and device based on social software |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||