CN110809166A - Video data processing method and device and electronic equipment - Google Patents
- Publication number
- CN110809166A (application number CN201911052152.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- image frame
- attribute
- preset
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure provides a video data processing method, apparatus, and electronic device, wherein the method includes: acquiring video information of a target video, the video information including a video duration; determining whether the video duration is less than a preset duration threshold; in response to determining that the video duration is less than the preset duration threshold, detecting whether the visual attributes of each image frame in the video satisfy preset visual attribute conditions; and determining an image frame that satisfies the preset visual attribute conditions as a target image corresponding to the video, and replacing the target video with the target image. A video whose duration is too short can thus be converted into a picture, reducing the storage space the video occupies on the terminal and improving the user experience.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video data processing method and apparatus, and an electronic device.
Background
With advances in electronic technology, smartphones and tablet computers have become increasingly widespread. Users can shoot video anytime and anywhere, recording scenes they wish to keep.
When the shooting time is very short, however, the resulting video may contain a large number of image frames with similar content, and this redundant content occupies a large amount of storage space.
Disclosure of Invention
The present disclosure aims to provide a video data processing method, apparatus, and electronic device that solve the above technical problem in the prior art: a video whose recorded duration is too short is automatically converted into a picture, reducing the storage space occupied by repeated image frames in such a video.
In order to achieve the above object, the present disclosure provides a method for processing video data to optimize processing of a video and improve user experience.
In a first aspect, an embodiment of the present disclosure provides a video data processing method, including: acquiring video information of a target video, the video information including a video duration; determining whether the video duration is less than a preset duration threshold; in response to determining that the video duration is less than the preset duration threshold, detecting whether the visual attributes of each image frame in the video satisfy preset visual attribute conditions; and determining an image frame that satisfies the preset visual attribute conditions as a target image corresponding to the video, and replacing the target video with the target image.
In a second aspect, an embodiment of the present disclosure provides a video data processing apparatus, including: an acquisition module configured to acquire video information of a target video, the video information including a video duration; a determining module configured to determine whether the video duration is less than a preset duration threshold; a detection module configured to, in response to determining that the video duration is less than the preset duration threshold, detect whether the visual attributes of each image frame in the video satisfy preset visual attribute conditions; and a replacing module configured to determine an image frame that satisfies the preset visual attribute conditions as a target image corresponding to the video and to replace the target video with the target image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the video data processing method of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing executable instructions that, when executed on a processor, implement the video data processing method of the first aspect.
The video data processing method, apparatus, and electronic device provided by the present disclosure can convert a video that is too short into a picture, alleviating the large memory footprint caused by a video containing many repeated frames.
Drawings
The accompanying drawings are included to provide a better understanding of the present disclosure, and are not to be construed as limiting the present disclosure in any way, wherein:
FIG. 1 is a flow diagram of one embodiment of a video data processing method of the present disclosure;
FIG. 2 is a flow diagram of another embodiment of a video data processing method of the present disclosure;
fig. 3 is a schematic structural diagram of a video data processing apparatus of the present disclosure;
fig. 4 is a schematic diagram of a system architecture to which the video data processing method of the present disclosure is applied;
FIG. 5 is a schematic block diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or combined. Moreover, method embodiments may perform additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect. The term 'include' and its variants, as used herein, are open-ended, i.e. 'include, but are not limited to'. The term 'based on' is 'based, at least in part, on'. The term 'one embodiment' means 'at least one embodiment'; the term 'another embodiment' means 'at least one further embodiment'; the term 'some embodiments' means 'at least some embodiments'. Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", etc. mentioned in the disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the sequence or interdependence of the functions executed by the devices, modules or units.
It should be noted that references to 'a' or 'an' in the present disclosure are illustrative rather than limiting, and those skilled in the art will understand that they mean 'one or more' unless the context clearly indicates otherwise.
It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
Referring to fig. 1, a flow chart of one embodiment of a video data processing method of the present disclosure is shown. As shown in fig. 1, the video data processing method includes the following steps:
step 101, obtaining video information of a target video, wherein the video information comprises video duration.
The target video in this embodiment may be a normal video, a short video, a small video, or the like.
The target video may be a video stored locally or a video stored in another electronic device.
The video information of the video may include a name, a video capturing location, video source information, a video duration, and the like.
The video duration here may be determined from the total number of image frames in the video and the number of screen refreshes per second of the device, also referred to as the frame rate. It should be noted that determining a video duration from the frame count and frame rate is widely used prior art and is not described again here.
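As a sketch, the relationship just described can be expressed in a few lines; the function name is illustrative and not taken from the patent:

```python
def video_duration_seconds(total_frames: int, frame_rate: float) -> float:
    """Duration = total number of image frames / screen refreshes per second.

    A minimal sketch of the relationship described above; the patent does
    not prescribe any particular implementation.
    """
    if frame_rate <= 0:
        raise ValueError("frame rate must be positive")
    return total_frames / frame_rate
```

For instance, a video of 30 frames at 30 fps has a duration of 1 second, matching the worked example later in the description.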
Step 102, determining whether the video duration is less than a preset duration threshold.
the preset time threshold may be any value greater than zero, such as 0.5s, 0.8s, and the like. The preset time threshold may be set according to a specific application scenario, and is not specifically limited herein.
Step 103, in response to determining that the video duration is less than the preset duration threshold, detecting whether the visual attribute of each image frame included in the video meets a preset visual attribute condition.
In the present embodiment, the visual attributes may include brightness, hue, and saturation.
For each visual attribute, a visual attribute condition corresponding to the visual attribute may be preset.
For each image frame, it may then be determined whether each visual attribute of that frame satisfies its corresponding visual attribute condition.
And 104, determining the image frames meeting the preset visual attribute condition as target images corresponding to the video, and replacing the target video with the target images.
In some application scenarios, if only one image frame among the plurality of image frames of the target video has visual attributes that each satisfy the corresponding preset visual attribute condition, that image frame may be determined as the target image.
In other application scenarios, the visual attributes of at least two image frames of the target video may each satisfy the corresponding preset visual attribute conditions, so that the target video has at least two candidate target images. In that case, the first candidate target image, in the order of the image frame sequence of the target video, is selected as the target image; here the visual attributes of each candidate target image each satisfy the corresponding preset visual attribute conditions.
In the video data processing method provided by this embodiment, video information of a target video is first obtained, the video information including a video duration; it is then determined whether the video duration is less than a preset duration threshold; in response to determining that the video duration is less than the preset duration threshold, it is detected whether the visual attributes of each image frame in the video satisfy preset visual attribute conditions; and finally, an image frame that satisfies the preset visual attribute conditions is determined as the target image corresponding to the video, and the target video is replaced with the target image. This addresses the problem that a video of very short duration contains many similar pictures that occupy excessive storage space and cannot provide a good user experience.
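The four steps just summarized can be sketched as a single function. Here `satisfies_condition` stands in for the preset visual attribute conditions and is an assumed predicate, not part of the patent:

```python
def process_video(frames, duration_s, threshold_s, satisfies_condition):
    """Replace a too-short video with its first qualifying image frame.

    Returns the target image if the video should be replaced, or None if
    the video is kept (duration not below the threshold, or no frame
    satisfies the visual attribute conditions).
    """
    if duration_s >= threshold_s:
        return None  # video is long enough; keep it as a video
    for frame in frames:  # frame-sequence order, as in the embodiments
        if satisfies_condition(frame):
            return frame
    return None
```

A caller would then delete the original video file and store only the returned image, which is how the description achieves the storage saving.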
With continuing reference to fig. 2, fig. 2 is a flow chart illustrating another embodiment of the video data processing method of the present disclosure.
In step 201, shooting end information of the target video is received.
The shooting end information may be generated when, during shooting, the user clicks an end button, exits the video shooting interface, or saves the video to the terminal storage device. After the shooting end information of the target video is received, step 202 may be entered.
Step 202, in response to receiving the shooting end information, obtaining video information of the target video, wherein the video information includes a video duration.
The target video in this embodiment may be a normal video, a short video, a small video, or the like. The target video may be a video stored locally or a video stored in another electronic device. The video information of the video may include a video name, a video capturing location, video source information, a video duration, and the like.
The video duration is determined by dividing the total number of frames of image frames included in the video by the number of screen refreshes per second (frame rate) of the device, for example, if the total number of frames of a video is 4343 and the frame rate is 20fps, the video duration is 4343/20 ≈ 217 s.
The video of the present embodiment may be a video generated after the user completes shooting.
For example, when a user shooting a video clicks to finish, shooting ends and the video is stored in the terminal's storage space. The terminal device generates shooting end information after executing the user's finish-and-save instruction. On receiving the shooting end information of the target video, the terminal device acquires the video's total frame count (30) and frame rate (30 fps) and calculates the video duration as 1 second. Once the video duration has been calculated in response to receiving the shooting end information, the method proceeds to step 203.
Step 203, determining whether the video duration is less than the preset duration threshold.
the preset time threshold of the present embodiment may be any value greater than zero. E.g., 0.5s, 0.8s, etc. The preset duration threshold may be set according to a specific application scenario, which is not described herein.
If the video duration is less than the preset duration threshold, go to step 204.
If the video duration is determined to be not less than the preset duration threshold, the video does not meet the duration requirement of this method, and the video data processing operation exits.
In this embodiment, suppose the user finishes the shooting operation and the video duration is calculated from the total frame count and the frame rate to be 0.5 s. If the preset duration threshold is 0.6 s, the video duration is less than the threshold and the method proceeds to step 204; if the preset duration threshold is 0.4 s, the video duration is greater than the threshold and the video data processing operation exits.
Step 204, in response to determining that the video duration is smaller than the preset duration threshold, for each image frame included in the target video, obtaining an attribute value corresponding to each visual attribute of the image frame.
It is then detected whether the visual attributes of each image frame in the video satisfy the preset visual attribute conditions.
For each visual attribute, the attribute value of the image frame is determined from the attribute values of that visual attribute at each pixel of the image frame. The visual attributes include brightness, hue, and saturation.
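One plausible reading of "determined based on the attribute value of the visual attribute corresponding to each pixel" is an average over pixels. The patent does not fix the aggregation rule, so the arithmetic mean below is an assumption:

```python
def frame_attribute_values(pixels_hsv):
    """Aggregate per-pixel (hue, saturation, brightness) values into
    per-frame attribute values by taking the arithmetic mean of each
    channel over all pixels (an assumed aggregation rule)."""
    hues, sats, brights = zip(*pixels_hsv)
    n = len(hues)
    return {
        "hue": sum(hues) / n,
        "saturation": sum(sats) / n,
        "brightness": sum(brights) / n,
    }
```

The resulting per-frame values can then be compared against the preset visual attribute conditions described below.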
For example, suppose the duration of a video shot by the user (0.4 s) is less than the preset duration threshold (0.5 s). The attribute values corresponding to the visual attributes of each image frame of the video are then acquired. If the preset attribute value intervals for brightness, hue, and saturation are [58%, 80%], [40%, 60%], and [0%, 45%] respectively, and exactly one frame has a brightness attribute value of 58%, a hue attribute value of 45%, and a saturation attribute value of 30%, then the visual attributes of that frame match the preset visual attribute conditions, and step 205 is performed.
Step 205, determining the image frame meeting the preset visual attribute condition as a target image corresponding to the video, and replacing the target video with the target image.
In some application scenarios, if the image frames of a video include at least two candidate target images, the target image is determined from the at least two candidates according to their order in the image frame sequence of the video, where the visual attributes of each candidate target image each satisfy the corresponding preset visual attribute conditions.
The attribute values corresponding to the visual attributes of each image frame of the video are acquired. Taking preset value ranges of [58%, 80%], [40%, 60%], and [0%, 45%] for brightness, hue, and saturation respectively as an example: if two frames have brightness attribute values of 58% and 57%, hue attribute values of 45%, and saturation attribute values of 45%, and both frames meet the preset attribute values, then the first qualifying frame, in the order of the image frame sequence of the target video, is selected as the target image.
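The interval check and first-in-sequence selection described above might be sketched as follows; the intervals are the example values from the description, expressed as fractions, and the function names are illustrative:

```python
EXAMPLE_RANGES = {  # example intervals from the description, as fractions
    "brightness": (0.58, 0.80),
    "hue": (0.40, 0.60),
    "saturation": (0.00, 0.45),
}

def meets_conditions(attrs, ranges=EXAMPLE_RANGES):
    """True if every visual attribute falls within its preset interval."""
    return all(lo <= attrs[name] <= hi for name, (lo, hi) in ranges.items())

def first_qualifying_index(frames_attrs, ranges=EXAMPLE_RANGES):
    """Index of the first frame (in sequence order) meeting all the
    conditions, or None if no frame qualifies."""
    for i, attrs in enumerate(frames_attrs):
        if meets_conditions(attrs, ranges):
            return i
    return None
```

Iterating in sequence order means ties between multiple qualifying frames resolve to the earliest one, matching the selection rule above.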
If, for every image frame of the target video, at least one of the brightness, hue, and saturation fails to satisfy the preset visual attribute conditions, the image frame whose brightness is closest to the preset brightness attribute threshold is selected as the target image; if at least two image frames are equally close to the preset brightness threshold, the first of them, in the order of the image frame sequence of the target video, is selected as the target image.
That is, if no image frame of the video falls within the preset attribute value ranges, the image frame closest to the preset brightness attribute value is selected as the target image. For example, if the visual attributes of every acquired image frame fail to satisfy the preset visual attribute values, and the frame whose brightness is closest to the preset value has a brightness attribute value of 57%, a hue attribute value of 70%, and a saturation attribute value of 50%, that frame is selected as the target image.
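The fallback rule, picking the frame whose brightness is closest to the preset brightness value and breaking ties by sequence order, might be sketched as below. The default target of 0.58 is an assumption taken from the example interval's lower bound; the patent only speaks of "the preset brightness attribute threshold":

```python
def fallback_index(frames_attrs, brightness_target=0.58):
    """Index of the frame whose brightness is closest to the target.

    Python's min() returns the first minimum it encounters, so ties go to
    the earlier frame, matching the first-in-sequence tie-break described
    above.
    """
    return min(
        range(len(frames_attrs)),
        key=lambda i: abs(frames_attrs[i]["brightness"] - brightness_target),
    )
```

Note that only brightness drives the fallback; the other attributes of the chosen frame may lie outside their preset intervals, as in the 70%-hue example above.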
In this embodiment, for a video the user has just shot, the terminal determines, upon receiving the user's save instruction, whether video data processing is required according to the video duration. This saves storage space and improves device performance.
Referring further to fig. 3, as an implementation of the methods shown in the above-mentioned figures, a schematic structural diagram of an embodiment of a video data processing apparatus according to the present disclosure is provided, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus may be specifically applied to various electronic devices.
Referring to fig. 3, the video data processing apparatus includes an acquisition module 301, a determination module 302, a detection module 303, and a replacement module 304. The determination module 302 receives the video information acquired by the acquisition module 301; the detection module 303 responds to the indication, sent by the determination module 302, of whether the acquired video information meets the preset threshold condition; and the replacement module 304 receives the image frames, detected by the detection module 303, that meet the preset visual attribute conditions.
In this embodiment, specific processing of the obtaining module 301, the determining module 302, the detecting module 303, and the replacing module 304 in the video data processing apparatus and the brought technical effects thereof may refer to related descriptions of step 101, step 102, step 103, and step 104 in the corresponding embodiment of fig. 1, which are not described herein again.
In some optional implementations, the acquisition module is further configured to receive shooting end information of the target video from the user before acquiring the video information of the target video, and to acquire the video information of the target video in response to receiving the shooting end information.
In some optional implementations, the detection module is further configured to, in response to the indication of whether the acquired video information meets the preset threshold condition, detect each image frame of the video if the condition is met and acquire the attribute values corresponding to each visual attribute of the image frame; and, for each image frame, to determine the attribute values of the visual attributes of the frame based on the attribute values of the visual attributes of the frame's pixels.
In some optional implementations, the replacement module is further configured to, when the image frames of a video include at least two candidate target images, determine the target image from the at least two candidates according to their order in the image frame sequence of the video, where the visual attributes of each candidate target image each satisfy the corresponding preset conditions.
In some alternative implementations, the video data processing apparatus further includes a receiving module (not shown in the figures). The receiving module is used for: before the acquisition module acquires video information of a target video, receiving a shooting end instruction of the target video; and the obtaining module 301 is further configured to: and responding to the received shooting end instruction, and acquiring video information of the target video.
Referring to fig. 4, a system architecture diagram of a video data processing method applied thereto is shown.
As shown in fig. 4, the system architecture may include terminal devices 401, 402, 403, a network 404, and a server 405. The network 404 serves as a medium for providing communication links between the terminal devices 401, 402, 403 and the server 405. Network 404 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few. The terminal devices and servers described above may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ('LAN'), a wide area network ('WAN'), an internet network (e.g., the internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The terminal devices 401, 402, 403 may interact with a server 405 over a network 404 to receive or send messages or the like. Various clients, such as video processing type applications, may be installed on the terminal devices 401, 402, 403.
The terminal devices 401, 402, and 403 may be hardware or software. When the terminal devices 401, 402, and 403 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP4(Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, desktop computers, and the like. When the terminal devices 401, 402, and 403 are software, they may be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 405 may be a server that can provide various services, for example, receives a request sent by the terminal apparatus 401, 402, 403, performs analysis processing on video data, and sends the analysis processing result (e.g., video data corresponding to the above-described acquisition request) to the terminal apparatus 401, 402, 403.
It should be noted that the video data processing method provided by the embodiment of the present disclosure is generally executed by the terminal devices 401, 402, and 403, and accordingly, the video data processing apparatus is generally disposed in the terminal devices 401, 402, and 403.
It should be understood that the number of terminal devices, networks, and servers in fig. 4 is merely illustrative. There may be any number of terminal devices, networks, and servers, as the implementation requires.
Referring now to FIG. 5, a schematic diagram of an electronic device (e.g., the server of FIG. 4) suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic apparatus 500. The processing device 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, devices may be connected to the I/O interface 505, including input devices 506 such as touch screens, touch pads, keyboards, mice, cameras, microphones, accelerometers, gyroscopes, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the computing device to communicate with other devices, either wirelessly or by wire, to exchange data. While fig. 5 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, installed from the storage means 508, or installed from the ROM 502. When executed by the processing device 501, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted over any appropriate medium, including but not limited to: electrical wires, cables, optical fibers, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring video information of a target video, wherein the video information comprises video duration; determining whether the video time length is smaller than a preset time length threshold value; in response to the fact that the video duration is smaller than a preset duration threshold value, detecting whether the visual attribute of each image frame included in the video meets a preset visual attribute condition; and determining the image frames meeting the preset visual attribute condition as target images corresponding to the video, and replacing the target video with the target images.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the 'C' programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or operations, or by combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a module does not, in some cases, constitute a limitation of the module itself; for example, the acquiring module may also be described as a 'module for acquiring video information of a target video'.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description presents only the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (10)
1. A method of processing video data, comprising:
acquiring video information of a target video, wherein the video information comprises video duration;
determining whether the video duration is less than a preset duration threshold;
in response to determining that the video duration is less than the preset duration threshold, detecting whether a visual attribute of each image frame included in the video meets a preset visual attribute condition;
and determining the image frames meeting the preset visual attribute condition as target images corresponding to the video, and replacing the target video with the target images.
2. The method of claim 1, wherein the visual attributes comprise brightness, hue, and saturation; and the method further comprises:
for each image frame, acquiring attribute values corresponding to each visual attribute of the image frame; and
the detecting whether the visual attribute of each image frame included in the video meets a preset visual attribute condition includes:
for each image frame, respectively determining whether the attribute value of each visual attribute corresponding to the image frame meets a corresponding preset attribute condition.
3. The method according to claim 2, wherein for each image frame, obtaining attribute values corresponding to respective visual attributes of the image frame comprises:
acquiring attribute values of visual attributes corresponding to pixels of the image frame;
for each visual attribute, determining the attribute value of the visual attribute corresponding to the image frame based on the attribute value of the visual attribute corresponding to each pixel of the image frame.
4. The method according to claim 1, wherein the determining the image frames satisfying the preset visual attribute condition as the target images corresponding to the video and replacing the target video with the target images comprises:
the method comprises the steps that a plurality of image frames corresponding to a video comprise at least two candidate target images, and the target images are determined from the at least two candidate target images according to the sequence of the candidate target images in an image frame sequence corresponding to the video, wherein the sequence of the candidate target images corresponds to the image frame sequence; wherein
And the visual attributes of each candidate target image respectively meet the corresponding preset conditions.
5. The method according to any one of claims 1-4, wherein before the obtaining the video information of the target video, the method further comprises:
receiving a shooting end instruction of the target video; and
the acquiring of the video information of the target video includes:
and responding to the received shooting end instruction, and acquiring video information of the target video.
6. A video data processing apparatus, comprising:
an acquiring module, configured to acquire video information of a target video, wherein the video information comprises a video duration;
a determining module, configured to determine whether the video duration is less than a preset duration threshold;
a detecting module, configured to detect, in response to determining that the video duration is less than the preset duration threshold, whether a visual attribute of each image frame included in the video meets a preset visual attribute condition; and
a replacing module, configured to determine an image frame meeting the preset visual attribute condition as a target image corresponding to the video, and replace the target video with the target image.
7. The apparatus of claim 6, wherein the visual attributes comprise brightness, hue, and saturation; and the apparatus further comprises an attribute value acquiring module, configured to:
acquire, for each image frame, attribute values of the visual attributes corresponding to each pixel of the image frame; and
the detecting module is further configured to: for each visual attribute, determine the attribute value of the visual attribute corresponding to the image frame based on the attribute values of the visual attribute corresponding to the pixels of the image frame.
8. The apparatus of claim 6, wherein the replacing module is further configured to:
the method comprises the steps that a plurality of image frames corresponding to a video comprise at least two candidate target images, and the target images are determined from the at least two candidate target images according to the sequence of the candidate target images in an image frame sequence corresponding to the video, wherein the sequence of the candidate target images corresponds to the image frame sequence; wherein
And the visual attributes of each candidate target image respectively meet the corresponding preset conditions.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
10. A non-transitory computer readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the method of any one of claims 1-5.
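Claims 2, 3, and 7 describe deriving a frame-level attribute value for each visual attribute from the per-pixel values, and checking each attribute against its own preset condition. The sketch below illustrates one way to do this in Python; note that the claims do not specify the aggregation function, so averaging per-pixel values is an assumed choice, and the HSV conversion via the standard `colorsys` module (hue and saturation in [0, 1]) is likewise an illustrative representation rather than the disclosed one.

```python
import colorsys
from typing import List, Sequence, Tuple

Pixel = Tuple[int, int, int]  # 8-bit RGB

def frame_attributes(pixels: List[Pixel]) -> Tuple[float, float, float]:
    """Frame-level (hue, saturation, brightness), each the mean of the
    per-pixel values (claim 3; mean aggregation is an assumption).
    Caveat: a naive mean ignores the circularity of hue, which a real
    implementation might handle with a circular mean."""
    n = len(pixels)
    h_sum = s_sum = v_sum = 0.0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        h_sum += h
        s_sum += s
        v_sum += v
    return (h_sum / n, s_sum / n, v_sum / n)

def attributes_ok(attrs: Sequence[float],
                  conditions: Sequence[Tuple[float, float]]) -> bool:
    """Check each frame-level attribute value against its own preset
    (lo, hi) range, i.e. the per-attribute conditions of claim 2.
    The ranges themselves are supplied by the caller as presets."""
    return all(lo <= value <= hi
               for value, (lo, hi) in zip(attrs, conditions))
```

For example, a frame that is half pure red and half pure blue averages to full saturation and full brightness, so it would pass any preset condition that merely excludes dark or washed-out frames.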
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911052152.5A CN110809166B (en) | 2019-10-31 | 2019-10-31 | Video data processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110809166A true CN110809166A (en) | 2020-02-18 |
CN110809166B CN110809166B (en) | 2022-02-11 |
Family
ID=69489779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911052152.5A Active CN110809166B (en) | 2019-10-31 | 2019-10-31 | Video data processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110809166B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104954718A (en) * | 2015-06-03 | 2015-09-30 | 惠州Tcl移动通信有限公司 | Mobile intelligent terminal and image recording method thereof |
CN105100893A (en) * | 2014-04-21 | 2015-11-25 | 联想(北京)有限公司 | Video sharing method and device |
US20160269774A1 (en) * | 2008-09-15 | 2016-09-15 | Entropic Communications, Llc | Systems and Methods for Providing Fast Video Channel Switching |
CN108062507A (en) * | 2016-11-08 | 2018-05-22 | 中兴通讯股份有限公司 | A kind of method for processing video frequency and device |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287169A (en) * | 2020-10-29 | 2021-01-29 | 字节跳动有限公司 | Data acquisition method, device and system, electronic equipment and storage medium |
CN112287169B (en) * | 2020-10-29 | 2024-04-26 | 字节跳动有限公司 | Data acquisition method, device and system, electronic equipment and storage medium |
CN117135444A (en) * | 2023-03-10 | 2023-11-28 | 荣耀终端有限公司 | Frame selection decision method and device based on reinforcement learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||