CN112770167A - Video display method and device, intelligent display terminal and storage medium - Google Patents


Info

Publication number
CN112770167A
CN112770167A
Authority
CN
China
Prior art keywords
target video
frames
video
target
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011523205.XA
Other languages
Chinese (zh)
Inventor
查成明
王云华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN202011523205.XA
Publication of CN112770167A
Legal status: Pending

Classifications

    • H04N21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/485 — End-user interface for client configuration
    • H04N21/8547 — Content authoring involving timestamps for synchronizing content

Abstract

The invention discloses a video display method and device, an intelligent display terminal and a storage medium. The video display method comprises the following steps: screening a plurality of target video frames with target video frame characteristics from a video to be processed; synthesizing the plurality of target video frames into at least one target video data; dividing a display interface into at least one display area according to the at least one target video data, wherein the number of the display areas is greater than or equal to the number of the target video data; and playing the at least one target video data through the at least one display area. With this video display method, the time a user spends watching parts of a video they are not interested in is saved, the efficiency of information transmission during video playback is improved, and the user's viewing experience is further improved.

Description

Video display method and device, intelligent display terminal and storage medium
Technical Field
The invention relates to the technical field of video processing, in particular to a video display method and device, an intelligent display terminal and a storage medium.
Background
With the continuous development of the internet, video has become an important carrier through which users obtain information or meet entertainment needs, such as game teaching videos.
However, in the related art, a user is usually interested in, or needs, only a portion of the video frames of a whole video; that is, existing video playback transmits information inefficiently.
Disclosure of Invention
The invention mainly aims to provide a video display method, a video display device, an intelligent display terminal and a storage medium, and aims to solve the technical problem that the information transmission efficiency is low when a video is played in the prior art.
In order to achieve the above object, in a first aspect, the present invention provides a video display method, including the following steps:
screening a plurality of target video frames with target video frame characteristics from a video to be processed;
synthesizing the plurality of target video frames into at least one target video data;
dividing a display interface into at least one display area according to the at least one target video data, wherein the number of the display areas is greater than or equal to that of the target video data;
and playing the at least one target video data through at least one display area.
Optionally, the target video frame characteristics include a target picture pixel area;
the screening out of a plurality of target video frames with target video frame characteristics from a video to be processed comprises the following steps:
and screening a plurality of target video frames from the plurality of video frames based on the pixel area of the target picture, wherein each target video frame comprises a pixel area matched with the pixel area of the target picture.
Optionally, each target video data includes a plurality of consecutive target video frames;
compositing the plurality of target video frames into at least one target video data, comprising:
screening a plurality of associated video frames from the plurality of target video frames based on the pixel overlap ratio among the plurality of target video frames;
and synthesizing the plurality of associated video frames into the target video data based on the time stamps of the associated video frames.
Optionally, the multiple associated video frames all share the same intersection pixel region, and the ratio of the area of the intersection pixel region to the character picture pixel area is greater than a first preset threshold.
Optionally, the screening, based on a pixel coincidence ratio between the multiple target video frames, multiple associated video frames from the multiple target video frames includes:
obtaining a plurality of target video odd frames and a plurality of target video even frames based on the plurality of target video frames;
screening a plurality of associated video frames from a plurality of target video odd frames or a plurality of target video even frames according to the association degree of the target video frames;
synthesizing a plurality of associated video frames into target video data based on timestamps of the associated video frames, comprising:
and synthesizing a plurality of continuous target video frames corresponding to the plurality of associated video frames into target video data based on the associated video frame timestamps.
Optionally, before playing the at least one target video data through the at least one display area, the method further includes:
screening a plurality of first background frames with the same gray value from a plurality of target video frames;
screening out at least one second background frame from the plurality of first background frames, wherein the target picture pixel area ratio of the second background frame is greater than a second preset threshold;
and displaying the second background frame as a background picture of the display area.
Optionally, the screening out a plurality of first background frames with the same gray value from the plurality of target video frames includes:
obtaining a plurality of target video odd frames and a plurality of target video even frames based on the plurality of target video frames;
and screening a plurality of first background frames with the same gray value from a plurality of target video odd frames or a plurality of target video even frames.
Optionally, playing at least one target video data through at least one display area, including:
and playing the at least one target video data through the at least one display area, wherein each target video frame in each target video data is displayed for a preset duration.
In a second aspect, the present invention further provides a video display apparatus, including:
a screening module, configured to screen out a plurality of target video frames with target video frame characteristics from a video to be processed;
a synthesizing module for synthesizing the plurality of target video frames into at least one target video data;
the interface dividing module is used for dividing a display interface into at least one display area according to at least one target video data, wherein the number of the display areas is greater than or equal to that of the target video data;
and the playing module is used for playing the at least one target video data through the at least one display area.
In a third aspect, the present invention further provides an intelligent display terminal, where the intelligent display terminal includes: the system comprises a memory, a processor and a video presentation program stored on the memory and executable on the processor, the video presentation program being configured to implement the steps of the video presentation method.
In a fourth aspect, the present invention further provides a computer storage medium, where a video display program is stored on the computer storage medium, and the video display program, when executed by a processor, implements the steps of the video display method.
According to the technical scheme, all target video frames including target video frame characteristics in the video data are screened out and synthesized into the target video data, and the target video data are played on the display interface in a split screen mode, so that the time for a user to watch uninteresting parts is saved, the efficiency of information transmission during video playing is improved, and the viewing experience of the user is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of an intelligent display terminal according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a video display method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a video display method according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a video display method according to a third embodiment of the present invention;
FIG. 5 is a schematic view of a video display apparatus according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an intelligent display terminal in a hardware operating environment according to an embodiment of the present invention.
The intelligent display terminal may be a User Equipment (UE), such as smart glasses, a smart phone, a laptop, a tablet computer (PAD), a handheld device, a vehicle-mounted device, a wearable device, a computing device, or other display devices with a display screen.
In general, a smart display terminal includes: at least one processor 301, a memory 302, and a video presentation program stored on the memory and executable on the processor, the video presentation program being configured to implement the steps of the video display method described above.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. The processor 301 may further include an AI (Artificial Intelligence) processor for processing relevant video presentation operations such that the video presentation model may train learning autonomously, improving efficiency and accuracy.
Memory 302 may include one or more computer-readable storage media, which may be non-transitory. Memory 302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the video presentation method provided by the method embodiments herein.
In some embodiments, the terminal may further include: a communication interface 303 and at least one peripheral device. The processor 301, the memory 302 and the communication interface 303 may be connected by a bus or signal lines. Various peripheral devices may be connected to communication interface 303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, a display screen 305, and a power source 306.
The communication interface 303 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 301 and the memory 302. The communication interface 303 is used to receive video data through peripheral devices. In some embodiments, processor 301, memory 302, and communication interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the communication interface 303 may be implemented on a single chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 304 communicates with a communication network and other communication devices through electromagnetic signals, so as to obtain the movement tracks and other data of a plurality of mobile terminals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 305 is a touch display screen, the display screen 305 also has the ability to capture touch signals on or over the surface of the display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. At this point, the display screen 305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 305 may be one, the front panel of the electronic device; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the electronic device or in a folded design; in still other embodiments, the display screen 305 may be a flexible display screen disposed on a curved surface or a folded surface of the electronic device. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (liquid crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The power supply 306 is used for supplying power to each component in the intelligent display terminal. The power source 306 may be alternating current, direct current, disposable or rechargeable. When the power source 306 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
It will be understood by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the intelligent display terminal, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
An embodiment of the present invention provides a video display method. Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of the video display method according to the present invention.
In this embodiment, a video display method includes the following steps:
step S101, screening out a plurality of target video frames with target video frame characteristics from a video to be processed.
In this embodiment, the video to be processed is a complete video of a certain length, at least part of which has relevance and continuity in time or in scene. For example, the video data may be a complete game teaching video downloaded over a network, or a full replay video of a game match. The multiple video frames of the same video data each have a corresponding timestamp.
Specifically, target video frames with the same target video frame characteristics are screened out from a plurality of video frames of the video to be processed. The target video frame feature may be an image, a text, or audio data or barrage data corresponding to the video frame on the screen of the video frame.
For example, for a complete teaching video, all target video frames with "beat" text on the screen can be selected from all video frames in the complete teaching video.
Alternatively, as an option to this embodiment, the target video frame characteristic comprises a target picture pixel area. At this time, in step S101, a plurality of target video frames are screened out from a plurality of video frames based on the target picture pixel area, and each target video frame includes a pixel region matching the target picture pixel area.
When the target picture is known, a model of the target picture can be constructed, and a grid matrix can then be built over the model from the pixel dimensions in the horizontal and vertical directions; the area covered by the grid matrix is the target picture pixel area. The difference between the matched pixel area of each video frame and the target picture pixel area is then calculated, and the video frames for which this difference is close to 0 are the target video frames. That is, such a video frame contains the target picture, and the region of the target video frame identical to the target picture is its matched pixel region.
It is easy to understand that when the target picture is a game character, a model of the game character can be constructed. All video frames including the game character are then screened from the plurality of video frames based on the pixel area of the character model.
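As an illustrative sketch (not code from the patent), the area-matching test in step S101 can be modeled by treating each frame's candidate region and the target picture as sets of pixel coordinates; the function names (`screen_target_frames`, `matched_area`), the set-based frame model, and the `tolerance` parameter are assumptions for illustration, while a real implementation would run template matching on decoded frames.

```python
# Hypothetical sketch of step S101: keep frames whose matched pixel
# region area is (nearly) equal to the target picture pixel area.
# Frames are modeled as sets of (x, y) pixels — an assumed simplification.

def matched_area(frame_mask, target_mask):
    """Area of the frame region that coincides with the target picture."""
    return len(frame_mask & target_mask)

def screen_target_frames(frames, target_mask, tolerance=0):
    """Return indices of frames whose matched area differs from the
    target picture pixel area by at most `tolerance` (difference ~ 0)."""
    target_area = len(target_mask)
    return [
        i for i, mask in enumerate(frames)
        if abs(matched_area(mask, target_mask) - target_area) <= tolerance
    ]

target = {(0, 0), (0, 1), (1, 0), (1, 1)}        # 2x2 target picture, area 4
frames = [
    {(0, 0), (0, 1), (1, 0), (1, 1), (5, 5)},    # contains the full target
    {(0, 0), (9, 9)},                            # only partial overlap
    {(0, 0), (0, 1), (1, 0), (1, 1)},            # exactly the target
]
print(screen_target_frames(frames, target))      # -> [0, 2]
```

Frames 0 and 2 contain a region matching the full target picture area, so their area difference is 0; frame 1 overlaps only partially and is discarded.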
Step S102, synthesizing a plurality of target video frames into at least one target video data.
The screened target video frames can be synthesized into at least one piece of target video data with re-determined timestamps, so that it can be played as a new video.
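The timestamp re-determination can be sketched as follows; the `(timestamp, payload)` frame tuples and the fixed frame interval are assumed details, since the patent does not specify a data model.

```python
# Sketch of step S102 under an assumed data model: order the screened
# frames by their original timestamps, then assign fresh consecutive
# timestamps so they play back as one continuous new video.

def synthesize(target_frames, frame_interval=1.0 / 30):
    """target_frames: list of (original_timestamp, payload) tuples.
    Returns the same payloads re-timed from 0 at a fixed interval."""
    ordered = sorted(target_frames, key=lambda f: f[0])
    return [(i * frame_interval, payload) for i, (_, payload) in enumerate(ordered)]

clip = synthesize([(12.5, "B"), (3.0, "A"), (40.2, "C")], frame_interval=0.5)
print(clip)  # -> [(0.0, 'A'), (0.5, 'B'), (1.0, 'C')]
```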
Step S103, dividing a display interface into at least one display area according to at least one target video data, wherein the number of the display areas is larger than or equal to that of the target video data.
It is easy to understand that the split-screen playing is the prior art, and those skilled in the art know how to implement the split-screen playing, and the details are not described herein. The number of the display areas is greater than or equal to the number of the target video data, namely, part of the display areas can be used for playing the target video data, each display area plays one section of the target video data, and the other part of the display areas can normally and completely play the video to be processed. Therefore, a user can conveniently obtain a plurality of videos at the same time, and the screened and synthesized target video data segment can be compared with the complete video to be processed.
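A minimal sketch of such a layout split, under the illustrative assumptions of equal vertical regions and one extra region reserved for the full to-be-processed video (the function name and geometry are not the patent's):

```python
# Illustrative split-screen layout for step S103: divide a display of
# width x height into n equal vertical regions, where n is the number of
# target videos plus (optionally) one slot for the complete source video,
# so the number of display areas >= the number of target videos.

def split_display(width, height, num_target_videos, reserve_full_video=True):
    n = max(num_target_videos + (1 if reserve_full_video else 0), 1)
    region_w = width // n
    # each region as (x, y, width, height)
    return [(i * region_w, 0, region_w, height) for i in range(n)]

regions = split_display(1920, 1080, 2)
print(len(regions))   # -> 3
print(regions[0])     # -> (0, 0, 640, 1080)
```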
Step S104, playing at least one target video data through at least one display area.
In this embodiment, at least one target video data may be obtained from the same video data, and in order to achieve simultaneous playing of multiple target video data, improve playing efficiency, and bring a more efficient viewing experience to a user, a display interface of the intelligent display terminal may be divided into at least one display area, and then the target video data may be played respectively.
For example, a game teaching video may include characters operated by a plurality of players, but the user only needs to view the portions that include one particular character. All target video frames containing that specific character can be determined from the video frames of the game teaching video and combined into, say, two target videos. The display interface is then divided into 2 sub-interfaces, and the two target videos are played simultaneously through them, so that the user directly watches only the specific character they want to see, while the parts the user does not want to watch are not played.
It is readily understood that each user is only interested in a portion of a complete video. In the embodiment, all the target video frames including the target video frame characteristics in the video data are screened out and synthesized into the target video, and the target video is simultaneously played on the display interface, so that the time for watching uninteresting parts by a user is saved, the video playing efficiency is improved, and the watching experience of the user is improved.
Further, in step S104, each target video frame in each piece of target video data is displayed for a preset duration. For example, the display of each target video frame may be extended to 30 s, which lets the user observe details in the video frames more carefully and improves the information transmission efficiency of the video display, thereby improving the viewing experience.
For example, for a game teaching video, the user needs to carefully watch all the details of the picture of each frame for a long time due to the excessive amount of information conveyed in each frame. The present embodiment can satisfy the above requirement by displaying the target video frame for a preset duration.
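The preset-duration display can be sketched as a simple dwell loop; the injectable `show` and `sleep` callables are illustrative assumptions so the sketch stays testable without real rendering or waiting:

```python
# Illustrative dwell-time playback for step S104 (not the patent's code):
# each target video frame is shown for a fixed preset duration, e.g. 30 s,
# instead of at the native frame rate.
import time

def play_slow(frames, dwell_seconds=30, show=print, sleep=time.sleep):
    for frame in frames:
        show(frame)            # render the frame in its display area
        sleep(dwell_seconds)   # hold it for the preset duration
```

For example, `play_slow(clip_frames, dwell_seconds=30)` would hold each frame of a target clip on screen for 30 seconds.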
Based on the first embodiment of the video display method of the present invention, a second embodiment of the video display method of the present invention is proposed. Referring to fig. 3, fig. 3 is a schematic flow chart illustrating the second embodiment of the method of the present invention.
It will be readily appreciated that a complete video may contain a plurality of video frames bearing the target video frame characteristics scattered across its timeline, while what the user needs to watch is a continuous piece of video, rather than a logically discontinuous set of stitched-together target video frames.
Therefore, in this embodiment, the video display method includes the following steps:
step S201, a plurality of target video frames with target video frame characteristics are screened from the video to be processed.
Step S202, a plurality of related video frames are screened out from a plurality of target video frames based on the pixel overlapping degree among the plurality of target video frames.
In step S203, a plurality of associated video frames are combined into target video data based on the timestamps of the associated video frames.
In the above steps, the pixel overlapping degree is used to represent the degree that a plurality of video frames have similar or identical pictures. And screening a plurality of associated video frames from the plurality of target video frames through the pixel coincidence degree. I.e. the associated video frames that are logically related are filtered out by the pixel overlap ratio. Thus, scattered target video frames including the target video frame characteristics in the same original video are eliminated.
In order to improve the screening accuracy, as an option of this embodiment, the plurality of associated video frames all share the same intersection pixel region, and the ratio of the area of the intersection pixel region to the character picture pixel area is greater than a first preset threshold.
Specifically, the screened associated video frames are compared pairwise: besides the same target picture pixels, they share the same intersection pixel region, and on temporally continuous video frames the intersection pixel region and the target picture pixel region appear simultaneously, with the ratio of the area of the intersection pixel region to the character picture pixel area greater than the first preset threshold, for example greater than one half.
That is, in this embodiment, whether any two target video frames in the multiple target video frames are logically continuous is determined simultaneously according to the target picture pixel, the same intersection pixel region, and the ratio of the two, so that discontinuous target video frames are removed, and finally the synthesized target video data is logically continuous on the picture, thereby further meeting the viewing requirements of users and improving the viewing experience of the users.
For example, in a 20 minute game tutorial video, the target video frame features a particular virtual character that the player is operating. But in the game teaching video lasting 20 minutes, the specific virtual character appears for one frame at 2:30 and then continues to appear at 10:50 to 11: 30. As the virtual character continuously appears in the range of 10:50 to 11:30, namely pictures in all target video frames in the time period have a certain degree of correlation, a plurality of target video frames continuously appearing in the range of 10:50 to 11:30 can be judged to be related video frames according to other same background pictures in the video frame pictures, so that a plurality of related video frames continuously appearing in the range of 10:50 to 11:30 can be screened out, and one target video frame appearing in the range of 2:30 can be abandoned.
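The association test and the removal of isolated frames in the example above can be sketched as follows; the pixel-set masks, the neighbour-only comparison, and the function names are simplifying assumptions, not the patent's actual method:

```python
# Hypothetical sketch of the association screening in steps S202/S203.
def are_associated(mask_a, mask_b, target_area, first_threshold=0.5):
    """Two frames are associated when their shared (intersection) pixel
    region exceeds `first_threshold` of the target picture area."""
    return len(mask_a & mask_b) / target_area > first_threshold

def screen_associated(frames, target_area, first_threshold=0.5):
    """Keep frames associated with at least one temporal neighbour;
    isolated occurrences (like the lone frame at 2:30 in the example
    above) are dropped. `frames` is a time-ordered (timestamp, mask) list."""
    kept = []
    for i, (ts, mask) in enumerate(frames):
        neighbours = [frames[j][1] for j in (i - 1, i + 1) if 0 <= j < len(frames)]
        if any(are_associated(mask, other, target_area, first_threshold)
               for other in neighbours):
            kept.append((ts, mask))
    return kept

background = {(10, 10), (10, 11), (11, 10)}   # shared scenery, area 3
frames = [
    (150, {(0, 0)}),                          # isolated frame at 2:30
    (650, {(0, 0)} | background),             # continuous run starting
    (651, {(0, 1)} | background),             # at 10:50
    (652, {(1, 1)} | background),
]
kept = screen_associated(frames, target_area=4)   # ratio 3/4 > 0.5
print([ts for ts, _ in kept])                     # -> [650, 651, 652]
```

The isolated frame shares only 1/4 of the target area with its neighbour, below the one-half threshold, so it is discarded, while the continuous run shares the same background region and is kept.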
And step S204, dividing the display interface into at least one display area according to at least one target video data.
Step S205, playing at least one target video data through at least one display area.
Further, since the number of the target video frames is large, the calculation amount for sequentially judging the pixel overlapping degrees between the target video frames is large in the embodiment, so that the video processing efficiency is reduced, and the calculation time of the intelligent display terminal is also increased. Therefore, in this embodiment, in step S202, based on the pixel overlapping ratios among the multiple target video frames, the screening of multiple related video frames from the multiple target video frames includes:
(1) based on the target video frames, a plurality of target video odd frames and a plurality of target video even frames are obtained.
(2) And screening a plurality of associated video frames from a plurality of target video odd frames or a plurality of target video even frames according to the association degree of the target video frames.
Specifically, in this embodiment, the target video frames are divided into two sets, namely a target video odd-frame set composed of the plurality of target video odd frames and a target video even-frame set composed of the plurality of target video even frames. The plurality of associated video frames are then screened from one of these two sets.
That is, in the present embodiment, the screening of a plurality of associated video frames by odd or even frames significantly reduces the computational workload, and improves the screening efficiency.
At this time, step S203 synthesizes a plurality of related video frames into target video data based on the time stamps of the related video frames, adaptively changing to: and synthesizing a plurality of continuous target video frames corresponding to the plurality of associated video frames into target video data based on the associated video frame timestamps.
That is, since the associated video frames are both odd frames or even frames at this time, the initial frame and the end frame may be determined based on the associated video frame time stamp, then the corresponding initial target video frame and the end target video frame may be determined from the target video frame, and then a plurality of consecutive target video frames having time stamps between the initial target video frame and the end target video frame may be synthesized into the target video data.
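A sketch of this odd/even optimization under an assumed `(timestamp, payload)` frame model: association is tested on one parity class only, then every consecutive target frame between the first and last associated timestamps is synthesized (helper names are illustrative):

```python
# Hypothetical sketch of the odd/even screening optimization: halving the
# frames halves the pairwise comparisons, then the full consecutive run
# between the start and end associated frames is recovered for synthesis.

def odd_even_split(frames):
    """Split a frame list into 'odd frames' (1st, 3rd, ...) and
    'even frames' (2nd, 4th, ...)."""
    return frames[0::2], frames[1::2]

def consecutive_range(frames, associated):
    """Given associated frames (a subset, e.g. odd frames only), return
    every target frame whose timestamp lies between the first and last
    associated timestamps, so the synthesized clip stays continuous."""
    start = min(ts for ts, _ in associated)
    end = max(ts for ts, _ in associated)
    return [(ts, p) for ts, p in frames if start <= ts <= end]

frames = [(t, f"frame{t}") for t in range(10)]
odd, even = odd_even_split(frames)
assoc = odd[1:4]                       # suppose frames 2, 4, 6 were associated
clip = consecutive_range(frames, assoc)
print([ts for ts, _ in clip])          # -> [2, 3, 4, 5, 6]
```

Note that frames 3 and 5, skipped during screening, are restored in the synthesized clip because they fall between the initial and end associated frames.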
Based on the first and second embodiments of the video display method of the present invention described above, a third embodiment of the video display method of the present invention is proposed. Referring to fig. 4, fig. 4 is a schematic flow chart of the third embodiment of the video display method of the present invention.
In this embodiment, the video display method includes the following steps:
step S301, a plurality of target video frames with target video frame characteristics are screened from the video to be processed.
Step S302, a plurality of related video frames are screened out from a plurality of target video frames based on the pixel coincidence degree among the plurality of target video frames.
Step S303, synthesizing the plurality of associated video frames into target video data based on the timestamps of the associated video frames.
Step S304, dividing the display interface into at least one display area according to at least one target video data.
Step S305, a plurality of first background frames with the same gray value are screened from the plurality of target video frames.
Step S306, screening at least one target background frame from the plurality of first background frames, wherein the ratio of the pixel area of the target picture in each target background frame is greater than a preset threshold.
Step S307, displaying the target background frame as a background picture of the display area.
Specifically, in a game teaching video, particularly an MOBA teaching video, when a virtual character operated by one player participates in defeating a virtual character operated by another player, the interface brightness and the interface proportions of the game often change. For example, a virtual character darkens after being defeated, and the character may be displayed in an enlarged manner in the game. Video frames of this type can therefore be selected as the first background frames according to their gray values. Then, video frames in which the target picture occupies more than a preset threshold of the whole frame, such as one half, are screened out from the first background frames. Displaying such video frames as the background of the display interface can improve the viewing experience of the user.
For example, in a game teaching video, displaying a picture in which an achievement, such as a virtual character being defeated, is obtained as the background picture of the display interface can further improve the viewing experience of the user.
Step S308, at least one target video data is played through at least one display area.
Further, in order to reduce the amount of calculation and improve video processing efficiency, step S305 may likewise screen only the odd frames or only the even frames. In this embodiment, step S305 may include:
(1) Obtaining a plurality of target video odd frames and a plurality of target video even frames based on the plurality of target video frames.
(2) Screening a plurality of first background frames with the same gray value from the plurality of target video odd frames or the plurality of target video even frames.
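A hypothetical sketch of the background-frame screening in steps S305 and S306, optionally restricted to odd frames as in the variant just described. The use of the frame's mean value as its "gray value", the choice of the darkest group as reference, and the mask function are all illustrative assumptions.

```python
import numpy as np

def first_background_frames(frames, gray_tolerance=1.0, odd_only=True):
    """Group frames sharing (approximately) the same mean gray value."""
    # odd-frame screening halves the work, as in the odd/even variant above
    candidates = frames[0::2] if odd_only else frames
    grays = [float(np.mean(f)) for f in candidates]
    reference = min(grays)  # e.g. the darkened "defeat" scenes of the example
    return [f for f, g in zip(candidates, grays) if abs(g - reference) <= gray_tolerance]

def target_background_frames(frames, mask_fn, ratio_threshold=0.5):
    """Keep frames where the target picture covers more than the threshold."""
    selected = []
    for f in frames:
        mask = mask_fn(f)  # boolean mask of the target picture's pixels (assumed given)
        if float(np.mean(mask)) > ratio_threshold:
            selected.append(f)
    return selected
```

The `ratio_threshold=0.5` default mirrors the "one half" example above; `mask_fn` stands in for whatever target-picture detection the terminal actually uses.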
Referring to FIG. 5, FIG. 5 is a block diagram of a video display device according to a first embodiment of the present invention.
In this embodiment, a video display device includes:
a screening module 20, configured to screen a plurality of target video frames with target video frame characteristics from video data to be processed;
a synthesizing module 30, configured to synthesize the plurality of target video frames into at least one target video data;
the interface dividing module 40 is configured to divide the display interface into at least one display area according to at least one target video data, where the number of the display areas is greater than or equal to the number of the target video data;
a playing module 50, configured to play at least one target video data through at least one display area.
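The interface-dividing module can be illustrated with a simple near-square grid layout. The grid policy and the function below are assumptions for illustration; the patent only requires that there be at least as many display areas as target video data.

```python
import math

def divide_display(width, height, n_videos):
    """Return one (x, y, w, h) rectangle per target video, row-major."""
    cols = math.ceil(math.sqrt(n_videos))
    rows = math.ceil(n_videos / cols)   # guarantees rows * cols >= n_videos
    cell_w, cell_h = width // cols, height // rows
    return [((i % cols) * cell_w, (i // cols) * cell_h, cell_w, cell_h)
            for i in range(n_videos)]
```

For example, three target videos on a 1920x1080 interface yield a 2x2 grid with three 960x540 areas occupied, satisfying the "number of display areas greater than or equal to the number of target video data" condition.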
In this embodiment, all target video frames containing the target video frame characteristics are screened out of the video data and synthesized into the target video, and the target video is played on the display interface, so that the user no longer spends time watching uninteresting parts, video playing efficiency is improved, and the viewing experience of the user is improved.
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which a video presentation program is stored; when executed by a processor, the video presentation program implements the steps of the video presentation method described above, so the steps and their beneficial effects are not repeated here. For technical details not disclosed in the embodiments of the computer-readable storage medium, reference is made to the description of the method embodiments of the present application. By way of example, the program instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where units illustrated as separate components may or may not be physically separate, and components illustrated as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, or by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. In general, any function performed by a computer program can also be implemented by corresponding hardware, and the specific hardware structure implementing the same function may take various forms, such as an analog circuit, a digital circuit, or a dedicated circuit. For the present invention, however, implementation by a software program is the more preferable embodiment. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, or an optical disk, and including instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.

Claims (10)

1. A method for video presentation, comprising the steps of:
screening a plurality of target video frames with target video frame characteristics from a video to be processed;
synthesizing the plurality of target video frames into at least one target video data;
dividing a display interface into at least one display area according to the at least one target video data, wherein the number of the display areas is greater than or equal to that of the target video data;
and playing the at least one target video data through at least one display area.
2. The video presentation method of claim 1, wherein said target video frame characteristics comprise a target picture pixel area;
the screening a plurality of target video frames with target video frame characteristics from a video to be processed comprises:
and screening a plurality of target video frames from the plurality of video frames, wherein each target video frame comprises a pixel area matched with the pixel area of the target picture.
3. The video presentation method according to claim 2, wherein the target video data comprises a plurality of consecutive target video frames;
the synthesizing the plurality of target video frames into at least one target video data comprises:
screening a plurality of associated video frames from the plurality of target video frames based on pixel overlap ratios among the plurality of target video frames;
synthesizing the plurality of associated video frames into the target video data based on the timestamps of the associated video frames.
4. The video presentation method of claim 3, wherein the screening out a plurality of associated video frames from the plurality of target video frames based on pixel overlap ratios between the plurality of target video frames comprises:
obtaining a plurality of target video odd frames and a plurality of target video even frames based on the plurality of target video frames;
screening out a plurality of associated video frames from the plurality of target video odd frames or the plurality of target video even frames according to the association degree of the target video frames;
the synthesizing the plurality of associated video frames into the target video data based on the timestamps of the associated video frames comprises:
synthesizing a plurality of consecutive target video frames corresponding to the plurality of associated video frames into the target video data based on the associated video frame timestamps.
5. The video presentation method of claim 1, wherein before playing the at least one target video data through the at least one display area, the method further comprises:
screening a plurality of first background frames with the same gray value from the plurality of target video frames;
screening at least one target background frame from the plurality of first background frames, wherein the ratio of the pixel area of the target picture in the target background frame is greater than a second preset threshold;
and displaying the target background frame as a background picture of the display area.
6. The video presentation method of claim 5, wherein the screening a plurality of first background frames with the same gray value from the plurality of target video frames comprises:
obtaining a plurality of target video odd frames and a plurality of target video even frames based on the plurality of target video frames;
and screening a plurality of first background frames with the same gray value from a plurality of target video odd frames or a plurality of target video even frames.
7. The video presentation method according to any one of claims 1 to 6, wherein said playing said at least one target video data through at least one of said display areas comprises:
and playing the at least one target video data through at least one display area, wherein each target video frame in each target video data is displayed for a preset duration.
8. A video presentation apparatus, comprising:
a screening module, configured to screen a plurality of target video frames with target video frame characteristics from a video to be processed;
a synthesizing module for synthesizing the plurality of target video frames into at least one target video data;
the interface dividing module is used for dividing a display interface into at least one display area according to the at least one target video data, wherein the number of the display areas is greater than or equal to that of the target video data;
and the playing module is used for playing the at least one target video data through the at least one display area.
9. An intelligent display terminal, characterized in that the intelligent display terminal comprises: a memory, a processor, and a video presentation program stored on the memory and executable on the processor, the video presentation program being configured to implement the steps of the video presentation method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that the computer storage medium has stored thereon a video presentation program which, when executed by a processor, implements the steps of the video presentation method according to any one of claims 1 to 7.
CN202011523205.XA 2020-12-21 2020-12-21 Video display method and device, intelligent display terminal and storage medium Pending CN112770167A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011523205.XA CN112770167A (en) 2020-12-21 2020-12-21 Video display method and device, intelligent display terminal and storage medium

Publications (1)

Publication Number Publication Date
CN112770167A true CN112770167A (en) 2021-05-07

Family

ID=75695307


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025141A (en) * 2022-01-05 2022-02-08 凯新创达(深圳)科技发展有限公司 Picture adjusting and playing method and picture adjusting and playing device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150010286A1 (en) * 2003-04-25 2015-01-08 Gopro, Inc. Encoding and decoding selectively retrievable representations of video content
CN105472440A (en) * 2015-12-30 2016-04-06 深圳Tcl数字技术有限公司 Video program preview method and smart television
CN107870959A (en) * 2016-09-23 2018-04-03 奥多比公司 Inquired about in response to video search and associated video scene is provided
CN109600653A (en) * 2017-09-30 2019-04-09 杭州海康威视数字技术股份有限公司 Playback method, device and the electronic equipment of video file
CN110166827A (en) * 2018-11-27 2019-08-23 深圳市腾讯信息技术有限公司 Determination method, apparatus, storage medium and the electronic device of video clip
CN110248117A (en) * 2019-06-25 2019-09-17 新华智云科技有限公司 Video mosaic generation method, device, electronic equipment and storage medium
CN110996138A (en) * 2019-12-17 2020-04-10 腾讯科技(深圳)有限公司 Video annotation method, device and storage medium
CN111741325A (en) * 2020-06-05 2020-10-02 咪咕视讯科技有限公司 Video playing method and device, electronic equipment and computer readable storage medium
US20200329217A1 (en) * 2017-12-15 2020-10-15 Zhejiang Dahua Technology Co., Ltd. Methods and systems for generating video synopsis




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210507)