CN110856033B - Object display method, device, terminal and storage medium - Google Patents

Object display method, device, terminal and storage medium

Info

Publication number
CN110856033B
CN110856033B (Application No. CN201911234087.8A)
Authority
CN
China
Prior art keywords
video
processed
displayed
content
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911234087.8A
Other languages
Chinese (zh)
Other versions
CN110856033A (en)
Inventor
滕腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mihoyo Technology Shanghai Co ltd
Original Assignee
Mihoyo Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mihoyo Technology Shanghai Co ltd
Priority to CN201911234087.8A
Publication of CN110856033A
Application granted
Publication of CN110856033B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The embodiment of the invention discloses an object display method, apparatus, terminal and storage medium. The method comprises the following steps: when a trigger event of a display object is monitored, acquiring a video to be displayed corresponding to the trigger event and a preset display area of the video to be displayed in the current video picture; and displaying the video to be displayed in the preset display area, wherein the foreground content of the video to be displayed comprises the object to be displayed and the background content is transparent. The technical scheme of the embodiment of the invention solves the problem that an object to be displayed in the current video picture cannot be clearly displayed; moreover, because the video to be displayed is streaming media, memory is saved and a continuous sequence of content of the object to be displayed is presented, which greatly improves the user experience.

Description

Object display method, device, terminal and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computer application, in particular to an object display method, an object display device, a terminal and a storage medium.
Background
During video playback, some objects in a video picture cannot be clearly displayed because they are small and/or lie outside the display screen. For example, in a game video picture, some pets are small, their skill effects are subtle, and/or they are outside the display screen, so the pets' expressiveness while releasing an ultimate skill is insufficient, and the presentation of the skill needs to be enhanced.
Disclosure of Invention
The embodiment of the invention provides an object display method, apparatus, terminal and storage medium, to solve the problem that some objects in a video picture cannot be clearly displayed.
In a first aspect, an embodiment of the present invention provides an object display method, which may include:
when a trigger event of a display object is monitored, acquiring a video to be displayed corresponding to the trigger event and a preset display area of the video to be displayed in a current video picture;
and displaying the video to be displayed on a preset display area, wherein the foreground content of the video to be displayed comprises an object to be displayed and the background content is transparent content.
Optionally, the obtaining of the video to be displayed corresponding to the trigger event may include:
acquiring a to-be-processed video corresponding to a trigger event, reading to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video, and if the to-be-processed pixel point is judged to belong to the background content of the to-be-processed video according to the to-be-processed pixel information, adjusting the transparency of the to-be-processed pixel information;
otherwise, acquiring an original video of the video to be processed, and adjusting the pixel information to be processed according to the target pixel information of the target pixel point corresponding to the pixel point to be processed in the original video;
and constructing the video to be displayed according to the adjustment result of the video to be processed.
Optionally, the video to be processed may be obtained in advance through the following steps:
the method comprises the steps of obtaining an original video, extracting foreground content and background content of the original video, and converting the original video into a to-be-processed video according to an extraction result, wherein foreground pixel information of each foreground pixel point in the foreground content of the to-be-processed video is a preset foreground color, and background pixel information of each background pixel point in the background content of the to-be-processed video is a preset background color.
Optionally, extracting foreground content and background content of the original video may include:
performing histogram statistics on original pixel information of each original pixel point in an original video, and performing edge detection on the original video;
and extracting foreground content and background content of the original video according to the statistical result and the detection result.
Optionally, the obtaining of the to-be-processed video corresponding to the trigger event may include:
and acquiring a video to be processed corresponding to the trigger event, decomposing the video to be processed into a plurality of frames of pictures to be processed, and updating the video to be processed according to the plurality of frames of pictures to be processed.
Optionally, reading to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video may include:
and reading the to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video through a preset shader.
Optionally, the video to be displayed may be obtained in advance through the following steps:
the method comprises the steps of obtaining a video to be adjusted of an object to be displayed, adjusting the transparency of each pixel point to be adjusted in the background content of the video to be adjusted, and constructing the video to be displayed according to an adjustment result.
In a second aspect, an embodiment of the present invention further provides an object display apparatus, where the apparatus may include:
the device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a video to be displayed corresponding to a trigger event and a preset display area of the video to be displayed in a current video picture when the trigger event of a display object is monitored;
the display module is used for displaying the video to be displayed on a preset display area, wherein the foreground content of the video to be displayed comprises an object to be displayed, and the background content is transparent content.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal may include:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the object display method provided by any embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the object display method provided in any embodiment of the present invention.
According to the technical scheme of the embodiment of the invention, when a trigger event of the display object is monitored, the to-be-displayed video corresponding to the trigger event and its preset display area in the current video picture are obtained. When the to-be-displayed video is displayed in the preset display area, a close-up is provided for the to-be-displayed object because the foreground content of the to-be-displayed video includes it, and, because the background content of the to-be-displayed video is transparent, the video does not occlude too much of the current video picture. The technical scheme solves the problem that an object to be displayed in the current video picture cannot be clearly displayed; moreover, because the video to be displayed is streaming media, memory is saved and a continuous sequence of content of the object to be displayed is presented, which improves the user experience to a greater extent.
Drawings
FIG. 1 is a flowchart of an object display method according to a first embodiment of the present invention;
FIG. 2a is a schematic diagram of an original video in an object display method according to the first embodiment of the present invention;
FIG. 2b is a schematic diagram of a video to be processed in an object display method according to the first embodiment of the present invention;
FIG. 2c is a schematic diagram of a video to be displayed in an object display method according to the first embodiment of the present invention;
FIG. 3 is a block diagram of an object display apparatus according to a second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a terminal according to a third embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
FIG. 1 is a flowchart of an object display method according to an embodiment of the present invention. This embodiment is applicable to situations where an object to be displayed in the current video picture needs to be shown clearly, and is particularly suitable for doing so by means of a transparent video. The method can be executed by the object display apparatus provided by the embodiment of the present invention; the apparatus can be implemented in software and/or hardware and integrated on various user terminals or servers.
Referring to fig. 1, the method of the embodiment of the present invention specifically includes the following steps:
s110, when a trigger event of a display object is monitored, a video to be displayed corresponding to the trigger event and a preset display area of the video to be displayed in a current video picture are obtained.
When the current video picture is being played and a trigger event of the display object is monitored, the object to be displayed corresponding to the trigger event can be determined, and the to-be-displayed video of that object and its preset display area in the current video picture are obtained. The object to be displayed may be a small object in the current video picture, or an object outside the current video picture; the to-be-displayed video provides a display close-up so that the object can be shown clearly. On this basis, on one hand, because the to-be-displayed video is streaming media, loading it into memory does not require loading all video frames: only the few frames needed for smooth playback have to be loaded sequentially, which saves memory. On the other hand, because the to-be-displayed video is composed of multiple frames of to-be-displayed pictures, it presents a continuous sequence of content of the object rather than a single still image, which improves the user experience.
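The memory-saving point above, loading only the few frames needed for playback, can be sketched as a generator that decodes frames on demand. This is a minimal illustration, not the patent's implementation; `decode_frame` is a hypothetical decoder callback.

```python
# Streaming sketch: only the frame currently needed is produced, instead of
# decoding the whole clip into memory up front.

def frame_stream(decode_frame, frame_count):
    """Yield decoded frames one at a time for smooth sequential playback."""
    for index in range(frame_count):
        yield decode_frame(index)  # decode lazily, one frame per step

# Hypothetical decoder that just labels frames for demonstration.
frames = frame_stream(lambda i: f"frame-{i}", 3)
first = next(frames)          # only the first frame has been produced so far
remaining = list(frames)      # the rest are decoded as they are consumed
```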
On this basis, the video to be displayed can be obtained in advance in multiple ways. In an optional technical scheme, a video to be adjusted of the object to be displayed is obtained, the transparency of each pixel point to be adjusted in the background content of the video to be adjusted is adjusted, and the video to be displayed is constructed according to the adjustment result.
The video to be adjusted is an original, unprocessed close-up video of the object to be displayed. Displaying it directly on the current video picture would solve the problem that the object cannot be clearly displayed, but would also occlude a certain display area of the current video picture. Moreover, if the content of the video to be adjusted that includes the object to be displayed is regarded as foreground content and the content that does not include it as background content, the foreground content is the key to clearly displaying the object. Therefore, the transparency of each pixel point to be adjusted in the background content can be adjusted, for example reduced, or even set to fully transparent, and the video to be displayed is constructed from the adjustment result. In this way, the video to be displayed presents only the object to be displayed on the current video picture, so that occlusion of the current video picture is reduced as much as possible.
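The transparency adjustment described above can be sketched per pixel as follows. This is a minimal illustration under stated assumptions: pixels are (R, G, B, A) tuples with alpha in 0-255, and `is_background` is a hypothetical predicate standing in for however the background content is identified.

```python
# Set the alpha of every background pixel to 0 (fully transparent), leaving
# foreground pixels untouched, to build a to-be-displayed frame.

def make_background_transparent(frame, is_background):
    """Return a copy of `frame` with background pixels made fully transparent."""
    out = []
    for row in frame:
        new_row = []
        for r, g, b, a in row:
            if is_background((r, g, b)):
                new_row.append((r, g, b, 0))    # background: fully transparent
            else:
                new_row.append((r, g, b, a))    # foreground: kept as-is
        out.append(new_row)
    return out

# Example: treat pure black as background content (an assumed criterion).
frame = [[(0, 0, 0, 255), (200, 50, 50, 255)]]
adjusted = make_background_transparent(frame, lambda rgb: rgb == (0, 0, 0))
```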
And S120, displaying the video to be displayed on a preset display area, wherein the foreground content of the video to be displayed comprises an object to be displayed and the background content is transparent content.
The video to be displayed is displayed in the preset display area; the preset display area of each video to be displayed may be the same or different. Displaying the video to be displayed in the preset display area means superimposing it on the preset display area of the current video picture: the area outside the preset display area shows only the current video picture, while within the preset display area the current video picture is displayed on the bottom layer and the video to be displayed on the top layer. Because the background content of the video to be displayed is transparent, it does not occlude the current video picture.
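The layering just described is, in compositing terms, an "over" blend of the to-be-displayed frame onto the current video picture inside the preset display area. The sketch below assumes RGB base pixels, RGBA top pixels with alpha in 0-255, and frames as nested lists; none of these representation details come from the patent itself.

```python
# Alpha-blend the top-layer (to-be-displayed) frame over the bottom-layer
# (current video) frame at offset (x0, y0): transparent pixels leave the
# current picture visible, opaque pixels replace it.

def blend_pixel(bg, fg):
    """Blend one RGBA foreground pixel over one RGB background pixel."""
    r, g, b, a = fg
    alpha = a / 255.0
    return tuple(round(f * alpha + c * (1.0 - alpha))
                 for f, c in zip((r, g, b), bg))

def overlay(base, top, x0, y0):
    """Composite RGBA frame `top` onto RGB frame `base` in the preset area."""
    out = [row[:] for row in base]
    for dy, row in enumerate(top):
        for dx, px in enumerate(row):
            out[y0 + dy][x0 + dx] = blend_pixel(base[y0 + dy][x0 + dx], px)
    return out

base = [[(10, 10, 10), (10, 10, 10)]]        # current video picture
top = [[(200, 0, 0, 0), (200, 0, 0, 255)]]   # one transparent, one opaque pixel
result = overlay(base, top, 0, 0)
```

A fully transparent pixel (alpha 0) leaves the current picture untouched; a fully opaque pixel (alpha 255) replaces it, which is exactly the behavior the transparent-background video relies on.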
According to the technical scheme of the embodiment of the invention, when a trigger event of the display object is monitored, the to-be-displayed video corresponding to the trigger event and its preset display area in the current video picture are obtained. When the to-be-displayed video is displayed in the preset display area, a close-up is provided for the to-be-displayed object because the foreground content of the to-be-displayed video includes it, and, because the background content of the to-be-displayed video is transparent, the video does not occlude too much of the current video picture. The technical scheme solves the problem that an object to be displayed in the current video picture cannot be clearly displayed; moreover, because the video to be displayed is streaming media, memory is saved and a continuous sequence of content of the object to be displayed is presented, which improves the user experience to a greater extent.
A selectable technical solution is to obtain a video to be displayed corresponding to a trigger event, and specifically may include: acquiring a to-be-processed video corresponding to a trigger event, reading to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video, and if the to-be-processed pixel point is judged to belong to the background content of the to-be-processed video according to the to-be-processed pixel information, adjusting the transparency of the to-be-processed pixel information; otherwise, acquiring an original video of the video to be processed, and adjusting the pixel information to be processed according to the target pixel information of the target pixel point corresponding to the pixel point to be processed in the original video; and constructing the video to be displayed according to the adjustment result of the video to be processed.
The trigger event and the object to be displayed may have a one-to-one correspondence relationship, the object to be displayed and the video to be processed may have a one-to-one correspondence relationship, and the object to be displayed and the original video may also have a one-to-one correspondence relationship, and the original video may be an original close-up video of the object to be displayed without any processing. Therefore, after the to-be-processed video corresponding to the trigger event is acquired, the to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video can be read, for example, the to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video is read through a preset shader, and whether the to-be-processed pixel point belongs to foreground content or background content of the to-be-processed video can be judged according to the to-be-processed pixel information, wherein the foreground content of the to-be-processed video includes an object to be displayed and the background content does not include the object to be displayed.
On this basis, the transparency of the to-be-processed pixel points belonging to the background content of the to-be-processed video can be adjusted to obtain transparent background content. Meanwhile, for the to-be-processed pixel points belonging to the foreground content, the original video of the to-be-processed video is obtained through the object to be displayed that the two videos share, and the to-be-processed pixel information is adjusted according to the target pixel information of the corresponding target pixel point in the original video; for example, the to-be-processed color and/or transparency can be adjusted according to the target color and/or target transparency of the target pixel information. In this way, the to-be-displayed video can be constructed from the adjustment result of the to-be-processed video, that is, the adjusted to-be-processed video is taken as the to-be-displayed video.
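The per-pixel decision above, which a preset shader would apply, can be sketched in plain Python. Assumptions for illustration only: the two-color mask uses pure black as the preset background color, frames are nested lists of RGB tuples, and the output is RGBA with alpha in 0-255.

```python
BACKGROUND = (0, 0, 0)  # assumed preset background color of the mask video

def build_display_frame(mask_frame, original_frame):
    """Combine a two-color mask frame with the original frame into RGBA:
    background pixels become transparent, foreground pixels take the
    corresponding original color."""
    out = []
    for mask_row, orig_row in zip(mask_frame, original_frame):
        row = []
        for mask_px, (r, g, b) in zip(mask_row, orig_row):
            if mask_px == BACKGROUND:
                row.append((0, 0, 0, 0))      # background: fully transparent
            else:
                row.append((r, g, b, 255))    # foreground: original color, opaque
        out.append(row)
    return out

mask = [[(0, 0, 0), (255, 255, 255)]]         # black = background, white = foreground
original = [[(90, 90, 90), (30, 160, 220)]]   # matching original-video frame
display = build_display_frame(mask, original)
```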
It should be noted that, since the to-be-processed video may be composed of multiple frames of to-be-processed pictures, after the to-be-processed video corresponding to the trigger event is obtained, the to-be-processed video may be decomposed into multiple frames of to-be-processed pictures, the to-be-processed pixel information of each to-be-processed pixel point in each frame of to-be-processed picture is read with the to-be-processed picture as a unit, and corresponding processing is performed on the to-be-processed pixel point by determining whether each to-be-processed pixel point in each frame of to-be-processed picture is a pixel point of foreground content or background content.
The video to be processed can be obtained in advance through the following steps: the method comprises the steps of obtaining an original video, extracting foreground content and background content of the original video, and converting the original video into a to-be-processed video according to an extraction result, wherein foreground pixel information of each foreground pixel point in the foreground content of the to-be-processed video is a preset foreground color, and background pixel information of each background pixel point in the background content of the to-be-processed video is a preset background color.
The original video may be an original, unprocessed close-up video of the object to be displayed. The video to be processed is equivalent to a two-color video whose foreground content and background content have different colors; illustratively, the foreground pixel information of each foreground pixel point in the foreground content is white, and the background pixel information of each background pixel point in the background content is black. The advantage of converting the original video into a two-color video is that, when a trigger event of the display object is monitored, the foreground content and background content of the two-color video can be separated simply according to its color information, so that each can be processed accordingly to obtain the video to be displayed.
It should be noted that there are many ways to extract the foreground content and background content of an original video. In an optional way, considering that the color information of the object to be displayed tends to be consistent within each original video, for example the object is mainly red while the background is mainly green, histogram statistics can be performed on the original pixel information of each original pixel point in the original video, and the pixel points corresponding to the color information occupying the main proportion are taken as foreground pixel points of the foreground content. On this basis, edge detection can also be performed on the original video; combining the histogram statistics with the edge detection result segments the original video better. For example, if the color information of the object to be displayed is mainly red, the pixel points whose color information is red can be extracted first, and the segmentation then refined with the edge contour of the object to be displayed.
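The histogram step can be sketched as follows: quantize each pixel's color, count occurrences, and treat pixels matching the dominant color as foreground. This is a deliberately simplified sketch; the quantization step size and the omission of the edge-detection refinement are illustrative shortcuts, not the patent's method.

```python
from collections import Counter

def quantize(px, step=32):
    """Coarsely bucket an RGB color so near-identical shades count together."""
    return tuple(c // step for c in px)

def dominant_color_mask(frame):
    """Mark as foreground the pixels whose quantized color is most frequent."""
    counts = Counter(quantize(px) for row in frame for px in row)
    dominant, _ = counts.most_common(1)[0]
    return [[quantize(px) == dominant for px in row] for row in frame]

# A mostly-red frame: the red pixels form the foreground mask; a real
# pipeline would then refine this mask with edge detection.
frame = [[(250, 10, 10), (250, 12, 8)],
         [(250, 11, 9), (10, 250, 10)]]
mask = dominant_color_mask(frame)
```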
To better understand the specific implementation of the above steps, the object display method of this embodiment is described below by taking a pet in the game video picture of the Background section as an example. As shown in FIGS. 2a-2c, each pet skill corresponds to a to-be-processed video (FIG. 2b), which is a two-color video converted from the original video (FIG. 2a), wherein each foreground pixel point in the foreground content is a preset foreground color (e.g., white) and each background pixel point in the background content is a preset background color (e.g., black).
On this basis, when a trigger event for displaying a pet skill is monitored while the game is running, the to-be-processed video of the pet skill to be displayed is obtained and decomposed into multiple frames of to-be-processed pictures. For each frame, the color information of each to-be-processed pixel point is read through a preset shader so as to divide the frame into foreground content (namely the pet skill to be displayed) and background content (for example, various ornaments such as feathers and flowing lines). Each foreground pixel point in the foreground content is set to the target pixel point at the corresponding position in the original video, and the transparency of each background pixel point in the background content is set to fully transparent, yielding a to-be-displayed video with a transparent background (FIG. 2c). The to-be-displayed video is then displayed in a preset display area of the game video interface. Without interfering with normal use of the pet, the to-be-displayed video with transparent background content provides an animation close-up of the pet skill on the game video interface, achieving the effect of showcasing the pet's ultimate skill by playing a transparent video, improving the pet's expressiveness while the ultimate skill is released, and enhancing the presentation of the skill. In other words, an animation video presenting the pet's continuous actions during the release of the ultimate skill is shown on the game video picture; the surroundings of the animation are transparent while the pet is not, so both the pet's skill and the game video picture are clearly presented.
Example two
FIG. 3 is a block diagram of an object display apparatus according to a second embodiment of the present invention, which is configured to execute the object display method provided by any of the above embodiments. For details not described in this apparatus embodiment, refer to the embodiments of the object display method. Referring to FIG. 3, the apparatus may specifically include: an acquisition module 210 and a display module 220.
The acquiring module 210 is configured to, when a trigger event of a display object is monitored, acquire a video to be displayed corresponding to the trigger event and a preset display area of the video to be displayed in a current video frame;
the display module 220 is configured to display a video to be displayed on a preset display area, where foreground content of the video to be displayed includes an object to be displayed and background content is transparent content.
Optionally, the obtaining module 210 may specifically include:
the acquisition unit is used for acquiring a to-be-processed video corresponding to the trigger event, reading to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video, and adjusting the transparency of the to-be-processed pixel information if the to-be-processed pixel point is judged to belong to the background content of the to-be-processed video according to the to-be-processed pixel information;
the adjusting unit is used for, otherwise, obtaining an original video of the video to be processed and adjusting the pixel information to be processed according to the target pixel information of the target pixel point corresponding to the pixel point to be processed in the original video;
and the construction unit is used for constructing the video to be displayed according to the adjustment result of the video to be processed.
Optionally, on the basis of the above apparatus, the apparatus may further include:
the conversion module is used for acquiring an original video, extracting foreground content and background content of the original video, and converting the original video into a to-be-processed video according to an extraction result, wherein foreground pixel information of each foreground pixel point in the foreground content of the to-be-processed video is a preset foreground color, and background pixel information of each background pixel point in the background content of the to-be-processed video is a preset background color.
Optionally, the conversion module may be specifically configured to:
performing histogram statistics on original pixel information of each original pixel point in an original video, and performing edge detection on the original video;
and extracting foreground content and background content of the original video according to the statistical result and the detection result.
Optionally, the obtaining unit may specifically include:
and the updating subunit is used for acquiring the video to be processed corresponding to the trigger event, decomposing the video to be processed into a plurality of frames of pictures to be processed, and updating the video to be processed according to the plurality of frames of pictures to be processed.
Optionally, the obtaining unit may specifically include:
and the reading subunit is used for reading the to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video through a preset shader.
Optionally, on the basis of the above apparatus, the apparatus may further include:
and the adjusting module is used for acquiring a video to be adjusted of the object to be displayed, adjusting the transparency of each pixel point to be adjusted in the background content of the video to be adjusted, and constructing the video to be displayed according to the adjusting result.
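The adjusting module pre-builds the video to be displayed by making every background pixel point transparent. A minimal RGBA sketch, assuming background pixel points are identified by a preset key color (an assumption; the patent leaves the identification method open here):

```python
# Sketch of pre-building a to-be-displayed video by zeroing the alpha of
# every background pixel point in the to-be-adjusted video. Identifying
# background pixels by a preset key color is an illustrative assumption.

KEY_COLOR = (0, 255, 0)  # assumed preset background color

def adjust_background_alpha(frame_rgba):
    """Return a copy of an RGBA frame with background pixels made transparent."""
    adjusted = []
    for r, g, b, a in frame_rgba:
        if (r, g, b) == KEY_COLOR:
            adjusted.append((r, g, b, 0))   # background: fully transparent
        else:
            adjusted.append((r, g, b, a))   # foreground: unchanged
    return adjusted

frame = [(0, 255, 0, 255), (128, 64, 32, 255)]
print(adjust_background_alpha(frame))
# [(0, 255, 0, 0), (128, 64, 32, 255)]
```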
In the object display device provided by the second embodiment of the present invention, through the cooperation between the obtaining module and the display module, when a trigger event of a display object is monitored, the video to be displayed corresponding to the trigger event and the preset display area of the video to be displayed in the current video picture are obtained. When the video to be displayed is displayed in the preset display area, a close-up view of the object to be displayed is provided, because the foreground content of the video to be displayed includes the object to be displayed; and because the background content of the video to be displayed is transparent, the current video picture is not excessively blocked. The device thus solves the problem that the object to be displayed in the current video picture cannot be displayed clearly; at the same time, because the video to be displayed is streaming media, memory is saved and continuous content of the object to be displayed can be presented, which further improves the user experience.
The object display device provided by the embodiment of the invention can execute the object display method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the embodiment of the object display apparatus, the included units and modules are divided according to functional logic only, and the division is not limited to the above as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other, and are not intended to limit the protection scope of the present invention.
EXAMPLE III
Fig. 4 is a schematic structural diagram of a terminal according to a third embodiment of the present invention, as shown in fig. 4, the terminal includes a memory 310, a processor 320, an input device 330, and an output device 340. The number of the processors 320 in the terminal may be one or more, and one processor 320 is taken as an example in fig. 4; the memory 310, processor 320, input device 330 and output device 340 in the terminal may be connected by a bus or other means, such as by bus 350 in fig. 4.
The memory 310, which is a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the object display method in the embodiment of the present invention (for example, the acquisition module 210 and the display module 220 in the object display apparatus). The processor 320 executes various functional applications of the terminal and data processing, i.e., implements the object display method described above, by executing software programs, instructions, and modules stored in the memory 310.
The memory 310 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 310 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 310 may further include memory located remotely from processor 320, which may be connected to devices through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the device. The output device 340 may include a display device such as a display screen.
Example four
A fourth embodiment of the present invention provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for displaying an object, the method including:
when a trigger event of a display object is monitored, acquiring a video to be displayed corresponding to the trigger event and a preset display area of the video to be displayed in a current video picture;
and displaying the video to be displayed on a preset display area, wherein the foreground content of the video to be displayed comprises an object to be displayed and the background content is transparent content.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in the object display method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by hardware alone, although the former is generally the preferred implementation. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory (FLASH), a hard disk or an optical disk of a computer, and which includes instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. An object display method, comprising:
continuously displaying the current video picture;
when a trigger event of a display object is monitored, acquiring a video to be displayed corresponding to the trigger event and a preset display area of the video to be displayed in the current video picture;
displaying the video to be displayed on the preset display area, wherein the foreground content of the video to be displayed comprises an object to be displayed and the background content is transparent content;
the acquiring of the video to be displayed corresponding to the trigger event includes:
acquiring a to-be-processed video corresponding to the trigger event, reading to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video, and if the to-be-processed pixel point is judged to belong to the background content of the to-be-processed video according to the to-be-processed pixel information, adjusting the transparency of the to-be-processed pixel information;
otherwise, acquiring an original video of the video to be processed, and adjusting the pixel information to be processed according to target pixel information of a target pixel point corresponding to the pixel point to be processed in the original video;
and constructing a video to be displayed according to the adjustment result of the video to be processed.
2. The method according to claim 1, wherein the video to be processed is obtained in advance by:
the method comprises the steps of obtaining an original video, extracting foreground content and background content of the original video, and converting the original video into a to-be-processed video according to an extraction result, wherein foreground pixel information of each foreground pixel point in the foreground content of the to-be-processed video is a preset foreground color, and background pixel information of each background pixel point in the background content of the to-be-processed video is a preset background color.
3. The method of claim 2, wherein the extracting the foreground content and the background content of the original video comprises:
performing histogram statistics on original pixel information of each original pixel point in the original video, and performing edge detection on the original video;
and extracting foreground content and background content of the original video according to the statistical result and the detection result.
4. The method according to claim 1, wherein the obtaining the to-be-processed video corresponding to the trigger event comprises:
and acquiring a video to be processed corresponding to the trigger event, decomposing the video to be processed into a plurality of frames of pictures to be processed, and updating the video to be processed according to the plurality of frames of pictures to be processed.
5. The method according to claim 1, wherein the reading the to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video includes: and reading the pixel information to be processed of each pixel point to be processed in the video to be processed through a preset shader.
6. The method according to claim 1, wherein the video to be displayed is obtained in advance by:
and acquiring the video to be adjusted of the object to be displayed, adjusting the transparency of each pixel point to be adjusted in the background content of the video to be adjusted, and constructing the video to be displayed according to the adjustment result.
7. An object display apparatus, comprising:
the device comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for continuously displaying a current video picture, and acquiring a video to be displayed corresponding to a trigger event and a preset display area of the video to be displayed in the current video picture when the trigger event of a display object is monitored;
the display module is used for displaying the video to be displayed on the preset display area, wherein the foreground content of the video to be displayed comprises an object to be displayed and the background content is transparent content;
wherein, the obtaining module includes:
the acquisition unit is used for acquiring a to-be-processed video corresponding to the trigger event, reading to-be-processed pixel information of each to-be-processed pixel point in the to-be-processed video, and if the to-be-processed pixel point is judged to belong to the background content of the to-be-processed video according to the to-be-processed pixel information, adjusting the transparency of the to-be-processed pixel information;
the adjusting unit is used for, otherwise, obtaining an original video of the video to be processed, and adjusting the pixel information to be processed according to target pixel information of a target pixel point corresponding to the pixel point to be processed in the original video;
and the construction unit is used for constructing the video to be displayed according to the adjustment result of the video to be processed.
8. A terminal, characterized in that the terminal comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the object display method of any one of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out an object display method according to any one of claims 1 to 6.
CN201911234087.8A 2019-12-05 2019-12-05 Object display method, device, terminal and storage medium Active CN110856033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911234087.8A CN110856033B (en) 2019-12-05 2019-12-05 Object display method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110856033A CN110856033A (en) 2020-02-28
CN110856033B true CN110856033B (en) 2021-12-10

Family

ID=69608004


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111355991B (en) * 2020-03-13 2022-03-25 Tcl移动通信科技(宁波)有限公司 Video playing method and device, storage medium and mobile terminal

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101371273A (en) * 2005-12-30 2009-02-18 意大利电信股份公司 Video sequence partition
CN102800045A (en) * 2012-07-12 2012-11-28 北京小米科技有限责任公司 Image processing method and device
CN103714314A (en) * 2013-12-06 2014-04-09 安徽大学 Television video station caption identification method combining edge and color information
CN106355153A (en) * 2016-08-31 2017-01-25 上海新镜科技有限公司 Virtual object display method, device and system based on augmented reality
CN107920202A (en) * 2017-11-15 2018-04-17 阿里巴巴集团控股有限公司 Method for processing video frequency, device and electronic equipment based on augmented reality
CN109219955A (en) * 2016-05-31 2019-01-15 微软技术许可有限责任公司 Video is pressed into

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8941588B2 (en) * 2008-04-24 2015-01-27 Oblong Industries, Inc. Fast fingertip detection for initializing a vision-based hand tracker
US8532336B2 (en) * 2010-08-17 2013-09-10 International Business Machines Corporation Multi-mode video event indexing
US9317908B2 (en) * 2012-06-29 2016-04-19 Behavioral Recognition System, Inc. Automatic gain control filter in a video analysis system
US10134114B2 (en) * 2016-09-20 2018-11-20 Gopro, Inc. Apparatus and methods for video image post-processing for segmentation-based interpolation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ローズバレット, "Arcade Games Episode 2: Sailor Moon — One-Credit Clear with Usagi Tsukino" (《街机游戏第二期 美少女战士 月野兔一命通关》), Bilibili (哔哩哔哩弹幕网), 2018-06-03, pp. 1-5. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant