CN115996302A - Method, device and equipment for smoothing signal image of strip screen on digital spliced wall - Google Patents

Method, device and equipment for smoothing signal image of strip screen on digital spliced wall

Info

Publication number
CN115996302A
CN115996302A
Authority
CN
China
Prior art keywords
text
texture object
rendering
image
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211501367.2A
Other languages
Chinese (zh)
Inventor
陈泓坤
甄海华
郭玲
吴细平
郭兰芳
覃俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vtron Group Co Ltd
Original Assignee
Vtron Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vtron Group Co Ltd filed Critical Vtron Group Co Ltd
Priority to CN202211501367.2A priority Critical patent/CN115996302A/en
Publication of CN115996302A publication Critical patent/CN115996302A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Controls And Circuits For Display Device (AREA)

Abstract

The invention relates to the technical field of mobile communication, and discloses a method, a device and equipment for smoothing a signal image of a bar screen on a digital spliced wall. According to the invention, an HTML5 page recording background element information and a JSON file recording text element information are generated from the display parameter configuration data of a bar screen signal; background rendering is performed with a browser based on the HTML5 page, and text rendering is performed through a rendering engine interface based on the JSON file; the resulting background images and text image sequence are merged into a video file, which is converted and distributed as a whole; finally, a large screen processor controls the corresponding screen to display the corresponding bar screen signal. The invention can superimpose bar screen signals without bar screen hardware, achieve smooth display of bar screen signals, and avoid the poor experience caused by stuttering and jumping of text movement in bar screen signals.

Description

Method, device and equipment for smoothing signal image of strip screen on digital spliced wall
Technical Field
The present invention relates to the field of mobile communications technologies, and in particular, to a method, an apparatus, and a device for smoothing a signal image of a strip screen on a digital spliced wall.
Background
The existing digital spliced wall generally comprises a main screen and a strip-shaped LED auxiliary screen (hereinafter referred to as a bar screen) above the main screen. In daily use, various application signal images are displayed on the main screen, and text signal images, such as conference titles and welcome messages, are displayed on the bar screen. The image source of these signals is typically an application running on a PC (personal computer); the image is output through the PC graphics card or the network, connected to a large screen processor, and then displayed on a display unit of the digital spliced wall. If the content displayed on the bar screen (the signal content being background and text) is instead superimposed on the main screen, the bar screen hardware can be omitted, reducing system hardware cost and improving product competitiveness.
At present, the background and the characters are used as signals to be superimposed on a main screen, and one popular scheme is as follows: an HTML5 webpage is added in a computer program operated by a PC, background elements and text elements are added in the webpage, and each frame of image displayed on the webpage is further converted into a video stream in real time and sent to a main screen so as to display the video stream. The background element comprises static or dynamic pictures, videos and the like, and the text element comprises text content, setting information of the text moving direction, speed and other effects.
In this scheme, the browser renders the HTML page, including both background and text. After each frame is rendered, the image of the corresponding frame is encoded and distributed, and the large screen processor decodes it and sends it to the large screen for display. However, in the video stream generated by this scheme, the text does not move at a constant speed on the large screen; instead, stuttering and jumping occur, so a good display effect cannot be achieved.
Disclosure of Invention
The invention provides a method, a device and equipment for smoothing a bar screen signal image on a digital spliced wall, which solve the technical problem that characters are not smoothly displayed in a generated video stream when a bar screen signal is overlapped on a main screen in the prior art.
The first aspect of the invention provides a method for smoothing a signal image of a strip screen on a digital spliced wall, which comprises the following steps:
acquiring display parameter configuration data of a bar screen signal, and generating an HTML5 page for recording background element information and a JSON file for recording text element information according to the display parameter configuration data;
loading the HTML5 page by using a browser, rendering, storing the acquired rendering images of each frame as background images of the bar screen signals, and establishing a mapping relation between the time stamp of the acquired rendering images and the corresponding background images;
Rendering the text element information in the JSON file by using a rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of the time stamp, and storing the text image sequence; a mapping relation is established between each text image in the text image sequence and a timestamp when the corresponding text image is generated;
merging each text image in the text image sequence with a background image corresponding to the timestamp to generate a corresponding video file;
and converting the video file and distributing the video file to a large-screen processor so that the large-screen processor controls the corresponding screen to display corresponding bar screen signals.
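The merging step pairs each text image with a background image by timestamp. The patent does not spell out the matching rule, so the sketch below assumes nearest-timestamp matching; all function and variable names are illustrative:

```python
from bisect import bisect_left

def nearest_background(bg_frames, timestamp):
    # bg_frames: list of (time_bg, image) pairs sorted by time_bg.
    # Returns the pair whose timestamp is closest to `timestamp`.
    times = [t for t, _ in bg_frames]
    i = bisect_left(times, timestamp)
    if i == 0:
        return bg_frames[0]
    if i == len(times):
        return bg_frames[-1]
    before, after = bg_frames[i - 1], bg_frames[i]
    return before if timestamp - before[0] <= after[0] - timestamp else after

def merge_sequences(bg_frames, text_frames):
    # Pair every text frame with its closest background frame by timestamp.
    return [(ts, nearest_background(bg_frames, ts)[1], img)
            for ts, img in text_frames]
```

Each resulting triple (timestamp, background, text) would then be composited into one video frame.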
According to one implementation manner of the first aspect of the present invention, the loading and rendering the HTML5 page with a browser includes:
and loading the HTML5 page by using a CEF interface of the browser, and rendering in an off-screen rendering mode.
According to one implementation manner of the first aspect of the present invention, the text element information includes a text parameter and a text display area parameter; the method for rendering the text element information in the JSON file by using the rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of the time stamp comprises the following steps:
Creating a transparent first texture object according to the character display area parameters;
creating a transparent second texture object according to the text parameters and rendering text to the second texture object to obtain a rendered texture object;
dividing the time period during which the rendered texture object is displayed in the first texture object into a plurality of evenly spaced time points, and calculating the position of the rendered texture object in the first texture object at each time point to obtain a position calculation result corresponding to each time point;
and superposing the rendered texture object on the first texture object according to the position calculation result to obtain a text image corresponding to each time point, establishing a mapping relation between the time stamp of the text image generation and the corresponding text image, and splicing the text images into a text image sequence according to the sequence from the small time stamp to the large time stamp.
According to an implementation manner of the first aspect of the present invention, the dividing, by average, a time period in which the rendered texture object is displayed in the first texture object into a plurality of time points includes:
and determining the time interval between two adjacent time points according to a frame rate of 60 fps, and determining each time point in the time period according to the time interval.
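As a minimal illustration of this division, the sketch below derives the time points from the assumed 60 fps frame rate; the names are illustrative:

```python
def time_points(total_time_s, fps=60):
    # Evenly divide the scroll period into points spaced 1/fps apart
    # (at 60 fps, adjacent points are about 16.7 ms apart).
    interval = 1.0 / fps
    return [k * interval for k in range(int(total_time_s * fps))]
```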
According to an implementation manner of the first aspect of the present invention, the calculating the position of the rendered texture object in the first texture object at each point in time includes:
calculating the position of the rendered texture object in the first texture object according to the following formula:
Texture1_x = w_Texture0 - (w_Texture0 + w_Texture1) × t_%totalTime / totalTime
Texture1_y = Texture0_y + (h_Texture0 - h_Texture1) / 2

where Texture1_x represents the abscissa of the rendered texture object relative to the upper left corner of the first texture object at the current time point, Texture1_y represents the corresponding ordinate, w_Texture0 is the pixel width of the first texture object, w_Texture1 is the pixel width of the rendered texture object, t_%totalTime is the time within the time period corresponding to the current time point, totalTime is the duration of the time period, Texture0_y is the ordinate of the first texture object, h_Texture0 is the pixel height of the first texture object, and h_Texture1 is the pixel height of the rendered texture object.
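A small sketch of this position calculation, assuming the linear right-to-left motion and vertical middle alignment described in the surrounding text; the variable names are shortened for readability:

```python
def scroll_position(t, total_time, w_tex0, w_tex1, tex0_y, h_tex0, h_tex1):
    # Linear right-to-left motion: x runs from w_tex0 (text just off the
    # right edge) down to -w_tex1 (text fully past the left edge).
    x = w_tex0 - (w_tex0 + w_tex1) * (t / total_time)
    # Vertical middle alignment of the text texture inside the display area.
    y = tex0_y + (h_tex0 - h_tex1) / 2
    return x, y
```

At t = 0 the text sits just outside the right edge; at t = totalTime it has fully left the display area.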
According to one implementation manner of the first aspect of the present invention, the overlaying the rendered texture object on the first texture object according to the position calculation result, to obtain a text image corresponding to each time point, includes:
Processing the first texture object to restore the first texture object to be transparent texture after obtaining the text image corresponding to one time point;
and superposing the rendered texture object to the processed first texture object according to the position calculation result corresponding to the next time point to obtain a corresponding text image.
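The restore-to-transparent-then-overlay cycle can be illustrated with a toy CPU-side texture model; a real implementation would do this on the GPU through the rendering engine interface, so everything below is a simplified stand-in:

```python
def make_transparent(width, height):
    # A toy RGBA texture: rows of [r, g, b, a] pixels; alpha 0 means transparent.
    return [[[0, 0, 0, 0] for _ in range(width)] for _ in range(height)]

def overlay(dst, src, x, y):
    # Blit src onto dst at integer offset (x, y), skipping transparent
    # and out-of-bounds pixels: a stand-in for the GPU texture overlay.
    for sy, row in enumerate(src):
        for sx, px in enumerate(row):
            dy, dx = y + sy, x + sx
            if px[3] > 0 and 0 <= dy < len(dst) and 0 <= dx < len(dst[0]):
                dst[dy][dx] = list(px)

# One frame: start from a freshly cleared (fully transparent) display
# texture, then blit the text texture at the position computed for this
# time point; the next frame repeats this with a new position.
canvas = make_transparent(8, 2)
glyph = [[[255, 255, 255, 255]]]  # a 1x1 opaque stand-in for the text texture
overlay(canvas, glyph, 3, 0)
```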
According to one implementation manner of the first aspect of the present invention, the merging each text image in the text image sequence with a background image corresponding to a timestamp to generate a corresponding video file includes:
the video file is generated at a frame rate of 60 fps.
The second aspect of the present invention provides a device for smoothing a signal image of a strip screen on a digital spliced wall, comprising:
the configuration module is used for acquiring display parameter configuration data of the bar screen signal, and generating an HTML5 page for recording background element information and a JSON file for recording text element information according to the display parameter configuration data;
the background rendering module is used for loading the HTML5 page by using a browser and rendering, storing the acquired rendering images of each frame as background images of the bar screen signals, and establishing a mapping relation between the time stamp of the acquired rendering images and the corresponding background images;
The text rendering module is used for rendering text element information in the JSON file by utilizing a rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of time stamps, and storing the text image sequence; a mapping relation is established between each text image in the text image sequence and a timestamp when the corresponding text image is generated;
the synthesizing module is used for merging each text image in the text image sequence with the background image corresponding to the timestamp to generate a corresponding video file;
and the distribution module is used for converting the video file and distributing it to the large-screen processor, so that the large-screen processor controls the corresponding screen to display the corresponding bar screen signals.
According to one manner of implementation of the second aspect of the present invention, the background rendering module includes:
and the first rendering unit is used for loading the HTML5 page by using a CEF interface of the browser and rendering in an off-screen rendering mode.
According to one implementation manner of the second aspect of the present invention, the text element information includes a text parameter and a text display area parameter; the text rendering module comprises:
The creation unit is used for creating a transparent first texture object according to the character display area parameters;
the second rendering unit is used for creating a transparent second texture object according to the character parameters and rendering characters to the second texture object to obtain a rendered texture object;
the position calculation unit is used for dividing the time period of the display of the rendered texture object in the first texture object into a plurality of time points on average, calculating the position of the rendered texture object in the first texture object at each time point, and obtaining a position calculation result corresponding to each time point;
and the superposition processing unit is used for superposing the rendered texture object on the first texture object according to the position calculation result to obtain character images corresponding to all time points, establishing a mapping relation between the time stamp of the character image generation and the corresponding character image, and splicing the time stamp from small to large into a character image sequence.
According to one manner in which the second aspect of the present invention can be implemented, the position calculation unit includes:
a time point determining subunit, configured to determine the time interval between two adjacent time points at a frame rate of 60 fps, and determine each time point in the time period according to the time interval.
According to one manner in which the second aspect of the present invention can be implemented, the position calculation unit includes:
a computing subunit configured to calculate a position of the rendered texture object in the first texture object according to:
Texture1_x = w_Texture0 - (w_Texture0 + w_Texture1) × t_%totalTime / totalTime
Texture1_y = Texture0_y + (h_Texture0 - h_Texture1) / 2

where Texture1_x represents the abscissa of the rendered texture object relative to the upper left corner of the first texture object at the current time point, Texture1_y represents the corresponding ordinate, w_Texture0 is the pixel width of the first texture object, w_Texture1 is the pixel width of the rendered texture object, t_%totalTime is the time within the time period corresponding to the current time point, totalTime is the duration of the time period, Texture0_y is the ordinate of the first texture object, h_Texture0 is the pixel height of the first texture object, and h_Texture1 is the pixel height of the rendered texture object.
According to one implementation manner of the second aspect of the present invention, the superposition processing unit includes:
the texture processing subunit is used for processing the first texture object to restore the first texture object to be transparent texture after obtaining the text image corresponding to one time point;
And the superposition subunit is used for superposing the rendered texture object to the processed first texture object according to the position calculation result corresponding to the next time point to obtain a corresponding text image.
According to one manner in which the second aspect of the present invention can be implemented, the synthesis module includes:
and the video generating unit is used for generating the video file at a frame rate of 60 fps.
The third aspect of the present invention provides a device for smoothing a signal image of a strip screen on a digital spliced wall, comprising:
a memory for storing instructions, the instructions being used to implement the method for smoothing a bar screen signal image on a digital spliced wall according to any one of the implementation manners above;
and the processor is used for executing the instructions in the memory.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for smoothing a bar screen signal image on a digital spliced wall according to any one of the above manners.
From the above technical scheme, the invention has the following advantages:
the method comprises the steps of obtaining display parameter configuration data of a bar screen signal, and generating an HTML5 page for recording background element information and a JSON file for recording text element information according to the display parameter configuration data; loading the HTML5 page by using a browser, rendering, storing the acquired rendering images of each frame as background images of the bar screen signals, and establishing a mapping relation between the time stamp of the acquired rendering images and the corresponding background images; rendering the text element information in the JSON file by using a rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of the time stamp, and storing the text image sequence; a mapping relation is established between each text image in the text image sequence and a timestamp when the corresponding text image is generated; merging each text image in the text image sequence with a background image corresponding to the timestamp to generate a corresponding video file; the video file is diverted and distributed to a large screen processor, so that the large screen processor controls a corresponding screen to display corresponding bar screen signals; according to the invention, the background rendering and the text rendering are separately processed, the obtained background image and text image sequences are combined to obtain the video file, the whole video file is converted and distributed, and finally, the large screen processor controls the corresponding screen to display corresponding bar screen signals, so that the bar screen signals can be superimposed without bar screen hardware, smooth display of the bar screen signals is realized, and bad experience caused by the blocking and jumping of text movement in the bar screen signals is avoided.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained from these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flowchart of a method for smoothing a digital tiled wall strip screen signal image according to an alternative embodiment of the present invention;
FIG. 2 is a schematic diagram of a display effect of a bar screen signal processed based on the method shown in FIG. 1 according to an alternative embodiment of the present invention;
fig. 3 is a block diagram illustrating the structural connection of a device for smoothing a signal image of a bar screen on a digital spliced wall according to an alternative embodiment of the present invention.
Reference numerals:
1-configuring a module; 2-a background rendering module; 3-a text rendering module; 4-synthesis module.
Detailed Description
The embodiment of the invention provides a method, a device and equipment for smoothing a bar screen signal image on a digital spliced wall, which are used for solving the technical problem that text display is not smooth in a generated video stream when a bar screen signal is overlapped on a main screen in the prior art.
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings, and it is apparent that the embodiments described below are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiments of the invention are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the computer system/server include, but are not limited to: personal computer systems, server computer systems, clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems, and the like.
The invention provides a method for smoothing a signal image of a strip screen on a digital spliced wall.
Referring to fig. 1, fig. 1 shows a flowchart of a method for smoothing a signal image of a strip screen on a digital spliced wall according to an embodiment of the invention.
The method for smoothing the signal image of the strip screen on the digital spliced wall provided by the embodiment of the invention comprises the steps S1-S5.
Step S1, display parameter configuration data of a bar screen signal are obtained, and an HTML5 page for recording background element information and a JSON file for recording text element information are generated according to the display parameter configuration data.
As a specific embodiment, for obtaining the display parameter configuration data, a corresponding parameter configuration interface may be set, and the user performs parameter configuration on the parameter configuration interface, so as to obtain the display parameter configuration data. Or, the display parameter configuration data sent by the user terminal may be obtained by sending a request to a preset user terminal. Or, the user directly uploads the parameter configuration file, and the display parameter configuration data is obtained through analyzing the parameter configuration file.
The display parameter configuration data comprises background element information and text element information. The background element information may include a background size and a background file, typically a picture or video. The text element information may include text parameters and text display area parameters; the text parameters may include the font, size, color, content, direction of motion of the text, time of one complete motion (i.e., time period displayed in the background); the text display area parameters may include basic information of a location, a size, a pixel unit, etc. of the text display area. In order to obtain a better display effect, the size of the text display area should be smaller than or equal to the background size.
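As an illustration only, the text element information might be laid out in JSON as follows; the patent lists the kinds of fields but not their names, so every key below is hypothetical:

```python
import json

# Every key name here is hypothetical; the patent only lists the kinds of
# information recorded (font, size, color, content, motion, display area).
text_elements = {
    "text": {
        "font": "SimHei",
        "size": 48,
        "color": "#FFFFFF",
        "content": "Welcome to the conference",
        "direction": "right-to-left",
        "total_time_s": 5.0,
    },
    "display_area": {"x": 0, "y": 0, "width": 1920, "height": 96, "unit": "pixel"},
}

json_text = json.dumps(text_elements, indent=2)
```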
HTML5 is a language for describing and building Web content, and JSON is a lightweight data interchange format. In the embodiment of the application, an HTML5 page and a JSON file are output to record the background element information and the text element information respectively, so that the two are processed separately in the subsequent steps. Compared with the traditional method, in which the browser processes both the background element information and the text element information, this can improve processing efficiency.
Step S2, loading the HTML5 page by using a browser and rendering it, storing each acquired rendered frame as a background image of the bar screen signal, and establishing a mapping relation between the timestamps of the acquired rendering images and the corresponding background images.
In one implementation, the HTML5 page is loaded using the CEF interface of the browser and rendered in off-screen rendering mode.
The CEF interface is a set of functional interfaces of the Chromium browser that supports the HTML5 standard. Rendering in off-screen mode yields a finer picture effect. Whenever the CEF interface draws a rendered image of a frame of the HTML5 page to a memory block, the rendered image is copied from the memory block and stored as a background image, e.g. named memery_bg, and the current PC system time time_bg is acquired as the timestamp of the acquired rendering image. A mapping relation is established between the timestamp and the corresponding background image, yielding the pair <time_bg_a, memery_bg_a>, where a denotes an image number assigned in ascending order of the timestamps of the acquired rendering images. After each acquired frame is stored as a background image of the bar screen signal, the stored data can be expressed as:

{<time_bg_0, memery_bg_0>, <time_bg_1, memery_bg_1>, ..., <time_bg_n, memery_bg_n>}

where n represents the number of background images.
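The capture-and-timestamp bookkeeping can be sketched as a callback that a hypothetical off-screen renderer invokes once per frame; the time_bg/memery_bg naming follows the notation above:

```python
import time

background_store = []  # ordered list of (time_bg, memery_bg) pairs

def on_paint(frame_pixels):
    # Callback a hypothetical off-screen renderer would invoke per frame:
    # timestamp the frame with the current system time and store the pair.
    time_bg = time.monotonic()
    memery_bg = bytes(frame_pixels)
    background_store.append((time_bg, memery_bg))
```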
Step S3, rendering the text element information in the JSON file by using a rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of the time stamp, and storing the text image sequence; and a mapping relation is established between each text image in the text image sequence and the timestamp of the corresponding text image when the corresponding text image is generated.
In embodiments of the present application, the rendering engine interface may be OpenGL, directX or other rendering engine interface.
In one implementation manner, the rendering the text element information in the JSON file by using a rendering engine interface to obtain a text image sequence arranged according to a sequence from small to large of time stamps includes:
Creating a transparent first texture object according to the character display area parameters;
creating a transparent second texture object according to the text parameters and rendering text to the second texture object to obtain a rendered texture object;
dividing the time period during which the rendered texture object is displayed in the first texture object into a plurality of evenly spaced time points, and calculating the position of the rendered texture object in the first texture object at each time point to obtain a position calculation result corresponding to each time point;
and superposing the rendered texture object on the first texture object according to the position calculation result to obtain a text image corresponding to each time point, establishing a mapping relation between the time stamp of the text image generation and the corresponding text image, and splicing the text images into a text image sequence according to the sequence from the small time stamp to the large time stamp.
In the embodiment of the application, when the transparent first texture object is created, its texture size is the display area size represented by the text display area parameter, its position is (Texture0_x, Texture0_y) with the upper left corner of the background image as the origin, its pixel width is w_Texture0, and its pixel height is h_Texture0.
In the embodiment of the application, when the transparent second texture object is created, text is rendered into it according to the text parameters (including font, font size, color and character content) recorded in the JSON file. The pixel height h_Texture1 of the second texture object is calculated from the font size, and the pixel width w_Texture1 is calculated from the font size and the character content. Once the complete character content has been rendered into it, the second texture object becomes the rendered texture object and can be merged into the appropriate location of the first texture object. Because the position of the rendered texture object inside the first texture object differs from moment to moment, the effect of text movement is produced.
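A rough sketch of sizing the second texture object from the font size and character content; real code would measure each glyph through the font or rendering API, so the fixed-pitch estimate below is illustrative:

```python
def text_texture_size(font_px, content):
    # Height from the font size; width from font size x character count.
    # This fixed-pitch estimate is illustrative; real code would measure
    # each glyph through the font/rendering API.
    return font_px * len(content), font_px
```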
In the embodiment of the application, when the timestamp generated for a text image is mapped to the corresponding text image, the stored data can be expressed as <time_te_b, memery_te_b>, where b represents an image number assigned in ascending order of the timestamps at the time of text image generation. The text images are spliced into a sequence in ascending order of timestamp, and the corresponding stored data can be expressed as:

{<time_te_0, memery_te_0>, <time_te_1, memery_te_1>, ..., <time_te_N, memery_te_N>}

where N represents the number of text images in the sequence of text images.
In one implementation manner, the dividing the time period for displaying the rendered texture object in the first texture object into a plurality of time points includes:
and determining the time interval between two adjacent time points according to a frame rate of 60 fps, and determining each time point in the time period according to the time interval.
In the embodiment of the application, the time period during which the rendered texture object is displayed in the first texture object is the interval from the moment the text appears in the first texture object to the moment it disappears. Specifically, assuming the text moves from right to left with the first texture object as the display area, the time period is the duration of the whole process in which the text starts to move left from the rightmost edge of the display area until its rightmost character just disappears.
In one implementation, the calculating the position of the rendered texture object in the first texture object at each point in time includes:
calculating the position of the rendered texture object in the first texture object according to the following formula:
Texture1_x = w_Texture0 − (w_Texture0 + w_Texture1) × t_%totalTime / totalTime
Texture1_y = Texture0_y + (h_Texture0 − h_Texture1) / 2
wherein Texture1_x represents the abscissa of the rendered texture object relative to the upper left corner of the first texture object at the current time point, Texture1_y represents the ordinate of the rendered texture object relative to the upper left corner of the first texture object at the current time point, w_Texture0 is the pixel width of the first texture object, w_Texture1 is the pixel width of the rendered texture object, t_%totalTime is the corresponding time of the current time point within the time period, totalTime is the duration of the time period, Texture0_y is the ordinate of the first texture object, h_Texture0 is the pixel height of the first texture object, and h_Texture1 is the pixel height of the rendered texture object.
In the embodiment of the application, t_%totalTime is the corresponding time of the current time point within the time period, and the absolute time needs to be converted into a time within the period when calculating. Since the text can be displayed repeatedly on the large screen, the same relative time point occurs in every period but at a different absolute display time, so a time conversion is required: the corresponding time within the period is obtained by taking the remainder of the absolute time with respect to the period. For example, suppose the period is 5 s and timing starts from the first display of the text, at which point t = 0; as long as the text display is not actively stopped, t keeps increasing. At, say, t = 32 s, taking the remainder with respect to the period gives a corresponding time of 2 s. That is, the display positions of the text at 32 s and at 2 s are the same.
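A sketch of the position calculation under the assumptions stated in the surrounding text: the text enters at the right edge of the first texture object and travels w_Texture0 + w_Texture1 pixels before fully leaving on the left, the ordinate centers the rendered texture vertically, and absolute time is first mapped into the period by taking the remainder.

```python
# Sketch of the position formula; parameter names mirror the variables in
# the surrounding description (w_Texture0, h_Texture1, etc.).
def rendered_position(t, total_time, w_texture0, w_texture1,
                      texture0_y, h_texture0, h_texture1):
    t_mod = t % total_time                 # e.g. t = 32 s with a 5 s period -> 2 s
    # horizontal travel covers the full display width plus the text width
    x = w_texture0 - (w_texture0 + w_texture1) * t_mod / total_time
    # vertical placement centers the rendered texture in the display area
    y = texture0_y + (h_texture0 - h_texture1) / 2
    return x, y
```

With a 5 s period, t = 32 s and t = 2 s yield the same position, matching the remainder example above.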
In the embodiment of the present application, the ordinate of the rendered texture object is calculated with the aim of vertically centering the two textures. Note that the ordinate of the rendered texture object need not be calculated with center alignment of the two textures as the target. For example, the rendered texture object may be set to be bottom-aligned, top-aligned, or at another relative position with respect to the first texture object. Accordingly, the calculation formula adjusted for the pixel height of the rendered texture object is:
Texture1_y = Texture0_y + ξ × (h_Texture0 − h_Texture1)
wherein ξ is a preset adjustment coefficient.
In one implementation manner, the overlaying the rendered texture object on the first texture object according to the position calculation result to obtain a text image corresponding to each time point includes:
processing the first texture object to restore the first texture object to be transparent texture after obtaining the text image corresponding to one time point;
and superposing the rendered texture object to the processed first texture object according to the position calculation result corresponding to the next time point to obtain a corresponding text image.
In the embodiment of the application, after the text image corresponding to a time point is obtained, the first texture object needs to be cleared, that is, restored to a transparent texture. The rendered texture object, however, does not need to be changed: because the complete character content is recorded, the rendered texture remains the same as long as the content is unchanged. Each time the text is rendered, only the position of the rendered texture object needs to be recalculated while the texture resource itself is reused, which saves video memory resources and improves rendering efficiency.
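A simplified sketch of this per-frame superposition loop, assuming GPU texture objects are replaced by plain dicts: the first texture is restored to transparent for every frame, while the rendered text texture is reused unchanged and only its overlay position is recomputed.

```python
# Sketch of the superposition step; position_of is a stand-in for the
# position calculation described earlier.
def build_frames(points, position_of):
    frames = []
    for t in points:
        first_texture = {"pixels": "transparent"}      # freshly cleared display-area texture
        first_texture["overlay_at"] = position_of(t)   # reuse rendered texture at a new position
        frames.append((t, first_texture))              # this snapshot is the text image for t
    return frames
```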
It should be noted that, since both the background image and the text image sequence need to be stored, the background image and the text image sequence may be stored in different memories, or may be stored in different positions in the same memory.
And S4, merging each text image in the text image sequence with the background image corresponding to the timestamp to generate a corresponding video file.
In the embodiment of the application, the background image and the text image sequence are fused: the background image serves as the bottom layer, the text image is superimposed on the upper layer at the position given by the configured text display area parameters, and each superposition forms a new image. After all images are merged, a video file is generated and stored on the hard disk. Since the text image sequence and the background image sequence differ in both the number of images and their timestamps, a selection method for pairing background and text images is given here. Specifically, a corresponding background image is found for each text image: for any image in the text image sequence with timestamp time_text_x, find the background image whose timestamp time_bg_y satisfies time_bg_y ≤ time_text_x < time_bg_(y+1); the background image corresponding to time_bg_y is then matched with the current text image. As one possible way, the video file may be generated from the generated image frames at a frame rate of 60 fps.
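The pairing rule above can be sketched with a binary search over the background timestamps; the timestamp values below are placeholders.

```python
# Sketch: for a text image at time_text_x, pick the background image whose
# timestamp time_bg_y satisfies time_bg_y <= time_text_x < time_bg_{y+1}.
import bisect

def matching_background(bg_times, time_text_x):
    # index of the last background timestamp not exceeding time_text_x
    i = bisect.bisect_right(bg_times, time_text_x) - 1
    return bg_times[max(i, 0)]
```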
And S5, converting the video file into a stream and distributing it to a large-screen processor, so that the large-screen processor controls the corresponding screen to display the corresponding bar screen signal.
To achieve a smooth effect, the large-screen processor needs to display each frame at a stable frequency (e.g., every 16 ms), which means the signal distribution side must also generate and distribute images at a stable frequency. In practice, however, the time required to generate each frame and convert it into a video stream in real time is rarely constant: sometimes it takes 20 ms, sometimes 14 ms. If each image were transmitted directly as it was produced, the intervals at which the large-screen processor displays the images would vary, producing a visibly unsmooth result. In the embodiment of the application, a video file is first generated from the image frames and then distributed as a stream, instead of streaming and distributing each frame immediately after it is generated. This avoids the uneven text movement that would otherwise result from the varying per-frame generation and conversion times causing the large-screen processor to receive video frames at irregular intervals.
Taking the example of bar screen information showing "popular xxxxx instruction", the final display effect is shown in fig. 2.
According to the embodiment of the invention, background rendering and text rendering are processed separately, the resulting background image and text image sequences are merged into a video file, the whole video file is converted and distributed, and finally the large-screen processor controls the corresponding screen to display the corresponding bar screen signal. In this way, bar screen signals can be superimposed without dedicated bar screen hardware, smooth display of the bar screen signal is realized, and the poor experience caused by stuttering and jumping of text movement in the bar screen signal is avoided.
The invention also provides a device for smoothing the signal image of the strip screen on the digital spliced wall, which can be used for executing the method for smoothing the signal image of the strip screen on the digital spliced wall.
Referring to fig. 3, fig. 3 is a block diagram illustrating structural connection of a signal image smoothing device for a strip screen on a digital spliced wall according to an embodiment of the present invention.
The embodiment of the invention provides a device for smoothing a signal image of a strip screen on a digital spliced wall, which comprises the following components:
the configuration module 1 is used for acquiring display parameter configuration data of the bar screen signal, and generating an HTML5 page for recording background element information and a JSON file for recording text element information according to the display parameter configuration data;
the background rendering module 2 is used for loading the HTML5 page by using a browser and rendering, storing the acquired rendering images of each frame as background images of the bar screen signals, and establishing a mapping relation between the time stamp of the acquired rendering images and the corresponding background images;
the text rendering module 3 is used for rendering text element information in the JSON file by using a rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of time stamps, and storing the text image sequence; a mapping relation is established between each text image in the text image sequence and a timestamp when the corresponding text image is generated;
A synthesizing module 4, configured to combine each text image in the text image sequence with a background image corresponding to the timestamp to generate a corresponding video file;
and the distribution module 5 is used for converting the video file into a stream and distributing it to the large-screen processor, so that the large-screen processor controls the corresponding screen to display the corresponding bar screen signal.
In one implementation, the background rendering module 2 includes:
and the first rendering unit is used for loading the HTML5 page by using a CEF interface of the browser and rendering by adopting an offline rendering mode.
In one implementation, the text element information includes a text parameter and a text display area parameter; the text rendering module 3 includes:
the creation unit is used for creating a transparent first texture object according to the character display area parameters;
the second rendering unit is used for creating a transparent second texture object according to the character parameters and rendering characters to the second texture object to obtain a rendered texture object;
the position calculation unit is used for dividing the time period of the display of the rendered texture object in the first texture object into a plurality of time points on average, calculating the position of the rendered texture object in the first texture object at each time point, and obtaining a position calculation result corresponding to each time point;
And the superposition processing unit is used for superposing the rendered texture object on the first texture object according to the position calculation result to obtain the text image corresponding to each time point, establishing a mapping relation between the timestamp of text image generation and the corresponding text image, and splicing the text images into a text image sequence in order from the smallest timestamp to the largest.
In one implementation, the location calculation unit includes:
a time point determining subunit, configured to determine a time interval between two adjacent time points at a frame rate of 60 fps, and determine each time point in the time period according to the time interval.
In one implementation, the location calculation unit includes:
a computing subunit configured to calculate a position of the rendered texture object in the first texture object according to:
Texture1_x = w_Texture0 − (w_Texture0 + w_Texture1) × t_%totalTime / totalTime
Texture1_y = Texture0_y + (h_Texture0 − h_Texture1) / 2
wherein Texture1_x represents the abscissa of the rendered texture object relative to the upper left corner of the first texture object at the current time point, Texture1_y represents the ordinate of the rendered texture object relative to the upper left corner of the first texture object at the current time point, w_Texture0 is the pixel width of the first texture object, w_Texture1 is the pixel width of the rendered texture object, t_%totalTime is the corresponding time of the current time point within the time period, totalTime is the duration of the time period, Texture0_y is the ordinate of the first texture object, h_Texture0 is the pixel height of the first texture object, and h_Texture1 is the pixel height of the rendered texture object.
In one implementation, the superposition processing unit includes:
the texture processing subunit is used for processing the first texture object to restore the first texture object to be transparent texture after obtaining the text image corresponding to one time point;
and the superposition subunit is used for superposing the rendered texture object to the processed first texture object according to the position calculation result corresponding to the next time point to obtain a corresponding text image.
In one possible implementation, the synthesis module 4 comprises:
and the video generating unit is used for generating the video file at a frame rate of 60 fps.
The invention also provides a device for smoothing the signal image of the strip screen on the digital spliced wall, which comprises the following components:
a memory for storing instructions; the instructions are used for implementing the method for smoothing the signal image of the strip screen on the digital spliced wall according to any one of the embodiments;
And the processor is used for executing the instructions in the memory.
The invention also provides a computer readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method for smoothing the signal image of the strip screen on the digital spliced wall according to any one of the above embodiments.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus, device and module described above may refer to corresponding procedures in the foregoing method embodiments, and specific beneficial effects of the apparatus, device and module described above may refer to corresponding beneficial effects in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus, device, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. The method for smoothing the signal image of the strip screen on the digital spliced wall is characterized by comprising the following steps of:
acquiring display parameter configuration data of a bar screen signal, and generating an HTML5 page for recording background element information and a JSON file for recording text element information according to the display parameter configuration data;
loading the HTML5 page by using a browser, rendering, storing the acquired rendering images of each frame as background images of the bar screen signals, and establishing a mapping relation between the time stamp of the acquired rendering images and the corresponding background images;
rendering the text element information in the JSON file by using a rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of the time stamp, and storing the text image sequence; a mapping relation is established between each text image in the text image sequence and a timestamp when the corresponding text image is generated;
Merging each text image in the text image sequence with a background image corresponding to the timestamp to generate a corresponding video file;
and converting the video file into a stream and distributing it to a large-screen processor, so that the large-screen processor controls the corresponding screen to display the corresponding bar screen signal.
2. The method for smoothing the signal image of the strip screen on the digital spliced wall according to claim 1, wherein the loading and rendering the HTML5 page by using a browser comprises:
and loading the HTML5 page by using a CEF interface of the browser, and rendering by adopting an offline rendering mode.
3. The method for smoothing a signal image of a bar screen on a digital spliced wall according to claim 1, wherein the text element information includes text parameters and text display area parameters; the method for rendering the text element information in the JSON file by using the rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of the time stamp comprises the following steps:
creating a transparent first texture object according to the character display area parameters;
creating a transparent second texture object according to the text parameters and rendering text to the second texture object to obtain a rendered texture object;
dividing the time period during which the rendered texture object is displayed in the first texture object evenly into a plurality of time points, and calculating, for each time point, the position of the rendered texture object in the first texture object, to obtain a position calculation result corresponding to each time point;
and superposing the rendered texture object on the first texture object according to the position calculation result to obtain a text image corresponding to each time point, establishing a mapping relation between the time stamp of the text image generation and the corresponding text image, and splicing the text images into a text image sequence according to the sequence from the small time stamp to the large time stamp.
4. A method of smoothing a digitally stitched wall bar screen signal image according to claim 3, wherein said equally dividing the time period for which the rendered texture object is displayed in the first texture object into a plurality of time points comprises:
and determining the time interval between two adjacent time points according to a frame rate of 60 fps, and determining each time point in the time period according to the time interval.
5. A method of smoothing a digitally stitched wall bar screen signal image according to claim 3, wherein said calculating the position of the rendered texture object in the first texture object at each point in time comprises:
Calculating the position of the rendered texture object in the first texture object according to the following formula:
Texture1_x = w_Texture0 − (w_Texture0 + w_Texture1) × t_%totalTime / totalTime
Texture1_y = Texture0_y + (h_Texture0 − h_Texture1) / 2
wherein Texture1_x represents the abscissa of the rendered texture object relative to the upper left corner of the first texture object at the current time point, Texture1_y represents the ordinate of the rendered texture object relative to the upper left corner of the first texture object at the current time point, w_Texture0 is the pixel width of the first texture object, w_Texture1 is the pixel width of the rendered texture object, t_%totalTime is the corresponding time of the current time point within the time period, totalTime is the duration of the time period, Texture0_y is the ordinate of the first texture object, h_Texture0 is the pixel height of the first texture object, and h_Texture1 is the pixel height of the rendered texture object.
6. The method for smoothing a signal image of a bar screen on a digital tiled wall according to claim 3, wherein the overlaying the rendered texture object on the first texture object according to the position calculation result, to obtain a text image corresponding to each time point, includes:
processing the first texture object to restore the first texture object to be transparent texture after obtaining the text image corresponding to one time point;
And superposing the rendered texture object to the processed first texture object according to the position calculation result corresponding to the next time point to obtain a corresponding text image.
7. The method of claim 1, wherein merging each text image in the sequence of text images with a corresponding time-stamped background image to generate a corresponding video file comprises:
the video file is generated at a frame rate of 60 fps.
8. A device for smoothing a signal image of a strip screen on a digital spliced wall, characterized by comprising:
the configuration module is used for acquiring display parameter configuration data of the bar screen signal, and generating an HTML5 page for recording background element information and a JSON file for recording text element information according to the display parameter configuration data;
the background rendering module is used for loading the HTML5 page by using a browser and rendering, storing the acquired rendering images of each frame as background images of the bar screen signals, and establishing a mapping relation between the time stamp of the acquired rendering images and the corresponding background images;
the text rendering module is used for rendering text element information in the JSON file by utilizing a rendering engine interface to obtain a text image sequence which is arranged according to the sequence from small to large of time stamps, and storing the text image sequence; a mapping relation is established between each text image in the text image sequence and a timestamp when the corresponding text image is generated;
The synthesizing module is used for merging each text image in the text image sequence with the background image corresponding to the timestamp to generate a corresponding video file;
and the distribution module is used for converting the video file into a stream and distributing it to the large-screen processor, so that the large-screen processor controls the corresponding screen to display the corresponding bar screen signal.
9. A digital splice wall panel signal image smoothing device, comprising:
a memory for storing instructions; the instructions are used for realizing the method for smoothing the signal image of the strip screen on the digital spliced wall according to any one of claims 1 to 7;
and the processor is used for executing the instructions in the memory.
10. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and the computer program, when executed by a processor, implements the method for smoothing the signal image of the strip screen on the digital spliced wall according to any one of claims 1 to 7.
CN202211501367.2A 2022-11-28 2022-11-28 Method, device and equipment for smoothing signal image of strip screen on digital spliced wall Pending CN115996302A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211501367.2A CN115996302A (en) 2022-11-28 2022-11-28 Method, device and equipment for smoothing signal image of strip screen on digital spliced wall


Publications (1)

Publication Number Publication Date
CN115996302A true CN115996302A (en) 2023-04-21

Family

ID=85989610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211501367.2A Pending CN115996302A (en) 2022-11-28 2022-11-28 Method, device and equipment for smoothing signal image of strip screen on digital spliced wall

Country Status (1)

Country Link
CN (1) CN115996302A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination