CN116506671A - Self-adaptive display method, system and storage medium for live stream picture real-time annotation - Google Patents

Self-adaptive display method, system and storage medium for live stream picture real-time annotation

Info

Publication number
CN116506671A
Authority
CN
China
Prior art keywords
coordinates
canvas area
texture
canvas
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310558730.2A
Other languages
Chinese (zh)
Inventor
陈泽
吴波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Weisaike Network Technology Co ltd
Original Assignee
Nanjing Weisaike Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Weisaike Network Technology Co ltd filed Critical Nanjing Weisaike Network Technology Co ltd
Priority to CN202310558730.2A priority Critical patent/CN116506671A/en
Publication of CN116506671A publication Critical patent/CN116506671A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/43074 Synchronising the rendering of multiple content streams or additional data on devices, of additional data with content streams on the same device, e.g. of EPG data or interactive icon with a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/43079 Synchronising the rendering of multiple content streams or additional data on devices, of additional data with content streams on multiple devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/4854 End-user interface for client configuration for modifying image parameters, e.g. image brightness, contrast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/485 End-user interface for client configuration
    • H04N 21/4858 End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an adaptive display method, system and storage medium for real-time annotation of live stream pictures, belonging to the technical field of streaming media processing. The method comprises the following steps: taking the displayed canvas area as the initial canvas area, calculating the aspect ratio of the initial canvas area, and setting the center point of the initial canvas area as an anchor point; receiving synchronous annotation data; detecting whether the canvas aspect ratio in the synchronous annotation data is consistent with the aspect ratio of the initial canvas area: if consistent, calculating a scaling ratio and adjusting the drop point coordinates by the scaling ratio to obtain corrected coordinates; if inconsistent, calculating an offset and a secondary offset and adjusting the drop point coordinates by the secondary offset to obtain corrected coordinates; coloring and restoring the annotation lines according to the corrected coordinates and texture colors, and rendering the restored lines into the canvas area of the client for display. By adapting the drop point coordinates, the annotations are displayed correctly on the screens of different clients.

Description

Self-adaptive display method, system and storage medium for live stream picture real-time annotation
Technical Field
The invention relates to the technical field of streaming media processing, and in particular to an adaptive display method, system and storage medium for real-time annotation of live stream pictures.
Background
In remote communication and discussion scenarios, shared content needs to be presented visually through a shared desktop or a live broadcast, for example by starting a video call to show the surrounding environment or by presenting document content through a live streaming platform. During sharing, however, other viewers can only annotate the live picture by taking screenshots, which leads to problems such as poor stability and clarity of the annotated picture, inconsistent screenshot sizes, poor real-time performance of the content, and key points of the annotation being easily missed.
The prior art therefore provides means for annotating live video while sharing files, but such annotation methods perform poorly across different screen sizes and cannot adapt the display of annotation content to different screens.
Disclosure of Invention
The invention aims to provide an adaptive display method, system and storage medium for real-time annotation of live stream pictures, in order to solve the problem that annotation content displays poorly on screens of different sizes. By adaptively adjusting the drop point coordinates of the brush, the annotation content is displayed synchronously; the method is largely unaffected by screen size, highly practical, and provides a better user experience.
In a first aspect, the invention achieves the above object by the following technical solution. An adaptive display method for live stream picture real-time annotation comprises the following steps:
accessing a live channel to obtain a live stream picture, setting the region of the live stream picture displayed in the display screen as the canvas area, the canvas area being the initial canvas area, calculating the aspect ratio of the initial canvas area, and setting the center point of the initial canvas area as an anchor point;
receiving synchronous annotation data, the synchronous annotation data being the drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area when another client draws annotation lines in its canvas area;
detecting whether the canvas aspect ratio in the synchronous annotation data is consistent with the aspect ratio of the initial canvas area:
if consistent, calculating the scaling ratio between each side length of the received canvas area and the corresponding side length of the current canvas area, and adjusting the drop point coordinates by the scaling ratio to obtain corrected coordinates;
if inconsistent, calculating the offset between the center point coordinates of the received canvas area and the anchor point coordinates of the initial canvas area, then calculating the secondary offset between the center point coordinates of the current canvas area and the anchor point coordinates of the initial canvas area, and adjusting the drop point coordinates by the secondary offset to obtain corrected coordinates;
and coloring and restoring the annotation lines according to the corrected coordinates and the texture color, and rendering the restored lines into the canvas area of the client for display.
Preferably, the method for obtaining the corrected coordinates may further comprise:
calculating the distance L between the drop point coordinates and each side of the received canvas area;
calculating the corresponding ratio between each side of the current canvas area and each side of the received canvas area;
and scaling each distance L by the corresponding ratio, and finding the coordinate point in the current canvas area that satisfies the scaled distances as the corrected coordinates.
Preferably, the method for recording the drop point coordinates of the brush in each frame comprises:
establishing a UV space coordinate system of the canvas area, the UV space coordinate system taking the anchor point as the coordinate origin;
listening for brush messages, starting the brush function, and judging whether the contact point between the brush and the display lies within the canvas area; if not, ending;
if so, recording the relative distance between the contact point and the origin, and calculating the drop point coordinates of the brush from the difference between them.
Preferably, the number of pixels occupied by the drop point coordinates on the canvas area is kept consistent with the number of pixels occupied by the thickness of a single-frame drawn line.
Preferably, the method for coloring and restoring the annotation lines from the corrected coordinates and the texture color comprises:
inputting the corrected coordinates into a vertex shader for UV space conversion, so that the corrected coordinates are converted into texture coordinates in the texture space of the canvas area;
correcting the offset and scaling of the texture coordinates according to the distribution of the pixel points in the texture color, and rendering the original line texture;
and sampling the color of the line texture and outputting the line color.
Preferably, the method for rendering the restored lines into the canvas area of the client for display comprises:
copying the line texture into the canvas area of the client;
overlaying and blending the copied line texture with the texture map of the client's canvas area using a screen post-processing method based on the Unity3D game engine;
and rendering the display result in the canvas area of the client.
Preferably, the data packet carrying the synchronous annotation data contains either all per-frame drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area, or the drop point coordinates of a single frame of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area.
In a second aspect, the invention achieves the above object by the following technical solution. An adaptive display system for live stream picture real-time annotation comprises:
a canvas unit, used to take the live stream picture with the default aspect ratio and default size as the initial canvas area after the client accesses the live channel, and to set the center point of the initial canvas area as an anchor point;
a UV space establishing unit, used to establish a UV space coordinate system with the anchor point as the origin, the UV space coordinate system being used to record the drop point coordinates of the brush;
an annotating unit, used to listen for brush messages and to record the drop point coordinates and texture color of the brush in each frame when a user draws lines;
a synchronizing unit, used to synchronize the annotation data to the other clients in the live channel, the synchronous annotation data comprising the drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area;
an adapting unit, used to detect whether the canvas aspect ratio in the synchronous annotation data is consistent with the aspect ratio of the initial canvas area: if consistent, calculating the scaling ratio between each side length of the received canvas area and the corresponding side length of the current canvas area, and adjusting the drop point coordinates by the scaling ratio to obtain corrected coordinates; if inconsistent, calculating the offset between the center point coordinates of the received canvas area and the anchor point coordinates of the initial canvas area, then calculating the secondary offset between the center point coordinates of the current canvas area and the anchor point coordinates of the initial canvas area, and adjusting the drop point coordinates by the secondary offset to obtain corrected coordinates;
and a restoring and display unit, used to color and restore the annotation lines according to the corrected coordinates output by the adapting unit and the texture color output by the synchronizing unit, and to render the restored lines into the canvas area of the client for display.
Preferably, the restoring and display unit comprises a shader module and a screen post-processing module;
the shader module is used to convert the corrected coordinates into texture coordinates in the texture space of the canvas area, correct the offset and scaling of the texture coordinates according to the distribution of the texture pixels, render the original line texture, sample the color of the line texture, and output the line color;
the screen post-processing module is used to copy the line texture into the canvas area of the client, overlay and blend the copied line texture with the texture map of the client's canvas area using a screen post-processing method based on the Unity3D game engine, and render the display result in the canvas area of the client.
In a third aspect, the invention achieves the above object by a storage medium having a computer program stored thereon which, when executed by a processor, implements the adaptive display method for live stream picture real-time annotation described in the first aspect.
Compared with the prior art, the invention has the following beneficial effects: when a user first accesses the live picture, the size and aspect ratio of the displayed canvas area are the defaults, and the canvas area at that moment is the initial canvas area. When the size of the canvas area changes because clients have different screen sizes, the annotated lines must be adapted before they are rendered into the canvas area of the displaying client. The adaptation corrects the drop point coordinates of the brush using a scaling ratio or an offset, and the corrected coordinates are then used for coloring, restoration and rendering, so that the annotated lines are accurately restored and displayed even when the canvas area changes size. This gives strong adaptability and improves the practical user experience.
Drawings
Fig. 1 is a flowchart of a method for adaptively displaying live streaming picture real-time annotation according to the present invention.
Fig. 2 is a schematic illustration of a client display interface according to the present invention.
FIG. 3 is a flowchart of another method for obtaining corrected coordinates according to the present invention.
Fig. 4 is a schematic diagram of an adaptive display system for live streaming picture real-time annotation according to the present invention.
Description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1
As shown in fig. 1, an adaptive display method for live streaming picture real-time annotation includes the following steps:
step S1, accessing a live channel to acquire a live stream picture, setting a region of the live stream picture displayed in a display screen as a canvas region, wherein the canvas region is an initial canvas region, calculating the height-width ratio of the initial canvas region, and setting the central point of the initial canvas region as an anchor point; when a client is connected to a live stream picture, the display size of the live stream picture is a default size, namely, a canvas area is an initial canvas area, but because the terminal equipment of a user using the client is different in size, for example, some clients are operated on a pc computer and some clients are operated on a mobile phone and a tablet, the size of the canvas area can be adjusted to enable the user to watch live contents more appropriately, the canvas area can be used for displaying the live stream picture, the function of drawing wholesale on the area by the user can be provided, the canvas area is taken as a display window of the live stream picture from the perspective of a viewer end, the canvas area is taken as a whole display screen from the perspective of a sharing end, the viewer end or the user of the sharing end can carry out annotating on the canvas area, and the whole 3D game engine is taken as an example, in the engine, the rendering area is equivalent to a part which is drawn into a material ball in a dot-line mode after the point is calculated and converted by a mouse (a drawing pen) drop point, and the drawing line can be displayed on the material ball by the user when the user draws a drawing tool on the drawing board, and the line can be realized.
Step S2: receive synchronous annotation data, the synchronous annotation data being the drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area when another client draws annotation lines in its canvas area. The drop point coordinates of the brush are calculated as follows:
establish a UV space coordinate system of the canvas area, the UV space coordinate system taking the anchor point as the coordinate origin. UV space coordinates are two-dimensional texture coordinates; by generating a UV space coordinate system for the canvas area, every point on which the brush acts can be expressed as a coordinate, so that the drawn line can be displayed by connecting the coordinates of all brush points. Using two-dimensional texture coordinates also makes it easier for the subsequent shader to restore the line from just the coordinates and the texture color;
listen for brush messages, start the brush function, and judge whether the contact point between the brush and the display lies within the canvas area; if not, end;
if so, record the relative distance between the contact point and the origin, and calculate the drop point coordinates of the brush from the difference between them.
Listening for brush messages includes detecting that a client has started the brush tool. The brush in the brush function can be controlled by one or more of mouse operation, stylus operation or capacitive pen operation. The client decides whether to start recording and generating annotation data according to whether the monitored client's brush tool has been started; once the user clicks the brush tool, the client knows that the user wants to annotate. Depending on the platform the client runs on, the user can draw annotation lines with a mouse, by touch, or with an external capacitive pen. For example, if the user watches the live stream on a PC, the brush tool is started with a mouse click and annotation lines are drawn by dragging the mouse; if the user watches on a mobile phone, the brush tool is started with a touch and annotation is done by sliding a finger; if the user watches on a tablet, the capacitive pen tool is started by tapping with the capacitive pen and annotation lines are drawn with the pen tip. The input mode therefore follows the selected tool, and modes such as mouse and capacitive pen can also coexist.
The number of pixels occupied by the drop point coordinates on the canvas area is kept consistent with the number of pixels occupied by the thickness of a single-frame drawn line.
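As an illustration of the drop point recording just described, the sketch below rejects contacts outside the canvas area and expresses a contact point relative to the anchor (the origin of the UV space coordinate system). The function name and parameters are hypothetical, and the screen-space layout (canvas origin at its top-left corner) is an assumption.

```python
def record_drop_point(contact_x: float, contact_y: float,
                      canvas_origin: tuple, canvas_size: tuple):
    """Return the brush drop point as an anchor-relative coordinate,
    or None if the contact lies outside the canvas area (sketch only)."""
    ox, oy = canvas_origin   # assumed top-left corner of the canvas on screen
    w, h = canvas_size       # canvas side lengths in pixels
    # discard contacts outside the canvas area
    if not (ox <= contact_x <= ox + w and oy <= contact_y <= oy + h):
        return None
    # the anchor (canvas center) is the origin of the UV space coordinate system
    anchor_x, anchor_y = ox + w / 2.0, oy + h / 2.0
    # drop point coordinate = contact position relative to the anchor
    return (contact_x - anchor_x, contact_y - anchor_y)
```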
Step S3: detect whether the canvas aspect ratio in the synchronous annotation data is consistent with the aspect ratio of the initial canvas area. This consistency check determines how the user has resized the canvas area: if the aspect ratios are consistent, the user has only scaled the canvas area proportionally; if they are inconsistent, the user has resized the canvas area by stretching a single side;
if consistent, calculate the scaling ratio between each side length of the received canvas area and the corresponding side length of the current canvas area, and adjust the drop point coordinates by the scaling ratio to obtain corrected coordinates;
if inconsistent, calculate the offset between the center point coordinates of the received canvas area and the anchor point coordinates of the initial canvas area, then calculate the secondary offset between the center point coordinates of the current canvas area and the anchor point coordinates of the initial canvas area, and adjust the drop point coordinates by the secondary offset to obtain corrected coordinates.
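The following Python sketch covers both branches of step S3. The scaling branch follows the text directly; in the offset branch, the way the offset and the secondary offset are combined into a single adjustment is an assumption of this sketch, as the description only states that both are computed and the drop point is adjusted by the secondary offset. All function and parameter names are hypothetical.

```python
def correct_drop_point(point: tuple,
                       received_size: tuple, current_size: tuple,
                       received_center: tuple, current_center: tuple,
                       anchor: tuple, eps: float = 1e-6) -> tuple:
    """Adapt a received drop point to the local canvas area (step S3, sketch)."""
    x, y = point
    rw, rh = received_size
    cw, ch = current_size
    # consistent aspect ratio -> the canvas was only scaled proportionally
    if abs(rh / rw - ch / cw) < eps:
        return (x * cw / rw, y * ch / rh)  # per-side scaling ratios
    # inconsistent aspect ratio -> offset and secondary offset relative to the anchor;
    # combining them by subtraction/addition is an assumption of this sketch
    offset = (received_center[0] - anchor[0], received_center[1] - anchor[1])
    secondary = (current_center[0] - anchor[0], current_center[1] - anchor[1])
    return (x + secondary[0] - offset[0], y + secondary[1] - offset[1])
```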
Step S4: color and restore the annotation lines according to the corrected coordinates and the texture color, and render the restored lines into the canvas area of the client for display. This is divided into a coloring and restoration step and a rendering step; coloring and restoration proceeds as follows:
the corrected coordinates are input into a vertex shader for UV space conversion, so that they are converted into texture coordinates in the texture space of the canvas area. The vertex shader is part of graphics APIs such as OpenGL or DirectX and forms the first stage of the graphics rendering pipeline. The UV space coordinates are two-dimensional vectors, and texture coordinates are the technique used to map a texture onto the UI surface; during the conversion the vertex shader normalizes the UV space coordinates of each input vertex against the size of the texture image, so that the UV coordinates are converted into texture coordinates in the range 0 to 1, and the normalized texture coordinates are output for processing by the subsequent graphics processing stages;
the offset and scaling of the texture coordinates are corrected according to the distribution of the texture pixels, and the original line texture is rendered. This step is performed by the graphics processing unit, and the correction serves to clean up the rendered line, for example removing white or black fringes generated around it;
the color of the line texture is sampled and the line color is output. The color is sampled from the texture color that corresponds to the corrected coordinates in the annotation data, so the line color is obtained and passed on to the subsequent steps.
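A minimal sketch of the normalization the vertex shader is described as performing: an anchor-relative corrected coordinate is shifted back to the canvas corner and divided by the canvas (texture) size, giving a texture coordinate in the 0 to 1 range. The function is hypothetical and the anchor-centered pixel frame is an assumption carried over from the earlier sketches.

```python
def to_texture_coords(corrected: tuple, canvas_size: tuple) -> tuple:
    """Normalize an anchor-relative corrected coordinate into the [0, 1]
    texture space of the canvas area (illustrative sketch)."""
    x, y = corrected
    w, h = canvas_size
    # shift from the anchor-centered frame back to the canvas corner,
    # then divide by the canvas (texture) size to land in [0, 1]
    u = (x + w / 2.0) / w
    v = (y + h / 2.0) / h
    # clamp to guard against rounding just outside the canvas
    return (min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0))
```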
After the vertex shader has performed the coloring and restoration, its output still has to be overlaid and blended before it can be displayed in the canvas areas of the other clients; even though the lines and their colors have been rendered and restored, the user cannot see them until the overlay blending has been done. The method for rendering the restored lines into the canvas area of the client is as follows:
copy the line texture into the canvas area of the client;
overlay and blend the copied line texture with the texture map of the client's canvas area using a screen post-processing method based on the Unity3D game engine. Screen post-processing is a technique for processing a rendered image into its final effect; it is a form of image processing and a widely used, mature technique, so its principle is not repeated here;
render the display result in the canvas area of the client. The final display effect is shown in fig. 2.
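The overlay blending itself can be illustrated, outside of Unity3D, as a simple per-pixel alpha blend: where the line texture is transparent the live frame shows through, and opaque line pixels are drawn on top. This NumPy sketch assumes both textures are RGBA arrays of the same size and stands in for the engine's screen post-processing pass rather than reproducing it.

```python
import numpy as np


def blend_line_texture(canvas_rgba: np.ndarray, line_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend the restored line texture over the canvas texture (sketch)."""
    # per-pixel opacity of the line texture, in [0, 1]
    alpha = line_rgba[..., 3:4].astype(np.float32) / 255.0
    out = canvas_rgba.astype(np.float32).copy()
    # opaque line pixels replace the frame; transparent ones leave it untouched
    out[..., :3] = line_rgba[..., :3] * alpha + canvas_rgba[..., :3] * (1.0 - alpha)
    return out.astype(np.uint8)
```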
In step S2, the data packet carrying the synchronous annotation data contains either all per-frame drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area, or the drop point coordinates of a single frame of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area. The synchronous annotation data can therefore be transmitted in either of two modes. In the first mode, because the packet must contain all per-frame drop point coordinates, the annotating client synchronizes the annotation data only after the annotation is finished, after which every client can view the annotation content. In the second mode, there is no need to wait for the annotating client to finish; only the drop point coordinates of a single frame are transmitted, so while the annotating client keeps drawing, the other clients can already see the annotation content in real time. Either mode can be chosen as required, and selecting the appropriate transmission mode for the application scenario improves the user experience.
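The packet layout below is a hypothetical illustration of the two transmission modes: the same structure carries either a whole stroke's per-frame drop points (mode one) or a single frame's drop point (mode two). The class name and fields are assumptions, not the disclosed wire format.

```python
from dataclasses import dataclass


@dataclass
class AnnotationPacket:
    """Hypothetical layout of a synchronous annotation packet.

    In mode one, 'points' carries every per-frame drop point of a finished
    stroke; in mode two it carries a single frame's drop point, so viewers
    see the stroke grow in real time."""
    points: list           # list of (x, y) drop point coordinates
    texture_color: tuple   # RGBA color of the line
    canvas_width: float    # side lengths of the sender's canvas area
    canvas_height: float

    @property
    def aspect_ratio(self) -> float:
        return self.canvas_height / self.canvas_width


# Mode two: one packet per frame while the annotator is still drawing.
packet = AnnotationPacket(points=[(12.5, -40.0)],
                          texture_color=(255, 0, 0, 255),
                          canvas_width=1920.0, canvas_height=1080.0)
```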
Example 2
As shown in fig. 3, compared with the way the drop point coordinates are adjusted into corrected coordinates in embodiment 1, the corrected coordinates may alternatively be obtained as follows:
step 301, calculate the distance L between the drop point coordinates and each side of the received canvas area; the canvas area is generally rectangular, so with the drop point coordinates known, the distances from the point to the four sides can be calculated;
step 302, calculate the corresponding ratio between each side of the current canvas area and each side of the received canvas area. Compared with embodiment 1, this step does not need to determine how the canvas area was resized, i.e. there is no need to compare the canvas area of the displaying client or of the annotating client with the initial canvas area; the ratio between corresponding sides of the current and received canvas areas directly expresses how the current canvas area has changed relative to the received one;
step 303, scale each distance L by the corresponding ratio, and find the coordinate point in the current canvas area that satisfies the scaled distances as the corrected coordinates. Since the change ratio is known, adjusting the distances L by this ratio reproduces, in the current canvas area, the same drop point that existed in the annotating client's canvas area, so the point that satisfies the adjusted distances is taken as the corrected coordinates.
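A sketch of embodiment 2 under the assumption that the drop point is described by its distances to the left, right, top and bottom sides of the received canvas, and that the returned corrected coordinates are measured from the top-left corner of the current canvas. The function name and the (left, right, top, bottom) ordering are hypothetical.

```python
def correct_by_edge_distances(distances: tuple,
                              received_size: tuple,
                              current_size: tuple) -> tuple:
    """Scale a drop point's distances to the four canvas sides by the
    per-side change ratios and read back the corrected coordinates (sketch)."""
    left, right, top, bottom = distances   # distances to the received canvas sides
    rw, rh = received_size
    cw, ch = current_size
    sx, sy = cw / rw, ch / rh              # per-side change ratios
    # sanity check: the scaled distances must still span the current canvas
    assert abs((left + right) * sx - cw) < 1e-6
    assert abs((top + bottom) * sy - ch) < 1e-6
    # the corrected coordinates follow directly from the scaled left/top distances
    return (left * sx, top * sy)
```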
Example 3
As shown in fig. 4, an adaptive display system for live stream picture real-time annotation comprises:
a canvas unit, used to take the live stream picture with the default aspect ratio and default size as the initial canvas area after the client accesses the live channel, and to set the center point of the initial canvas area as an anchor point;
a UV space establishing unit, used to establish a UV space coordinate system with the anchor point as the origin, the UV space coordinate system being used to record the drop point coordinates of the brush;
an annotating unit, used to listen for brush messages and to record the drop point coordinates and texture color of the brush in each frame when a user draws lines;
a synchronizing unit, used to synchronize the annotation data to the other clients in the live channel, the synchronous annotation data comprising the drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area;
an adapting unit, used to detect whether the canvas aspect ratio in the synchronous annotation data is consistent with the aspect ratio of the initial canvas area: if consistent, calculating the scaling ratio between each side length of the received canvas area and the corresponding side length of the current canvas area, and adjusting the drop point coordinates by the scaling ratio to obtain corrected coordinates; if inconsistent, calculating the offset between the center point coordinates of the received canvas area and the anchor point coordinates of the initial canvas area, then calculating the secondary offset between the center point coordinates of the current canvas area and the anchor point coordinates of the initial canvas area, and adjusting the drop point coordinates by the secondary offset to obtain corrected coordinates;
and a restoring and display unit, used to color and restore the annotation lines according to the corrected coordinates output by the adapting unit and the texture color output by the synchronizing unit, and to render the restored lines into the canvas area of the client for display.
Preferably, the restoring and display unit comprises a shader module and a screen post-processing module;
the shader module is used to convert the corrected coordinates into texture coordinates in the texture space of the canvas area, correct the offset and scaling of the texture coordinates according to the distribution of the texture pixels, render the original line texture, sample the color of the line texture, and output the line color;
the screen post-processing module is used to copy the line texture into the canvas area of the client, overlay and blend the copied line texture with the texture map of the client's canvas area using a screen post-processing method based on the Unity3D game engine, and render the display result in the canvas area of the client.
Since the methods adopted in embodiment 3 and embodiment 1 are substantially the same, the working principle of the individual unit modules is not described again in detail.
Example 4
This embodiment provides a storage medium comprising a program storage area and a data storage area. The program storage area can store an operating system, programs required for running an instant messaging function, and so on; the data storage area can store various instant messaging information, operation instruction sets, and so on. A computer program is stored in the program storage area, and when executed by a processor it implements the adaptive display method for live stream picture real-time annotation described in embodiment 1. The processor may comprise one or more central processing units (CPUs), a digital processing unit, or the like.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments and that it may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are intended to be embraced therein.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is adopted only for clarity; the specification should be taken as a whole, and the technical solutions in the individual embodiments may be combined as appropriate to form other embodiments that will be apparent to those skilled in the art.

Claims (10)

1. An adaptive display method for live stream picture real-time annotation, characterized by comprising the following steps:
accessing a live channel to obtain a live stream picture, setting the region of the live stream picture displayed in the display screen as the canvas area, the canvas area being the initial canvas area, calculating the aspect ratio of the initial canvas area, and setting the center point of the initial canvas area as an anchor point;
receiving synchronous annotation data, the synchronous annotation data being the drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area when another client draws annotation lines in its canvas area;
detecting whether the canvas aspect ratio in the synchronous annotation data is consistent with the aspect ratio of the initial canvas area:
if consistent, calculating the scaling ratio between each side length of the received canvas area and the corresponding side length of the current canvas area, and adjusting the drop point coordinates by the scaling ratio to obtain corrected coordinates;
if inconsistent, calculating the offset between the center point coordinates of the received canvas area and the anchor point coordinates of the initial canvas area, then calculating the secondary offset between the center point coordinates of the current canvas area and the anchor point coordinates of the initial canvas area, and adjusting the drop point coordinates by the secondary offset to obtain corrected coordinates;
and coloring and restoring the annotation lines according to the corrected coordinates and the texture color, and rendering the restored lines into the canvas area of the client for display.
2. The adaptive display method for live stream picture real-time annotation according to claim 1, wherein the method for obtaining the corrected coordinates further comprises:
calculating the distance L between the drop point coordinates and each side of the received canvas area;
calculating the corresponding ratio between each side of the current canvas area and each side of the received canvas area;
and scaling each distance L by the corresponding ratio, and finding the coordinate point in the current canvas area that satisfies the scaled distances as the corrected coordinates.
3. The adaptive display method for live stream picture real-time annotation according to claim 1 or 2, wherein the method for recording the drop point coordinates of the brush in each frame comprises:
establishing a UV space coordinate system of the canvas area, the UV space coordinate system taking the anchor point as the coordinate origin;
listening for brush messages, starting the brush function, and judging whether the contact point between the brush and the display lies within the canvas area; if not, ending;
and if so, recording the relative distance between the contact point and the origin, and calculating the drop point coordinates of the brush from the difference between them.
4. The adaptive display method for live stream picture real-time annotation according to claim 1 or 2, further comprising keeping the number of pixels occupied by the drop point coordinates on the canvas area consistent with the number of pixels occupied by the thickness of a single-frame drawn line.
5. The adaptive display method for live stream picture real-time annotation according to claim 1 or 2, wherein the method for coloring and restoring the annotation lines from the corrected coordinates and the texture color comprises:
inputting the corrected coordinates into a vertex shader for UV space conversion, so that the corrected coordinates are converted into texture coordinates in the texture space of the canvas area;
correcting the offset and scaling of the texture coordinates according to the distribution of the pixel points in the texture color, and rendering the original line texture;
and sampling the color of the line texture and outputting the line color.
6. The adaptive display method for live stream picture real-time annotation according to claim 5, wherein the method for rendering the restored lines into the canvas area of the client for display comprises:
copying the line texture into the canvas area of the client;
overlaying and blending the copied line texture with the texture map of the client's canvas area using a screen post-processing method based on the Unity3D game engine;
and rendering the display result in the canvas area of the client.
7. The adaptive display method for live stream picture real-time annotation according to claim 1 or 2, wherein the data packet carrying the synchronous annotation data contains either all per-frame drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area, or the drop point coordinates of a single frame of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area.
8. An adaptive display system for live stream picture real-time annotation, characterized in that the system comprises:
a canvas unit, used to take the live stream picture with the default aspect ratio and default size as the initial canvas area after the client accesses the live channel, and to set the center point of the initial canvas area as an anchor point;
a UV space establishing unit, used to establish a UV space coordinate system with the anchor point as the origin, the UV space coordinate system being used to record the drop point coordinates of the brush;
an annotating unit, used to listen for brush messages and to record the drop point coordinates and texture color of the brush in each frame when a user draws lines;
a synchronizing unit, used to synchronize the annotation data to the other clients in the live channel, the synchronous annotation data comprising the drop point coordinates of the brush, the texture color of the lines, and the side lengths and aspect ratio of the canvas area;
an adapting unit, used to detect whether the canvas aspect ratio in the synchronous annotation data is consistent with the aspect ratio of the initial canvas area: if consistent, calculating the scaling ratio between each side length of the received canvas area and the corresponding side length of the current canvas area, and adjusting the drop point coordinates by the scaling ratio to obtain corrected coordinates; if inconsistent, calculating the offset between the center point coordinates of the received canvas area and the anchor point coordinates of the initial canvas area, then calculating the secondary offset between the center point coordinates of the current canvas area and the anchor point coordinates of the initial canvas area, and adjusting the drop point coordinates by the secondary offset to obtain corrected coordinates;
and a restoring and display unit, used to color and restore the annotation lines according to the corrected coordinates output by the adapting unit and the texture color output by the synchronizing unit, and to render the restored lines into the canvas area of the client for display.
9. The adaptive display system for live stream picture real-time annotation according to claim 8, wherein the restoring and display unit comprises a shader module and a screen post-processing module;
the shader module is used to convert the corrected coordinates into texture coordinates in the texture space of the canvas area, correct the offset and scaling of the texture coordinates according to the distribution of the texture pixels, render the original line texture, sample the color of the line texture, and output the line color;
the screen post-processing module is used to copy the line texture into the canvas area of the client, overlay and blend the copied line texture with the texture map of the client's canvas area using a screen post-processing method based on the Unity3D game engine, and render the display result in the canvas area of the client.
10. A storage medium having a computer program stored thereon which, when executed by a processor, implements the adaptive display method for live stream picture real-time annotation according to any of claims 1-7.
CN202310558730.2A 2023-05-18 2023-05-18 Self-adaptive display method, system and storage medium for live stream picture real-time annotation Pending CN116506671A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310558730.2A CN116506671A (en) 2023-05-18 2023-05-18 Self-adaptive display method, system and storage medium for live stream picture real-time annotation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310558730.2A CN116506671A (en) 2023-05-18 2023-05-18 Self-adaptive display method, system and storage medium for live stream picture real-time annotation

Publications (1)

Publication Number Publication Date
CN116506671A true CN116506671A (en) 2023-07-28

Family

ID=87328327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310558730.2A Pending CN116506671A (en) 2023-05-18 2023-05-18 Self-adaptive display method, system and storage medium for live stream picture real-time annotation

Country Status (1)

Country Link
CN (1) CN116506671A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination