CN114168098A - Data synchronization method, device, equipment and storage medium of electronic whiteboard - Google Patents

Data synchronization method, device, equipment and storage medium of electronic whiteboard

Info

Publication number
CN114168098A
CN114168098A (Application No. CN202111510120.2A)
Authority
CN
China
Prior art keywords
electronic whiteboard
signaling
rendering
operation instruction
terminal device
Prior art date
Legal status
Pending
Application number
CN202111510120.2A
Other languages
Chinese (zh)
Inventor
常乐
Current Assignee
Tianjin Hongen Perfect Future Education Technology Co ltd
Original Assignee
Tianjin Hongen Perfect Future Education Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Hongen Perfect Future Education Technology Co ltd filed Critical Tianjin Hongen Perfect Future Education Technology Co ltd
Priority to CN202111510120.2A
Publication of CN114168098A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the invention provides a data synchronization method, device, equipment and storage medium for an electronic whiteboard. The method includes: in response to an operation instruction input on an electronic whiteboard displayed by a first terminal device, executing, through a rendering interface preset in a local service layer, a rendering flow corresponding to the operation instruction on an initial image to be processed to obtain a target image, where the rendering interface matches the type of the first terminal device; generating, based on the operation instruction, a signaling object containing an operation object, where the operation object corresponds to the operation instruction; and synchronously transmitting the signaling object and/or the target image to a second terminal device displaying the electronic whiteboard. The method achieves data synchronization of the electronic whiteboard among multiple terminal devices and keeps the whiteboard content displayed on each terminal device consistent, without designing multiple sets of rendering logic for different operating systems or rendering engines, thereby improving the presentation effect and user experience when multiple users interact through the electronic whiteboard.

Description

Data synchronization method, device, equipment and storage medium of electronic whiteboard
Technical Field
The present invention relates to the field of electronic whiteboards, and in particular, to a method, an apparatus, a device, and a storage medium for synchronizing data of an electronic whiteboard.
Background
With the increasing intelligence of devices, electronic whiteboards (also called interactive whiteboards) have been widely applied to interactive scenarios such as online teaching, video conferencing, and audio/video live streaming.
Taking online teaching as an example, every user who communicates and interacts through an electronic whiteboard should see the same whiteboard. To this end, the content drawn by multiple users needs to be displayed synchronously on the electronic whiteboard, so that multiple users appear to draw on the same whiteboard together.
In the related art, however, different users often open the electronic whiteboard on different terminal devices, such as mobile phones, desktop computers, tablet computers, smart televisions, and smart projection devices. When multiple terminal devices interact with one electronic whiteboard, network jitter, delay, packet loss, out-of-order delivery, and similar problems may cause the devices to receive the synchronization data in different orders, or even to miss part of it. The whiteboards rendered on the different terminal devices then become inconsistent, which degrades the presentation effect during interaction and harms the user experience.
Therefore, a technical solution is needed to solve the problem of data synchronization of the electronic whiteboard.
Disclosure of Invention
The embodiment of the invention provides a data synchronization method, a data synchronization device, data synchronization equipment and a storage medium of an electronic whiteboard, which are used for realizing data synchronization of the electronic whiteboard across platforms and operating systems and improving the demonstration effect and user experience when multiple users use the electronic whiteboard for interaction.
In a first aspect, an embodiment of the present invention provides a data synchronization method for an electronic whiteboard, where the method includes:
in response to an operation instruction input on an electronic whiteboard displayed by a first terminal device, executing, through a rendering interface preset in a local service layer, a rendering flow corresponding to the operation instruction on an initial image to be processed to obtain a target image, where the rendering interface matches the type of the first terminal device;
generating a signaling object containing an operation object based on the operation instruction, where the operation object corresponds to the operation instruction;
and synchronously transmitting the signaling object and/or the target image to a second terminal device displaying the electronic whiteboard.
In a second aspect, an embodiment of the present invention provides a data synchronization apparatus for an electronic whiteboard, where the apparatus includes:
the rendering module is used for responding to an operation instruction input in the electronic whiteboard displayed by the first terminal device, executing a rendering process corresponding to the operation instruction in the initial image to be processed through a rendering interface preset in the local service layer to obtain a target image, wherein the rendering interface is matched with the type of the terminal device used for displaying the electronic whiteboard;
the generating module is used for generating a signaling object containing an operation object based on the operation instruction, wherein the operation object corresponds to the operation instruction;
and the synchronization module is used for synchronously transmitting the signaling object and/or the target image to the second terminal equipment with the electronic whiteboard.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, a communication interface; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to implement at least the data synchronization method of an electronic whiteboard of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory machine-readable storage medium, on which executable code is stored, and when the executable code is executed by a processor of an electronic device, the processor is enabled to implement at least the data synchronization method of an electronic whiteboard according to the first aspect.
In the scheme provided by the embodiment of the invention, in response to an operation instruction input on an electronic whiteboard displayed by a first terminal device, a rendering flow corresponding to the operation instruction is executed on an initial image to be processed through a rendering interface preset in a local service layer to obtain a target image, where the rendering interface matches the type of the first terminal device; a signaling object containing an operation object is generated based on the operation instruction, where the operation object corresponds to the operation instruction; and the signaling object and/or the target image are synchronously transmitted to a second terminal device displaying the electronic whiteboard. In this scheme, the target images to be displayed on various types of first terminal devices are rendered through the rendering interface preset in the local service layer, and the signaling object and/or the target images are synchronously transmitted to the second terminal devices displaying the same electronic whiteboard. Data synchronization of the electronic whiteboard among multiple terminal devices is thus achieved, the whiteboard content displayed on each terminal device stays consistent, and the presentation effect and user experience during multi-user interaction are greatly improved. In addition, with this scheme there is no need to design multiple sets of rendering logic for different operating systems or rendering engines: consistent whiteboard content can be achieved across platforms such as the Android operating system, the iOS operating system, the Unity engine, and the Cocos engine, which greatly reduces the difficulty of developing and maintaining a cross-platform electronic whiteboard.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a data synchronization method for an electronic whiteboard according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an operation instruction processing flow according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a signaling synchronization flow according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a data synchronization scenario according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a packet splitting procedure according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of another data synchronization scenario according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of another data synchronization scenario according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of another data synchronization scenario according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a rendering pipeline according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a data synchronization apparatus for an electronic whiteboard according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of an electronic device corresponding to the data synchronization apparatus of the electronic whiteboard provided in the embodiment shown in Fig. 10.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The data synchronization method of the electronic whiteboard provided by the embodiment of the present invention may be executed by an electronic device, where the electronic device may be a terminal device with data processing capability, such as a computer, a notebook computer, a smart phone, or a server. The server may be a physical server including an independent host, or may also be a virtual server, or may also be a cloud server or a server cluster.
With the development of device intelligence and related software, electronic whiteboards are used in a wide variety of interactive scenarios, such as: during the video conference, calling an electronic whiteboard in video conference software to perform conference explanation demonstration; in the intelligent teaching process, subject knowledge is explained in modes of annotating, drawing and the like through an electronic whiteboard in an intelligent blackboard.
Like an ordinary whiteboard or a teaching blackboard, an electronic whiteboard can be written or drawn on directly. The difference is that an electronic whiteboard is supported by dedicated applications and can communicate with other electronic devices, such as notebook computers, tablet computers, and smart blackboards, so that the writing or drawing results are stored and displayed electronically.
In a scenario using an electronic whiteboard, multiple users may interact with each other using the electronic whiteboard. In this process, the results drawn by multiple users need to be synchronously displayed on the electronic whiteboard, so that the electronic whiteboard seen by each user is the same, and the effect that multiple users draw on the same electronic whiteboard together is achieved.
However, in the related art, a plurality of users often open the electronic whiteboard through different terminal devices, such as a mobile phone, a desktop, a tablet computer, a smart television, a smart projection device, and the like. The related technologies mainly include frame synchronization, state synchronization, an ordered synchronization policy established based on a Transmission Control Protocol (TCP), and a User Datagram Protocol (UDP) based custom synchronization policy. However, these solutions have the following drawbacks:
first, the number of users supported by frame synchronization is limited. Taking online education as an example, frame synchronization is only applicable to small class scenes. If the number of the users accessing the class is too large, the operation data of all the users need to be synchronized in each frame, the calculation pressure of the client is too large, and the traffic is occupied too much.
Second, state synchronization is limited by the amount of data. If too many handwriting are drawn in the paging, the data volume required to be synchronized is large, the drawing delay is high, the condition that the handwriting is inconsistent with the hand operation occurs when a user draws, and the user interaction experience is influenced. Moreover, the server needs to perform calculation every time of synchronization, and the playback function of the electronic whiteboard is difficult to realize.
Thirdly, an ordered synchronization strategy constructed based on TCP is characterized in that the TCP belongs to a transport layer protocol and cannot be modified in an application layer where a client is located, so that conditions such as weak network and network jitter are difficult to meet the real-time requirement, the drawing delay is high, the drawing is easy to be stuck and disconnected, and the user experience is influenced.
Fourthly, the user-defined synchronization strategy based on the UDP leads to the situation that the disorder and even the loss of the signaling easily occur in the signaling transmission process because the length of a single signaling is limited.
Clearly, in the related art, when multiple terminal devices interact on the same electronic whiteboard, network jitter, delay, packet loss, out-of-order delivery, and similar problems may cause the devices to receive the synchronization data in different orders, or even to miss part of it, so that the whiteboards rendered on the different terminal devices become inconsistent. The presentation effect obtained during interaction then deteriorates and the user experience suffers. How to achieve data synchronization of the electronic whiteboard has therefore become an urgent technical problem.
Particularly, under the condition of multiple platforms and multiple engines, how to implement data synchronization of the electronic whiteboard across platforms and operating systems becomes a technical problem to be solved urgently.
An embodiment of the present invention provides a data synchronization method for an electronic whiteboard, as shown in fig. 1. Fig. 1 is a flowchart of a data synchronization method of an electronic whiteboard according to an embodiment of the present invention, which may include the following steps:
101. In response to an operation instruction input on the electronic whiteboard displayed by a first terminal device, execute, through a rendering interface preset in the local service layer, a rendering flow corresponding to the operation instruction on an initial image to be processed to obtain a target image.
102. Generate a signaling object containing an operation object based on the operation instruction.
103. Synchronously transmit the signaling object and/or the target image to a second terminal device displaying the electronic whiteboard.
In these steps, the target images to be displayed on various types of first terminal devices can be rendered in real time through the rendering interface preset in the local service layer, and the signaling object and/or the target images are synchronously transmitted to the second terminal devices displaying the same electronic whiteboard. Data synchronization of the electronic whiteboard among multiple terminal devices is thus achieved, the whiteboard content displayed on each terminal device stays consistent, and the presentation effect and user experience during multi-user interaction are greatly improved.
In addition, with this scheme there is no need to design multiple sets of rendering logic for different operating systems or rendering engines: consistent whiteboard content can be achieved across platforms such as the Android operating system, the iOS operating system, the Unity engine, and the Cocos engine, which greatly reduces the difficulty of developing and maintaining a cross-platform electronic whiteboard.
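To make the three steps above concrete, the following Kotlin sketch strings them together. All type and function names (Operation, SignalingObject, Renderer, Broadcaster, handleOperation) are assumptions introduced for illustration and do not come from the patent; the real rendering and network calls are abstracted away.

```kotlin
data class Operation(val type: String, val points: List<Pair<Float, Float>>)
data class SignalingObject(val pageId: Int, val operation: Operation, val timestampMs: Long)

// Stand-ins for the local-service-layer renderer and the communication layer.
interface Renderer { fun render(op: Operation): Long }           // returns a texture id kept in GPU memory
interface Broadcaster { fun broadcast(sig: SignalingObject, textureId: Long?) }

fun handleOperation(pageId: Int, op: Operation, renderer: Renderer, net: Broadcaster) {
    val textureId = renderer.render(op)                                // step 101: native-layer rendering
    val sig = SignalingObject(pageId, op, System.currentTimeMillis())  // step 102: signaling object
    net.broadcast(sig, textureId)                                      // step 103: sync to peer terminals
}

fun main() {
    val renderer = object : Renderer { override fun render(op: Operation) = 1L }
    val net = object : Broadcaster {
        override fun broadcast(sig: SignalingObject, textureId: Long?) =
            println("broadcast ${sig.operation.type} on page ${sig.pageId}, texture=$textureId")
    }
    handleOperation(1, Operation("Paint", listOf(0f to 0f, 1f to 1f)), renderer, net)
}
```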
In step 101, in response to an operation instruction input on the electronic whiteboard displayed by the first terminal device, a rendering flow corresponding to the operation instruction is executed on an initial image to be processed through a rendering interface preset in the local service layer to obtain a target image. Specifically, the first terminal device is the device used by the current user to operate the electronic whiteboard. It may be an electronic device running an application program that provides the electronic whiteboard function, for example a notebook computer or a tablet computer, or it may be a large screen, a smart blackboard, a digital drawing board, or the like used as an electronic whiteboard, which is not limited here.
The operation instructions input on the electronic whiteboard include, but are not limited to, drawing, copying, pasting, deleting (i.e., erasing), batch dragging, and undo/redo. Each operation instruction has its corresponding operation. In practical applications, the operation instruction may be, for example, a touch instruction issued by a stylus, a drawing instruction issued through a touch screen, or an instruction issued by selecting a brush control and dragging it with a mouse. After the user's operation instruction is obtained, the touch track corresponding to the operation instruction is acquired and transmitted to the local service layer. In this embodiment, a user can input a touch track (Path) on the electronic whiteboard through interaction with the terminal device. The touch track includes, but is not limited to, the geometric figures and/or characters that the user wants to draw on the electronic whiteboard. It is understood that geometric figures include open figures and closed figures: open figures include points, straight lines, arcs, and the like, while closed figures include polygons and ellipses. Because open and closed figures differ greatly in their graphical features, their geometric figures are rendered in different ways.
The operation instructions include but are not limited to drawing, copying, pasting, deleting, and batch dragging. Optionally, if it is detected that the user issues any one of the above operation instructions to any one of the pages in the electronic whiteboard, the redrawn identifier corresponding to the page is updated, and the redrawn identifier is used to trigger the rendering process of the page. For example, a movement track which is intended to be sent out in the electronic whiteboard by the user through a stylus pen is identified according to the touch instruction, and the movement track is taken as a touch track. Specifically, the splitting may be performed according to a data type to which the user input data belongs, to obtain track information (such as a wire frame texture, a shape, a size, and the like) corresponding to the input data, so as to form a touch track corresponding to the operation instruction. For example, geometric figures and/or characters which are intended to be hand-drawn in the electronic whiteboard by the user are identified according to the drawing instruction, and tracks obtained by hand-drawing are used as touch tracks.
Besides, it can be assumed that the input mode of the electronic whiteboard by multiple users is a brush control. The brush control may be a stylus pen for issuing an operation instruction, or may be a virtual control provided in an application program. After the user selects a drawing type (for example, graffiti or a pen touch with a specific shape) corresponding to the brush control, drawing may be performed by using the corresponding drawing type, so that a touch track of a corresponding data type is generated in the electronic whiteboard, and the touch track is used for recording at least one position information input by the user. Optionally, the texture type and/or color corresponding to the touch track can be adjusted through the brush control, so that different types of brush strokes are created, the visual effect of the electronic whiteboard is further improved, and the communication mode of a user is enriched.
In this embodiment, the local service layer refers to the native platform used by the terminal device to present the electronic whiteboard. The native platform may be the operating system carried by the terminal device, such as the iOS operating system, the Android operating system, or the Windows operating system, or it may be the rendering engine used by the terminal device, such as the Unity rendering engine or the Cocos rendering engine. In the local service layer, applications preset in the terminal device and underlying resources can be called. The interfaces that can be called, and the ways of calling them, differ between operating systems and rendering engines.
The local service layer comprises an initial image to be processed. Here, to be processed means that an initial image needs to be rendered into the electronic whiteboard. In practice, the initial image to be processed may be implemented in a form matching the specific application scenario. Taking the online education scene as an example, the initial image may be the course content to be read, such as the content of each lesson in the series of courses, and the content of the picture book in the picture book reading. In practical application, the initial images may be obtained by selecting corresponding teaching materials from a teaching material library based on course requirements and then editing, may be obtained by scanning existing teaching materials (such as typeset sketches and textbooks), and may also be obtained by editing related personnel based on course requirements (such as teaching PPT). Taking a video conference as an example, the initial image may be a conference presentation prepared in advance by the participants, or may be a blank layout for the participants to communicate with each other.
In 101, a rendering flow corresponding to the operation instruction is executed on the initial image through a rendering interface preset in the local service layer to obtain the target image. In an optional example, assuming that the operation instruction is a drawing instruction, the corresponding rendering flow is: obtain the touch track input by the user through the electronic whiteboard, call the rendering interface in the local service layer, and render the geometric figure corresponding to the touch track onto the initial image to obtain a target image containing that geometric figure. The rendering interface matches the type of the terminal device used to display the electronic whiteboard; in particular, rendering interfaces matching various platforms or operating systems can be preset in the local service layer so that the electronic whiteboard data synchronization flow can later run across platforms and operating systems. In this embodiment, the native rendering interface carried by the terminal device can be used to improve rendering efficiency. For example, if the terminal device is an Android device, its native rendering interface includes the Open Graphics Library (OpenGL) interface; if the terminal device is an iOS device, its native rendering interface includes the Metal interface (a graphics programming interface). Likewise, if the terminal device uses the Unity rendering engine or the Cocos rendering engine, this rendering step can be implemented through the native rendering interface of that engine.
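The platform-to-interface mapping just described reduces to a small dispatch. The sketch below is a hedged illustration: the enum values and function name are assumptions, and the actual native calls (OpenGL ES, Metal, engine APIs) are not shown.

```kotlin
// Assumed platform and backend enums; the description only names OpenGL (Android),
// Metal (iOS) and the Unity/Cocos native interfaces, the rest is illustrative.
enum class Platform { ANDROID, IOS, UNITY, COCOS }
enum class RenderBackend { OPENGL_ES, METAL, UNITY_NATIVE, COCOS_NATIVE }

fun selectRenderBackend(platform: Platform): RenderBackend = when (platform) {
    Platform.ANDROID -> RenderBackend.OPENGL_ES   // native interface on Android devices
    Platform.IOS     -> RenderBackend.METAL       // native graphics API on iOS devices
    Platform.UNITY   -> RenderBackend.UNITY_NATIVE
    Platform.COCOS   -> RenderBackend.COCOS_NATIVE
}

fun main() {
    println(selectRenderBackend(Platform.ANDROID)) // OPENGL_ES
}
```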
Specifically, in 101, a geometric figure corresponding to the touch track may be rendered on the basis of the initial image through a native rendering interface in the local service layer, so that the geometric figure is added to the initial image to obtain a target image to be rendered. In practical applications, the geometric figures include, but are not limited to, closed figures and open figures. It is further understood that in a specific application scenario, the geometric figures may also be combined into words, symbols, or other forms. The target image may be a texture map (also referred to as a target texture map in the following embodiments) containing the above-mentioned geometric figure, and an identifier of the texture map is stored in a video memory space for later transfer of the target image.
Taking an online education scene as an example, the geometric figure can be drawn by a teacher in cooperation with the explanation content and synchronously rendered into an electronic whiteboard at the student side, such as a blackboard writing of the teacher. Or the students draw when taking notes or asking questions and synchronize to the electronic white boards on other student sides or teacher sides, such as class notes or answering processes of the students. In practical application, optionally, the drawing end can select whether to authorize synchronous rendering of the geometric figures to other terminal sides, so as to avoid that synchronous contents are too confused and reveal user privacy.
In 102, a signaling object containing the operation object is generated based on the operation instruction, where the operation object corresponds to the operation instruction. Specifically, the operation instructions include, but are not limited to, draw (Paint), copy (Copy), paste (Paste), delete (Delete), drag (Move), jump (To Page), and undo/redo. Operation instructions such as copy, paste, delete, and drag may be executed individually or in batch. For example, several geometric figures may be deleted in batch on the electronic whiteboard, several geometric figures may be copied and pasted in batch, and one or more operation instructions may be undone or redone. In an optional embodiment, a corresponding operation object is generated based on the operation instruction; the operation object is then input into the signaling layer and encapsulated into a signaling object by the signaling layer.
For example, taking the operation instruction processing flow shown in fig. 2 as an example, it is assumed that the electronic whiteboard is represented as a board, and it is assumed that one electronic whiteboard includes at least one Page (Page). Assume that the operation instructions include draw, copy, paste, delete, drag, jump (To Page). Based on the above assumption, the following operation instruction processing flow can be adopted for the operation instruction input in any Page, specifically:
in fig. 2, an Operation instruction input in any Page is acquired, the Operation instruction is input to an Operation Manager (Operation Manager), and the Operation Manager serializes the Operation instruction into Operation targets (Operations). Furthermore, the operation Manager inputs the operation object into the signaling Manager (Sig Manager, i.e., the signaling layer described above), and encapsulates the operation object into a signaling object through the signaling Manager, so as to output the signaling object including the operation object through the network Manager (Net Manager).
In 103, assuming that the target image is a texture map including the geometric figure, based on this, the identifier of the texture map may be transmitted to the terminal device in the video memory space, so as to directly complete multi-end sharing of the texture map in the video memory space, improve the transmission efficiency, and ensure synchronous rendering of the target image.
And 103, synchronously transmitting the signaling object and/or the target image to a second terminal device with the electronic whiteboard. Specifically, the first terminal device may transmit the signaling object and/or the target image to the second terminal device in a broadcast manner, so as to achieve synchronization of the signaling object and/or the target image between the first terminal device and the second terminal device. The second terminal device is a device which displays the same electronic whiteboard as the first terminal device. In fact, the first terminal device is the second terminal device of the other terminal devices displaying the same electronic whiteboard. For example, in an online education scenario, the second terminal device may be a device connected to the same live broadcast room as the first terminal device, in which case the electronic whiteboard displayed together may be used for teaching materials uploaded by the teacher in advance. Optionally, the first terminal device and the second terminal device may respectively display different pages of the same electronic whiteboard, for example, different pages included in the same course content, and different pages of the same drawing book.
In practical applications, the second terminal device may be an electronic device in which an application program including the electronic whiteboard function is located, for example: notebook computers, tablet computers, and the like; or a large screen, an intelligent blackboard, a digital drawing board and the like used as an electronic whiteboard, which is not limited to this.
In this embodiment, target images required to be displayed by multiple types of first terminal devices are rendered through a rendering interface preset in a local service layer, and a signaling object and/or the target images are synchronously transmitted to a second terminal device displaying the same electronic whiteboard, so that data synchronization of the electronic whiteboard among the multiple terminal devices is realized, consistency of display contents of the electronic whiteboard in each terminal device is ensured, and a demonstration effect and user experience when multiple users use the electronic whiteboard for interaction are greatly improved. In addition, by adopting the scheme, the data synchronization of the electronic whiteboard is realized, multiple sets of rendering logics are not required to be designed for different operating systems or rendering engines, the consistency of the display contents of the electronic whiteboard can be realized in multiple platforms such as an android operating system, an IOS operating system, a Unity engine and a Cocos engine, and the development and maintenance difficulty of the cross-platform electronic whiteboard is greatly reduced.
In the foregoing or the following embodiments, optionally, in 103, the synchronously transmitting the signaling object to the second terminal device on which the electronic whiteboard is shown includes:
sending the signaling object to a signaling transmission queue; serializing the signaling object into a data packet with a preset format in a signaling transmission queue, and uploading the data packet to a server; and broadcasting the data packet from the first terminal equipment to the second terminal equipment by adopting the server.
In order to prevent data congestion caused by excessive operation instructions needing to be synchronized within a certain time period and filter signaling which does not need to be synchronized, the embodiment of the invention also provides a signaling transmission queue. Specifically, the signaling is divided into Normal signaling (Normal Sig) and process signaling (Progress Sig). Since the process signaling includes three states of start, process, and end, in order to reduce the amount of synchronization data and improve the synchronization efficiency, optionally, a process signaling closest to the current time or an end signaling may be reserved on the first terminal device side. In practical applications, taking a dragging instruction as an example, a displacement vector may be recorded every frame according to a frequency of 15 frames per second, so as to synchronize the dragging instruction. However, in practical applications, the frequency of the drag call-back is much greater than 15 frames per second, so that much signaling generated during the drag process does not actually need to be synchronized.
Taking the signaling synchronization flow shown in fig. 3 as an example, after the user operates the electronic whiteboard through the first terminal device to obtain an Operation object (Operation), the Operation object is encapsulated as a signaling object (Sig) in the signaling layer. Further, the signaling object is sent to a signaling transmission Queue (Sig Queue), the signaling object is serialized in the signaling transmission Queue into a data Packet in a preset format, specifically, the signaling object is serialized in the signaling transmission Queue into signaling data in a protobuf format, and the signaling data in the protobuf format is transmitted to a communication layer and packed into a group of data packets in a Packet format. Further, the Packet-formatted data packets are uploaded to a Server (Server), specifically, the data packets uploaded by each first terminal device are arranged into a Packet Queue (Packet Queue) in the Server according to the receiving time, and the data packets from each first terminal device are synchronized and broadcasted to the second terminal device according to the time sequence.
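A minimal sketch of the signaling transmission queue described above, which coalesces process (progress) signaling so that only the most recent sample, or the end signaling, remains to be synchronized. The enum values, field names, and the exact coalescing rule are assumptions for illustration.

```kotlin
enum class SigKind { NORMAL, PROGRESS_START, PROGRESS, PROGRESS_END }
data class QueuedSig(val opId: Int, val kind: SigKind, val payload: String)

class SigQueue {
    private val items = ArrayDeque<QueuedSig>()

    fun offer(sig: QueuedSig) {
        if (sig.kind == SigKind.PROGRESS || sig.kind == SigKind.PROGRESS_END) {
            // drop earlier in-progress samples for the same operation; they no longer need syncing
            items.removeAll { it.opId == sig.opId && it.kind == SigKind.PROGRESS }
        }
        items.addLast(sig)
    }

    fun drain(): List<QueuedSig> {
        val out = items.toList()
        items.clear()
        return out
    }
}

fun main() {
    val q = SigQueue()
    q.offer(QueuedSig(7, SigKind.PROGRESS_START, "drag begin"))
    repeat(5) { q.offer(QueuedSig(7, SigKind.PROGRESS, "drag frame $it")) }
    q.offer(QueuedSig(7, SigKind.PROGRESS_END, "drag end"))
    // only the start and end signaling survive: [PROGRESS_START, PROGRESS_END]
    println(q.drain().map { it.kind })
}
```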
Further, assuming that there are a plurality of first terminal devices, in this case, one optional implementation manner of broadcasting the data packet from the first terminal device to the second terminal device by using the server is as follows: after receiving the signaling objects respectively uploaded by the first terminal devices, the server broadcasts the signaling objects to the second terminal devices in sequence from first to last according to the uploading time of the signaling objects.
In the foregoing or following embodiments, further optionally, the second terminal device receives the signaling object, and parses a corresponding operation object from the signaling object through the operation manager; and executing an operation instruction corresponding to the operation object in the electronic whiteboard. Through the mode, the plurality of terminal equipment for displaying the same electronic whiteboard can be synchronized with the operation instruction, the consistency of the electronic whiteboard display content in each terminal equipment is ensured, and the demonstration effect and the user experience when multiple users use the electronic whiteboard for interaction are greatly improved.
In the foregoing step, optionally, the second terminal device receives the signaling object, and analyzes the corresponding operation object from the signaling object through the operation manager, and the method may be implemented as: the second terminal equipment receives a data packet with a preset format; deserializing the data packets into signaling objects by the operation manager; and analyzing the corresponding operation object from the signaling object.
Still taking the signaling synchronization flow shown in fig. 3 as an example, after the server synchronizes the data packets from each first terminal device to be broadcast to the second terminal device, the second terminal device receives a group of data packets in a Packet format in the communication layer, and uploads the received group of data packets in the Packet format to the signaling layer. Furthermore, in the signaling layer, the Packet-formatted data Packet is assembled into the signaling data in the protobuf format by the operation manager, and the signaling data in the protobuf format is deserialized into the signaling object. And further, analyzing the corresponding operation object from the signaling object. And finally, processing the operation object through the operation manager, and transmitting the operation object to the electronic whiteboard so as to analyze and execute the operation instruction corresponding to the operation object in the electronic whiteboard.
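On the receiving side, the communication layer reassembles the group of packets before the signaling is deserialized and handed to the operation manager. The sketch below shows only the reassembly step with illustrative types; the actual wire format is protobuf-serialized signaling, which is not reproduced here.

```kotlin
data class Packet(val seq: Long, val total: Int, val index: Int, val chunk: ByteArray)

fun reassemble(packets: List<Packet>): ByteArray {
    require(packets.isNotEmpty() && packets.size == packets.first().total) { "incomplete packet group" }
    return packets.sortedBy { it.index }
        .fold(ByteArray(0)) { acc, p -> acc + p.chunk }   // join the chunks back into one payload
}

fun main() {
    val payload = "Paint:page1:path-data".toByteArray()
    val chunks = payload.toList().chunked(8).mapIndexed { i, c ->
        Packet(seq = 1, total = 3, index = i, chunk = c.toByteArray())
    }
    val restored = reassemble(chunks.shuffled())          // arrival order does not matter before reassembly
    println(String(restored))                             // next: deserialize and hand to the operation manager
}
```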
A specific implementation manner of analyzing and executing the operation instruction corresponding to the operation object in the electronic whiteboard is described below with reference to an embodiment.
In an alternative embodiment, it is assumed that the electronic whiteboard includes a plurality of pages, and that each page is provided with a corresponding undo stack (Undo Stack) and/or redo stack (Redo Stack). It is also assumed that the operation instructions include undo instructions and/or redo instructions. On that basis, executing the operation instruction corresponding to the operation object in the electronic whiteboard may be implemented as follows: based on an undo instruction issued to any page of the electronic whiteboard, the operation manager invokes the undo stack corresponding to that page, so that the electronic whiteboard is switched to the state before the previous operation instruction was executed; or, based on a redo instruction issued to any page of the electronic whiteboard, the operation manager invokes the redo stack corresponding to that page, so that the electronic whiteboard is switched back to the state before the previous operation instruction was undone.
For example, in fig. 2, the operation manager further outputs the operation object to an undo manager (Undo Manager) corresponding to each page, which maintains the undo stack and redo stack of that page, so that any one or more operation instructions can be undone or redone.
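A per-page undo manager of this kind can be sketched as two stacks of invertible commands. The Command interface and its inverse() method are assumptions used only for illustration.

```kotlin
interface Command {
    fun apply()
    fun inverse(): Command
}

class UndoManager {
    private val undoStack = ArrayDeque<Command>()
    private val redoStack = ArrayDeque<Command>()

    fun execute(cmd: Command) {
        cmd.apply()
        undoStack.addLast(cmd)
        redoStack.clear()                       // a new operation invalidates the redo history
    }

    fun undo() {
        val cmd = undoStack.removeLastOrNull() ?: return
        cmd.inverse().apply()                   // switch the page back to its previous state
        redoStack.addLast(cmd)
    }

    fun redo() {
        val cmd = redoStack.removeLastOrNull() ?: return
        cmd.apply()                             // re-apply the operation that was undone
        undoStack.addLast(cmd)
    }
}
```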
In the related art, when an image is transmitted to another terminal device, the image is usually copied from the GPU to the CPU, and then the image is processed by the CPU and transmitted to the other terminal device. Other terminal devices often need to copy the image to their own Graphics Processing Unit (GPU) to complete the synchronization of the image data.
For the problem of low transfer efficiency caused by repeatedly copying image data in the related art, in the embodiment of the present invention, optionally, in 103, an identifier of a target image in a GPU is obtained; and transmitting the identifier to the second terminal equipment so that the second terminal equipment renders the target image corresponding to the identifier through the GPU. Specifically, the identifier of the target image in the video memory space of the GPU is broadcast to the second terminal device, so that the second terminal device can call the target image from the video memory space directly through the identifier.
In an optional embodiment, it is assumed that the electronic whiteboard includes a plurality of pages, and each page is provided with a corresponding touch track buffer. Assume that the operation instructions include rendering instructions. Based on the above assumption, in the above step, executing the operation instruction corresponding to the operation object in the electronic whiteboard may be implemented as: based on a rendering instruction sent to any one page in the electronic whiteboard, acquiring an identifier of a target image in a GPU from a touch track buffer area corresponding to any one page through an operation manager; and rendering the target image corresponding to the identifier in the electronic whiteboard through the GPU.
Therefore, multi-end sharing of the texture maps can be directly completed in the video memory space, data synchronization efficiency is improved, consumption of CPU, GPU and memory resources is reduced, and synchronous rendering of the target images is further guaranteed. The target image in this embodiment may be a texture map, such as a 2D texture map.
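The identifier-based hand-off can be illustrated with a small registry standing in for the shared video-memory space: only the texture identifier travels between the sides, never the pixel data. All names below are assumptions for illustration.

```kotlin
object SharedTextureRegistry {
    private val textures = mutableMapOf<Long, String>()   // id -> texture handle (placeholder)
    fun put(id: Long, texture: String) { textures[id] = texture }
    fun resolve(id: Long): String? = textures[id]
}

fun firstTerminalRenders(): Long {
    val textureId = 42L
    SharedTextureRegistry.put(textureId, "target texture map with geometry")
    return textureId                                       // only the identifier is broadcast
}

fun secondTerminalDraws(textureId: Long) {
    val texture = SharedTextureRegistry.resolve(textureId) // look the texture up by id, no CPU copy
    println("rendering: $texture")
}

fun main() = secondTerminalDraws(firstTerminalRenders())
```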
It should be noted that, in addition to the operation instructions, operation flows corresponding to the operation instructions, such as drawing, copying, pasting, deleting, dragging, jumping, and the like, may be set according to actual conditions, so as to implement a rendering flow corresponding to the operation instructions in the electronic whiteboard.
In the foregoing or following embodiments, further optionally, in order to ensure data synchronization of the electronic whiteboard even under conditions of weak network, network jitter, and the like, the communication layers of the first terminal device and the second terminal device may adopt an ordered and reliable communication layer design. The specific implementation mode is as follows:
first, synchronization processing is performed for a plurality of first terminal devices in parallel. Specifically, when multiple users operate the electronic whiteboard through different first terminal devices simultaneously, the overall operation sequence is limited by network transmission uncertainty, and therefore, a unique operation instruction sequence needs to be determined by the server as a whole. Therefore, the problem of overall order and reliability in the communication layer can be converted into the problem of end-to-end order and reliability, and the data synchronization difficulty of the communication layer is reduced. Taking the data synchronization scenario shown in fig. 4 as an example, the operation instructions (op1 and op2) uploaded by the respective first terminal devices may be sorted and merged by the server (server), so that the merged operation instructions may be used in broadcasting.
Second, in the communication layer, the serialized signaling object needs to be split into packets automatically. Specifically, the serialized signaling object may be broken into a group of Packet-format packets of about 1 KB each. In the packet shown in fig. 5, an operation object may correspond to several pieces of packet header information indicating execution information of the operation instruction, such as the execution time and the instruction type. Specifically, in fig. 5, the following header fields may be inserted into each Packet-format packet: sequence number (seq), acknowledgement number (ack), receiver window size (rwnd), and upload timestamp (ts).
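A sketch of that splitting and header layout. The field widths, the exact 1 KB body size, and the encoding order are assumptions; the description only names the four header fields.

```kotlin
import java.nio.ByteBuffer

data class WirePacket(val seq: Int, val ack: Int, val rwnd: Int, val ts: Long, val body: ByteArray)

// break a serialized signaling object into packets of at most maxBody bytes
fun split(payload: ByteArray, firstSeq: Int, ack: Int, rwnd: Int, maxBody: Int = 1024): List<WirePacket> =
    payload.toList().chunked(maxBody).mapIndexed { i, chunk ->
        WirePacket(firstSeq + i, ack, rwnd, System.currentTimeMillis(), chunk.toByteArray())
    }

fun encode(p: WirePacket): ByteArray =
    ByteBuffer.allocate(4 + 4 + 4 + 8 + p.body.size)
        .putInt(p.seq).putInt(p.ack).putInt(p.rwnd).putLong(p.ts)   // header: seq, ack, rwnd, ts
        .put(p.body)                                                // serialized signaling chunk
        .array()

fun main() {
    val signaling = ByteArray(2500) { it.toByte() }        // e.g. a protobuf-serialized signaling object
    val packets = split(signaling, firstSeq = 100, ack = 57, rwnd = 32)
    println(packets.map { it.seq to it.body.size })        // [(100, 1024), (101, 1024), (102, 452)]
    println(encode(packets.first()).size)                  // 20-byte header + 1024-byte body = 1044
}
```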
Third, a stop-and-wait policy may be adopted in the communication layer. Specifically, after a packet in the preset format is sent, the sender waits for the acknowledgement replied by the second terminal device before continuing to send subsequent packets, which avoids transmission failures caused by the second terminal device being unable to process packets in time.
Fourth, a timed retransmission strategy may be adopted in the communication layer. Specifically, taking the packet synchronization scenario shown in fig. 6 as an example, if the acknowledgement (ack) fed back by the second terminal device is not received within the preset retransmission timeout after a packet is sent, retransmission of that packet is triggered. The packet may be retransmitted only to some of the second terminal devices, such as those that did not feed back an acknowledgement, or to all of them. If a second terminal device receives a packet it has already received, it can ignore the duplicate by reading the packet header information, which improves processing efficiency.
Because the electronic whiteboard has the requirements of small bandwidth and low delay, the requirements of reliability and transmission performance can be met by using a timing retransmission strategy so as to avoid the influence caused by redundant retransmission.
As can be seen from the data synchronization scenario shown in fig. 7, the delay generated during data synchronization needs to be estimated. Specifically, the round-trip time (RTT) needs to be calculated; the round-trip time is the time required for data to travel across the network and back. RTT is made up of four components: transmission delay, propagation delay, queuing delay, and processing delay. In practice, RTT equals the time from when a packet is sent to when its ack is received. The smoothed average delay of the whole link also needs to be calculated, i.e. SRTT = 0.875 × SRTT + 0.125 × RTT.
The retransmission timeout (RTO) also needs to be calculated. In short, the RTO is the duration of the retransmission timer started after a packet is sent. To calculate the RTO, the variance RTT_VAR = |SRTT − RTT| is also needed. Because the RTO is the retransmission period of a packet, if no ack is received within roughly SRTT + RTT_VAR after a packet is sent, the packet must be retransmitted; thus RTO = β × (SRTT + RTT_VAR), where β is a coefficient introduced to keep the retransmission rate from rising under severe network jitter, and a value between 1.2 and 2.0 is usually adopted depending on the transmission scenario.
It should be noted that the retransmission timer is different from the timer used to measure RTT. Each window has its own retransmission timer used to compute the RTO for retransmitting that window; if the timer expires, the window is retransmitted and the RTO is backed off (RTO = 1.5 × RTO) to optimize the delay. There is only one timer for measuring RTT: if it is already occupied measuring the RTT of one packet, another packet sent in the meantime does not participate in the RTT calculation.
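The three quantities above (SRTT, RTT_VAR, RTO with back-off) can be kept in a small estimator. The initial RTO value and the choice of β = 1.5 below are assumptions within the 1.2 to 2.0 range given above.

```kotlin
class RtoEstimator(private val beta: Double = 1.5) {
    var srtt = 0.0
        private set
    var rttVar = 0.0
        private set
    var rto = 200.0        // initial timeout in ms before any sample; the starting value is an assumption
        private set

    // feed one measured round-trip time (ms) from a packet that was acked without retransmission
    fun onRttSample(rttMs: Double) {
        srtt = if (srtt == 0.0) rttMs else 0.875 * srtt + 0.125 * rttMs   // SRTT = 0.875*SRTT + 0.125*RTT
        rttVar = kotlin.math.abs(srtt - rttMs)                            // RTT_VAR = |SRTT - RTT|
        rto = beta * (srtt + rttVar)                                      // RTO = beta * (SRTT + RTT_VAR)
    }

    // back off after a retransmission: RTO = 1.5 * RTO
    fun onTimeout() { rto *= 1.5 }
}

fun main() {
    val est = RtoEstimator()
    listOf(80.0, 95.0, 70.0).forEach(est::onRttSample)
    println("SRTT=%.1f ms, RTO=%.1f ms".format(est.srtt, est.rto))
}
```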
Fifth, a sliding window is also provided in the communication layer. The sending-end device (i.e., the first terminal device or the server) is provided with a send queue and a send window, and the receiving-end device (i.e., the second terminal device) is provided with a receive queue and a receive window. Taking the data synchronization scenario shown in fig. 8 as an example, when packets of the electronic whiteboard are sent, they are first placed into the send queue; the send window then sends the packets inside the window, window by window, according to the stop-and-wait, timed retransmission, and delayed acknowledgement rules, and the window moves forward after an ack is received. The window on the receiving-end device strictly corresponds to the window on the sending-end device. Furthermore, sequence number (seq) ordering of the signaling transmission queue must be guaranteed inside the window, so that the receive queue is filled in an ordered and reliable way.
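A minimal sliding-window sender sketch: at most windowSize packets are in flight, and the window slides forward as acks arrive. Retransmission and the receive-side window are omitted; the names and sizes are assumptions.

```kotlin
class SlidingWindowSender(private val windowSize: Int, private val transmit: (Int) -> Unit) {
    private val sendQueue = ArrayDeque<Int>()   // sequence numbers waiting to be sent
    private val inFlight = sortedSetOf<Int>()   // sent but not yet acknowledged

    fun enqueue(seq: Int) { sendQueue.addLast(seq); pump() }

    fun onAck(seq: Int) { inFlight.remove(seq); pump() }   // the window moves forward

    private fun pump() {
        while (inFlight.size < windowSize && sendQueue.isNotEmpty()) {
            val seq = sendQueue.removeFirst()
            inFlight.add(seq)
            transmit(seq)
        }
    }
}

fun main() {
    val sender = SlidingWindowSender(windowSize = 2) { println("send packet $it") }
    (1..4).forEach(sender::enqueue)   // only packets 1 and 2 go out immediately
    sender.onAck(1)                   // packet 3 is released
    sender.onAck(2)                   // packet 4 is released
}
```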
Sixth, a congestion window mechanism may also be provided in the communication layer for weak-network conditions. In this scheme, the sliding window improves sending and receiving performance and reduces waiting time, and it can also be used for subsequent flow control and congestion control. Specifically, during electronic whiteboard data synchronization the amount of transmitted data is small, but sufficiently low delay and reliability must be guaranteed. A congestion window of fixed size can therefore be set for flow control and congestion control of the synchronization process, so as to adapt to weak-network environments. Here, the send window size is this fixed congestion window size.
Seventh, if consecutive packet losses exceed a certain number, the terminal actively disconnects and reconnects. Specifically, the disconnect-and-reconnect strategy mainly includes the following points (a sketch of the sequence follows this list):
1. Log in and join the channel (such as a live room) corresponding to the electronic whiteboard using the user ID (uid), channel ID, unique identification code (token), and other information returned by the service interface;
2. Pull the historical operation sequence; once the pull succeeds, the operation instruction manager clears all current operation queues and all data of the electronic whiteboard (Board);
3. Process the historical operation queue in order to restore the state before the disconnection;
4. After the state is restored, receive and send signaling normally.
The design can realize an ordered and reliable communication layer, thereby providing a foundation for data synchronization between terminal devices.
In the foregoing or following embodiments, optionally, in 101, the native rendering interface used by the local service layer is determined according to the type of the terminal device; the geometric figure to be drawn in the initial image is transferred to the graphics processing unit (GPU) according to the touch track; and the geometric figure is rendered into the initial texture map corresponding to the initial image by the GPU to obtain a target texture map containing the geometric figure.
In the steps, the touch tracks and the texture maps can be processed in the video memory space of the GPU through the native rendering interface in the local service layer, so that the complexity of the rendering step caused by data conversion between the GPU and the CPU is avoided, and the rendering efficiency is improved.
Specifically, in 101, optionally, the type of the terminal device is determined by reading the attribute configuration of the terminal device, and the native rendering interface that the local service layer can call is then determined according to that type. For example, the native rendering interface corresponding to an Android device includes the OpenGL ES interface, the native rendering interface corresponding to an iOS device includes the Metal interface, and the Unity rendering engine and the Cocos rendering engine each have their own corresponding native rendering interfaces. The touch track input by the user is then transferred to the local service layer.
Furthermore, an optional implementation manner for transmitting the geometric figure to be drawn in the initial image to the GPU according to the touch trajectory is as follows: determining vertex data of the geometric figure according to the touch track and the configuration information of the electronic whiteboard; and inputting the vertex data into a preset rendering pipeline in the GPU.
Specifically, in an optional embodiment, a data type corresponding to the touch track may be determined, and in this embodiment, the data type corresponding to the touch track includes but is not limited to: one or a combination of a graffiti type, a wireframe (wireframe) type, a shape (shape) type. In practical application, the calculation mode of the vertex data is different for different types of touch tracks. For example, after receiving the touch trajectory input by the user, the front-end application of the terminal device parses the data type of the touch trajectory. Specifically, an android front-end application program can be adopted to analyze the touch trajectory input by the user and determine which data type the touch trajectory belongs to. In the analysis process, the data type of the touch trajectory may be determined based on parameters such as a start point, an end point, whether the start point and the end point coincide with each other, a trajectory length, and a stroke type of the touch trajectory.
It should be noted that the front-end application of the IOS, Unity rendering engine, or Cocos rendering engine may also determine the data type to which the touch trajectory input by the user belongs, and the determination is not expanded here.
Further, the vertex data of at least one vertex is calculated according to the data type corresponding to the touch track and the configuration information. In this embodiment, the vertex data includes, but is not limited to, any one or a combination of the vertex position, patch position and UV coordinates of at least one vertex. Specifically, a preset algorithm corresponding to the data type is determined, and the vertex position, patch position and UV coordinates of at least one vertex are calculated with this algorithm according to the data type and the configuration information.
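Purely as an illustration of how such a preset algorithm might produce vertex data for a graffiti-type track — the names, the quad-per-point expansion and the brushSize parameter are assumptions, not the patented algorithm — a Kotlin sketch:

```kotlin
// Expands sampled touch points into textured brush-stamp quads (vertex positions + UV coordinates).
data class TouchPoint(val x: Float, val y: Float)
data class Vertex(val x: Float, val y: Float, val u: Float, val v: Float)

// brushSize would come from the electronic whiteboard configuration information.
fun buildBrushQuads(track: List<TouchPoint>, brushSize: Float): List<Vertex> {
    val half = brushSize / 2f
    return track.flatMap { p ->
        listOf(
            Vertex(p.x - half, p.y - half, 0f, 0f),   // four corners of one brush stamp
            Vertex(p.x + half, p.y - half, 1f, 0f),
            Vertex(p.x - half, p.y + half, 0f, 1f),
            Vertex(p.x + half, p.y + half, 1f, 1f),
        )
    }
}
```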
Furthermore, after the vertex data of the geometric figure is determined, inputting the vertex data into the preset rendering pipeline includes the following steps: determining the shader corresponding to the data type; binding the corresponding brush texture to the vertex data; and inputting the vertex data and the brush texture into the rendering pipeline containing the shader.
In this embodiment, the shader includes any one or a combination of a Vertex Shader, a Fragment Shader and a Blend Mode processing module. The vertex shader is part of the rendering pipeline; it has an input part and an output part, and its main function is to transform the positions of the input data through a matrix and to compute per-vertex parameters (that is, the vertex data), such as the color of each vertex produced by the lighting formula and the transformed texture coordinates. These parameters are then output to the next processing unit in the rendering pipeline (such as the fragment shader). The fragment shader mainly processes the fragments obtained in the rasterization stage and ultimately computes the color of each pixel in the fragments, that is, it produces a data set containing the color components and transparency of each pixel.
For example, in the above step it is determined whether the current vertex data corresponds to the eraser type. If it does not, the shaders to be used are the vertex shader followed by the fragment shader; if it does, the shaders to be used are the vertex shader, the fragment shader and the blend mode processing module, in that order. The corresponding brush texture is then bound to the current vertex data, and the vertex data and the brush texture are input into the rendering pipeline containing these shaders. If it is detected that the user performs a batch dragging operation on the geometric figures corresponding to the touch tracks, the positions of those geometric figures need to be changed; in this case, optionally, a translation transformation matrix is generated for each geometric figure and transmitted into the corresponding rendering pipeline as well.
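The stage selection and the optional drag translation described above might be organized as in the following Kotlin sketch; TrackType, PipelineConfig and configurePipeline are illustrative names, and the column-major 4x4 translation matrix is only one possible encoding:

```kotlin
enum class TrackType { DOODLE, WIREFRAME, SHAPE, ERASER }

class PipelineConfig(
    val shaderStages: List<String>,      // stages to bind, in order
    val brushTexture: String,            // brush texture bound to the vertex data
    val translationMatrix: FloatArray?,  // present only for batch-drag operations
)

fun configurePipeline(
    type: TrackType,
    brushTexture: String,
    dragOffset: Pair<Float, Float>? = null,
): PipelineConfig {
    // Eraser tracks additionally go through the blend mode processing module.
    val stages = if (type == TrackType.ERASER)
        listOf("VertexShader", "FragmentShader", "BlendMode")
    else
        listOf("VertexShader", "FragmentShader")

    // Translation matrix generated when the user batch-drags geometric figures.
    val matrix = dragOffset?.let { (dx, dy) ->
        floatArrayOf(
            1f, 0f, 0f, 0f,
            0f, 1f, 0f, 0f,
            0f, 0f, 1f, 0f,
            dx, dy, 0f, 1f,
        )
    }
    return PipelineConfig(stages, brushTexture, matrix)
}
```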
Furthermore, an optional implementation of rendering the geometric figure in the initial texture map corresponding to the initial image through the GPU to obtain the target texture map containing the geometric figure is as follows: calling the native rendering interface in the rendering pipeline, and drawing at least one vertex corresponding to the vertex data in the initial texture map to obtain the target texture map containing the geometric figure.
For example, assume that the terminal device is an android device or an IOS device. In the rendering pipeline shown in fig. 9, the vertex data is first passed to the corresponding Metal interface or OpenGLES interface through a draw call for rendering, and the result is output as a texture map (Texture) to a specified video memory space in the GPU.
For an android device, assume that the corresponding native rendering interface includes an OpenGLES interface. In the above step, invoking the native rendering interface in the rendering pipeline and drawing at least one vertex corresponding to the vertex data in the initial texture map to obtain the target texture map containing the geometric figure includes: inputting the vertex data into the graphics drawing mode corresponding to the OpenGLES interface; under the OpenGLES interface, drawing the vertex data into at least one corresponding vertex through the corresponding vertex shader and/or fragment shader; and blending the at least one vertex with the initial texture map through the blend mode processing module to obtain the target texture map.
For the IOS device, the way of obtaining the target texture map containing the geometry is similar to that of the android device, except that the native rendering interface corresponding to the IOS device is a Metal interface.
In addition, optionally, for a touch track of the eraser type, the corresponding blend mode needs to be modified; that is, the blend mode processing module is triggered to execute the corresponding blending process, so that this type of touch track is blended over the designated area of the initial image through the template, thereby achieving the effect of erasing that area.
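One common way to realize such an eraser on an android device — shown only as a sketch, assuming premultiplied alpha, the android OpenGL ES bindings, and a current GL ES 2.0 context on the calling thread; the patent's template-based blending may differ — is to switch the blend function per track type:

```kotlin
import android.opengl.GLES20

fun applyBlendFor(isEraser: Boolean) {
    GLES20.glEnable(GLES20.GL_BLEND)
    if (isEraser) {
        // Incoming brush alpha removes what is already in the target texture.
        GLES20.glBlendFunc(GLES20.GL_ZERO, GLES20.GL_ONE_MINUS_SRC_ALPHA)
    } else {
        // Ordinary strokes are composited over the existing content.
        GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA)
    }
}
```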
In this embodiment, the touch track is processed by calling the native rendering interface, so that the track is rendered more efficiently, low latency of the electronic whiteboard is ensured even on low-end devices and weak networks, and the multi-terminal interaction effect of the electronic whiteboard is improved.
In the foregoing or following embodiments, optionally, the computation logic may also be separated from the presentation logic to further improve rendering efficiency. Here, the computation logic is the logic that calculates the position data of each touch track, and the presentation logic is the logic that renders the page requested by the user. In short, the position data of every page in the electronic whiteboard is calculated in real time, while the page requested by the user is displayed at the same time.
Specifically, it is assumed that the electronic whiteboard includes a plurality of pages, and each page is provided with a corresponding touch track buffer. Based on the above, the track identifier and the position information of the touch track can be obtained from the touch track buffer area corresponding to any page in response to the display instruction of any page; and outputting the track identifier and the position information of the touch track to the terminal equipment so that the terminal equipment renders a corresponding target image through the GPU.
In fact, taking a book reading scenario as an example, each page of the book corresponds to one page in the electronic whiteboard. The track identifiers and position information of the touch tracks in the current page are stored in the touch track buffer corresponding to that page.
On this basis, in response to a user's selection instruction for a page, the corresponding page pointer is determined according to the instruction, and the track identifiers and position information of all touch tracks are obtained from the touch track buffer corresponding to that page through the page pointer. The track identifiers and position information are then input into the corresponding rendering pipelines so that the terminal device renders the corresponding target image through the GPU, thereby implementing the page-turning operation of the electronic whiteboard.
In this embodiment, because the computation logic and the presentation logic are separated, when the page operated on by the user is not the currently displayed page, the position data of that page can still be computed synchronously in the background. Therefore, during page data synchronization, even if the page-turning operation and the updated page content arrive out of order, the final display result of the page is not affected.
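The per-page buffering and the split between computation and presentation could be organized as in the Kotlin sketch below; WhiteboardPages, TrackRecord and the method names are illustrative only:

```kotlin
data class TrackRecord(val trackId: String, val positions: List<Pair<Float, Float>>)

class WhiteboardPages(pageCount: Int) {
    private val buffers = List(pageCount) { mutableMapOf<String, TrackRecord>() }
    var displayedPage = 0
        private set

    // "Computation logic": position data can be updated for any page, displayed or not.
    fun updateTrack(page: Int, record: TrackRecord) {
        buffers[page][record.trackId] = record
    }

    // "Presentation logic": on a display (page-turn) instruction, hand the buffered track
    // identifiers and positions to the terminal so it can render the target image on the GPU.
    fun tracksToRender(page: Int): Collection<TrackRecord> {
        displayedPage = page
        return buffers[page].values
    }
}
```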
In the above or below embodiments, after the geometric figures are drawn in the electronic whiteboard, optionally, the size, the angle, the position and/or the number of the geometric figures may also be adjusted through an operation manner preset in the electronic whiteboard or customized by a user.
Optionally, by dragging or directly moving the stylus, the size and angle of the geometric figure are changed while the position of a specified coordinate point in the figure remains unchanged, so that the geometric figure comes closer to the shape the user actually intends to draw.
Optionally, in response to a batch dragging instruction issued by the user, the geometric figures dragged by the user are selected from the identification list corresponding to the touch tracks, and the displacement distance associated with those figures is recorded. By synchronizing this displacement distance, the other terminals participating in data synchronization are unaffected by network delay before and after the drag.
Optionally, in response to a batch copy or paste instruction issued by the user, the geometric figures selected by the user are taken from the identification list corresponding to the touch tracks, and the displacement distance between the original figures and their copies is recorded. By synchronizing this displacement distance, the other terminals participating in data synchronization are unaffected by network delay before and after the copy or paste.
Optionally, a binding relationship is established for one or more geometric figures, and the identifier corresponding to the binding relationship is stored in the touch track buffer. In response to a user instruction to copy or paste these geometric figures, their identifiers are obtained, and the identifiers together with the target position corresponding to the operation instruction are synchronized to each terminal, thereby implementing the copy or paste of the one or more geometric figures, as sketched below.
In this embodiment, the ability to adjust geometric figures in these ways enriches the functionality and interaction modes of the electronic whiteboard and provides convenience for its users.
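A rough Kotlin sketch of synchronizing such adjustments by geometry identifier plus displacement rather than by re-sending the drawn content; DragSync and GeometryStore are hypothetical names:

```kotlin
data class DragSync(val geometryIds: List<String>, val dx: Float, val dy: Float)

class GeometryStore {
    private val positions = mutableMapOf<String, Pair<Float, Float>>()

    fun put(id: String, x: Float, y: Float) { positions[id] = x to y }

    // Applying the synced identifiers and displacement yields the same final positions on
    // every terminal, regardless of how late the synchronization packet arrives.
    fun apply(sync: DragSync) {
        for (id in sync.geometryIds) {
            val (x, y) = positions[id] ?: continue
            positions[id] = (x + sync.dx) to (y + sync.dy)
        }
    }
}
```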
A data synchronization apparatus of an electronic whiteboard according to one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that each of these apparatuses can be constructed from commercially available hardware components configured through the steps taught in this disclosure.
Fig. 10 is a schematic structural diagram of a data synchronization apparatus of an electronic whiteboard according to an embodiment of the present invention, as shown in fig. 10, the apparatus includes: a rendering module 11, a generating module 12 and a synchronization module 13.
The rendering module 11 is configured to, in response to an operation instruction input in the electronic whiteboard displayed by the first terminal device, execute a rendering process corresponding to the operation instruction in the initial image to be processed through a rendering interface preset in the local service layer to obtain a target image, where the rendering interface is matched with a type to which the terminal device for displaying the electronic whiteboard belongs;
a generating module 12, configured to generate a signaling object including an operation object based on an operation instruction, where the operation object corresponds to the operation instruction;
and the synchronization module 13 is configured to transmit the signaling object and/or the target image to the second terminal device with the electronic whiteboard in synchronization.
Optionally, in the process that the synchronization module 13 transmits the target image to the second terminal device with the electronic whiteboard in synchronization, the synchronization module is specifically configured to:
acquiring the identifier of the target image in the graphics processing unit (GPU);
and transmitting the identifier to the second terminal equipment so that the second terminal equipment renders the target image corresponding to the identifier through the GPU.
Optionally, in the process of generating the signaling object including the operation object based on the operation instruction, the generating module 12 is specifically configured to:
generating a corresponding operation object based on the operation instruction;
and inputting the operation object into a signaling layer, and encapsulating the operation object into the signaling object through the signaling layer.
Optionally, in the process that the synchronization module 13 synchronously transmits the signaling object to the second terminal device that displays the electronic whiteboard, the synchronization module is specifically configured to:
sending the signaling object to a signaling transmission queue;
serializing the signaling object into a data packet with a preset format in a signaling transmission queue, and uploading the data packet to a server;
and broadcasting the data packet from the first terminal equipment to the second terminal equipment by adopting the server.
Optionally, in the process that the synchronization module 13 broadcasts the data packet from the first terminal device to the second terminal device by using the server, the synchronization module is specifically configured to:
after receiving the signaling objects respectively uploaded by the first terminal devices, the server broadcasts the signaling objects to the second terminal devices in sequence from first to last according to the uploading time of the signaling objects.
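A simplified Kotlin sketch of this signaling path — the delimited packet format, the class names and the receivedAt timestamp are assumptions rather than the "preset format" of the disclosure — with a sender-side serializer and a server that broadcasts packets in first-to-last upload order:

```kotlin
data class SignalingObject(val op: String, val targetId: String)

// Serialization into a simple delimited packet; the real preset format is not specified here.
fun serialize(obj: SignalingObject): String = "${obj.op}|${obj.targetId}"
fun deserialize(packet: String): SignalingObject =
    packet.split("|").let { SignalingObject(it[0], it[1]) }

class SignalingServer {
    private data class Upload(val receivedAt: Long, val packet: String)
    private val pending = mutableListOf<Upload>()

    fun onUpload(packet: String, receivedAt: Long) {
        pending += Upload(receivedAt, packet)
    }

    // Broadcast buffered packets to the second terminal devices in upload-time order.
    fun broadcastTo(send: (String) -> Unit) {
        pending.sortedBy { it.receivedAt }.forEach { send(it.packet) }
        pending.clear()
    }
}
```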
Optionally, the synchronization module 13 disposed in the second terminal device is further configured to receive the signaling object, and parse the corresponding operation object from the signaling object through the operation manager;
and executing an operation instruction corresponding to the operation object in the electronic whiteboard.
Optionally, in a process that the synchronization module 13 disposed in the second terminal device receives the signaling object and parses the corresponding operation object from the signaling object through the operation manager, the synchronization module is specifically configured to:
the second terminal equipment receives a data packet with a preset format;
deserializing the data packets into signaling objects by the operation manager;
and analyzing the corresponding operation object from the signaling object.
Optionally, the electronic whiteboard includes a plurality of pages, and each page is provided with a corresponding undo stack and/or redo stack; the operation instruction includes an undo instruction and/or a redo instruction.
Based on this, the synchronization module 13 disposed in the second terminal device is specifically configured to, in the process of executing the operation instruction corresponding to the operation object in the electronic whiteboard:
based on a cancel instruction sent to any page in the electronic whiteboard, calling a cancel stack corresponding to any page through the operation manager to enable the electronic whiteboard to be switched to a state before the execution of the previous operation instruction; or
Based on a reverse cancellation instruction sent to any one of the pages in the electronic whiteboard, the operation manager calls a recovery stack corresponding to any one of the pages to enable the electronic whiteboard to be switched to a state before the last operation instruction is cancelled.
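One plausible way to organize these per-page stacks inside the operation manager, shown as a Kotlin sketch with hypothetical names (PageHistory, OperationManager) and operations reduced to plain strings:

```kotlin
class PageHistory {
    private val undoStack = ArrayDeque<String>()   // executed operation objects
    private val redoStack = ArrayDeque<String>()   // undone operation objects

    fun execute(op: String) { undoStack.addLast(op); redoStack.clear() }

    // Undo instruction: switch back to the state before the previous operation was executed.
    fun undo(): String? = undoStack.removeLastOrNull()?.also { redoStack.addLast(it) }

    // Redo instruction: switch back to the state before the last operation was undone.
    fun redo(): String? = redoStack.removeLastOrNull()?.also { undoStack.addLast(it) }
}

class OperationManager(pageCount: Int) {
    private val pages = List(pageCount) { PageHistory() }
    fun undo(page: Int): String? = pages[page].undo()
    fun redo(page: Int): String? = pages[page].redo()
}
```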
Optionally, the electronic whiteboard includes a plurality of pages, and each page is provided with a corresponding touch track buffer area; the operation instructions include rendering instructions.
Based on this, the synchronization module 13 disposed in the second terminal device is specifically configured to, in the process of executing the operation instruction corresponding to the operation object in the electronic whiteboard:
based on a rendering instruction sent to any one page in the electronic whiteboard, acquiring an identifier of a target image in a GPU from a touch track buffer area corresponding to any one page through an operation manager;
and rendering the target image corresponding to the identifier in the electronic whiteboard through the GPU.
The apparatus shown in fig. 10 can perform the steps described in the foregoing embodiments; for the detailed execution process and technical effects, refer to the descriptions in the foregoing embodiments, which are not repeated here.
In one possible design, the structure of the data synchronization apparatus of the electronic whiteboard shown in fig. 10 may be implemented as an electronic device, as shown in fig. 11. The electronic device may include: a processor 21, a memory 22, and a communication interface 23, where the memory 22 stores executable code which, when executed by the processor 21, causes the processor 21 to at least implement the data synchronization method of the electronic whiteboard provided in the foregoing embodiments.
In addition, an embodiment of the present invention provides a non-transitory machine-readable storage medium, which stores executable codes thereon, and when the executable codes are executed by a processor of an electronic device, the processor is enabled to implement at least the data synchronization method of the electronic whiteboard as provided in the foregoing embodiment.
The above-described apparatus embodiments are merely illustrative, wherein the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by means of a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the essence of the above technical solutions, or the part that contributes to the prior art, may be embodied in the form of a computer program product, which may be stored on one or more computer-usable storage media containing computer-usable program code, including but not limited to disk storage, CD-ROM and optical storage.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A data synchronization method of an electronic whiteboard is characterized by comprising the following steps:
responding to an operation instruction input in an electronic whiteboard displayed by first terminal equipment, and executing a rendering process corresponding to the operation instruction in an initial image to be processed through a rendering interface preset in a local service layer to obtain a target image, wherein the rendering interface is matched with the type of the first terminal equipment;
generating a signaling object containing an operation object based on the operation instruction, wherein the operation object corresponds to the operation instruction;
and synchronously transmitting the signaling object and/or the target image to second terminal equipment which displays the electronic whiteboard.
2. The method according to claim 1, wherein the step of synchronously transmitting the target image to the second terminal device on which the electronic whiteboard is displayed comprises:
acquiring the identifier of the target image in a graphic processor;
and transmitting the identification to the second terminal equipment so as to enable the second terminal equipment to render the target image corresponding to the identification through the graphics processor.
3. The method of claim 1, wherein generating a signaling object containing an operand based on the operation instruction comprises:
generating a corresponding operation object based on the operation instruction;
inputting the operation object into a signaling layer, and encapsulating the operation object into the signaling object through the signaling layer.
4. The method according to claim 1, wherein the step of transmitting the signaling object synchronously to the second terminal device on which the electronic whiteboard is displayed comprises:
sending the signaling object to a signaling transmission queue;
serializing the signaling object into a data packet with a preset format in the signaling transmission queue, and uploading the data packet to a server;
broadcasting, with the server, the data packet from the first terminal device to the second terminal device.
5. The method of claim 4, wherein the broadcasting, with the server, the data packet from the first terminal device to the second terminal device comprises:
after receiving the signaling objects uploaded by the first terminal devices respectively, the server broadcasts the signaling objects to the second terminal devices in sequence from first to last according to the uploading time of the signaling objects.
6. The method of claim 1, further comprising:
the second terminal equipment receives the signaling object and analyzes the corresponding operation object from the signaling object through an operation manager;
and executing the operation instruction corresponding to the operation object in the electronic whiteboard.
7. The method of claim 6, wherein the second terminal device receives the signaling object and parses the corresponding operation object from the signaling object through an operation manager, and the method comprises:
the second terminal equipment receives a data packet with a preset format;
deserializing, by the operations manager, the data packet into the signaling object;
and analyzing the corresponding operation object from the signaling object.
8. The method of claim 6, wherein the electronic whiteboard comprises a plurality of pages, each page being provided with a corresponding undo stack and/or redo stack; the operation instruction comprises an undo instruction and/or a redo instruction;
the executing the operation instruction corresponding to the operation object in the electronic whiteboard includes:
based on an undo instruction issued for any one of the pages in the electronic whiteboard, calling the undo stack corresponding to that page through an operation manager, so that the electronic whiteboard is switched to the state before the execution of the previous operation instruction; or
based on a redo instruction issued for any one of the pages in the electronic whiteboard, calling the redo stack corresponding to that page through an operation manager, so that the electronic whiteboard is switched to the state before the last operation instruction was undone.
9. The method of claim 6, wherein the electronic whiteboard comprises a plurality of pages, each page being provided with a corresponding touch track buffer; the operating instructions comprise rendering instructions;
the executing the operation instruction corresponding to the operation object in the electronic whiteboard includes:
based on a rendering instruction sent to any one of the pages in the electronic whiteboard, acquiring, through an operation manager, an identifier of the target image in a graphics processor from the touch track buffer corresponding to that page;
rendering, by the graphics processor, the target image corresponding to the identifier in the electronic whiteboard.
10. A data synchronization apparatus of an electronic whiteboard, comprising:
the rendering module is used for responding to an operation instruction input in the electronic whiteboard displayed by the first terminal device, executing a rendering process corresponding to the operation instruction in the initial image to be processed through a rendering interface preset in a local service layer to obtain a target image, wherein the rendering interface is matched with the type of the terminal device used for displaying the electronic whiteboard;
a generating module, configured to generate a signaling object including an operation object based on the operation instruction, where the operation object corresponds to the operation instruction;
and the synchronization module is used for synchronously transmitting the signaling object and/or the target image to second terminal equipment which displays the electronic whiteboard.
11. A data synchronization apparatus of an electronic whiteboard, comprising: a memory, a processor, a communication interface; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the method of data synchronization of an electronic whiteboard according to any of claims 1 to 9.
12. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of data synchronization of an electronic whiteboard of any of claims 1 to 9.
CN202111510120.2A 2021-12-10 2021-12-10 Data synchronization method, device, equipment and storage medium of electronic whiteboard Pending CN114168098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111510120.2A CN114168098A (en) 2021-12-10 2021-12-10 Data synchronization method, device, equipment and storage medium of electronic whiteboard

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111510120.2A CN114168098A (en) 2021-12-10 2021-12-10 Data synchronization method, device, equipment and storage medium of electronic whiteboard

Publications (1)

Publication Number Publication Date
CN114168098A true CN114168098A (en) 2022-03-11

Family

ID=80485551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111510120.2A Pending CN114168098A (en) 2021-12-10 2021-12-10 Data synchronization method, device, equipment and storage medium of electronic whiteboard

Country Status (1)

Country Link
CN (1) CN114168098A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015060592A (en) * 2014-09-04 2015-03-30 株式会社リコー Image processing system, and information processor
CN107678825A (en) * 2017-10-16 2018-02-09 青岛海信电器股份有限公司 A kind of rendering intent and electronic whiteboard applied to electronic whiteboard
CN109981711A (en) * 2017-12-28 2019-07-05 腾讯科技(深圳)有限公司 Document dynamic play method, apparatus, system and computer readable storage medium
CN111352539A (en) * 2018-12-24 2020-06-30 中移(杭州)信息技术有限公司 Method and device for terminal interaction
CN113282257A (en) * 2019-11-30 2021-08-20 北京城市网邻信息技术有限公司 Method, terminal device, device and readable storage medium for synchronous display
CN111124333A (en) * 2019-12-05 2020-05-08 视联动力信息技术股份有限公司 Method, device, equipment and storage medium for synchronizing display contents of electronic whiteboard
CN111459438A (en) * 2020-04-07 2020-07-28 苗圣全 System, method, terminal and server for synchronizing drawing content with multiple terminals
CN111949187A (en) * 2020-08-03 2020-11-17 深圳创维数字技术有限公司 Electronic whiteboard content editing and sharing method, system, equipment and server
CN112381918A (en) * 2020-12-03 2021-02-19 腾讯科技(深圳)有限公司 Image rendering method and device, computer equipment and storage medium
CN113419693A (en) * 2021-05-17 2021-09-21 广州佰锐网络科技有限公司 Multi-user track synchronous display method and system
CN113434106A (en) * 2021-08-30 2021-09-24 广州市保伦电子有限公司 Online electronic whiteboard content synchronous sharing system
CN114168060A (en) * 2021-12-10 2022-03-11 天津洪恩完美未来教育科技有限公司 Electronic whiteboard rendering method, device, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727142A (en) * 2022-03-30 2022-07-08 海信视像科技股份有限公司 Display device and collaborative drawing method
CN114727142B (en) * 2022-03-30 2024-05-31 海信视像科技股份有限公司 Display equipment and collaborative drawing method

Similar Documents

Publication Publication Date Title
CN107168674B (en) Screen casting annotation method and system
US10250947B2 (en) Meeting system that interconnects group and personal devices across a network
US8117275B2 (en) Media fusion remote access system
US10965783B2 (en) Multimedia information sharing method, related apparatus, and system
CN111459438A (en) System, method, terminal and server for synchronizing drawing content with multiple terminals
CN113434106B (en) Online electronic whiteboard content synchronous sharing system
CN104932814A (en) Data transmission method and system and electronic terminal
CN107766024B (en) PPT projection control method and system based on splicing wall
EP2579588B1 (en) Collaborative meeting systems that enable parallel multi-user input to mark up screens
CN108335342B (en) Method, apparatus and computer program product for multi-person drawing on a web browser
CN105578110A (en) Video call method, device and system
CN112987915B (en) AST-based method applied to VR conference and whiteboard editing task
CN117044189A (en) Multi-user interactive board for improving video conference
CN105426148A (en) Multi-terminal-screen synchronization method based on vectorgraph
CN114168098A (en) Data synchronization method, device, equipment and storage medium of electronic whiteboard
CN114845136B (en) Video synthesis method, device, equipment and storage medium
CN114168060A (en) Electronic whiteboard rendering method, device, equipment and storage medium
CN114025147A (en) Data transmission method and system for VR teaching, electronic equipment and storage medium
CN111766998B (en) Data interaction method and device, electronic equipment and computer readable storage medium
CN111352539A (en) Method and device for terminal interaction
CN114615535A (en) Synchronous display method and device, electronic equipment and readable storage medium
CN103336649A (en) Feedback window image sharing method and device among terminals
JP2004325941A (en) Drawing processor, drawing method, drawing program, and electronic conference system provided with them
CN112214149A (en) Intelligent interactive display device, screen capture method and system thereof, and electronic equipment
CN104915100A (en) Multimedia interactive system and control method thereof based on network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination