CN112698759A - Labeling method and device and electronic equipment - Google Patents


Info

Publication number
CN112698759A
Authority
CN
China
Prior art keywords
conference
annotation
marking
position information
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011585513.5A
Other languages
Chinese (zh)
Other versions
CN112698759B
Inventor
夏正冬
刘王胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202011585513.5A
Publication of CN112698759A
Application granted
Publication of CN112698759B
Legal status: Active

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the disclosure disclose an annotation method, an annotation apparatus, and an electronic device. One embodiment of the method comprises: acquiring operation position information of a conference annotation operation, where the conference annotation operation is performed by a first participant user on a shared content interface in a multimedia conference; and displaying a vector graphic corresponding to the operation position information, where the vector graphic is the conference annotation content indicated by the conference annotation operation. A new way of annotating within a conference can thereby be provided.

Description

Labeling method and device and electronic equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a labeling method and apparatus, and an electronic device.
Background
With the development of the internet, terminal devices offer users more and more functions, making work and life more convenient. For example, a user may initiate a multimedia conference with other users online via a terminal device. Through an online multimedia conference, users can interact remotely and hold a meeting without gathering in one place. Multimedia conferences thus largely avoid the location constraints of traditional face-to-face meetings.
Disclosure of Invention
This disclosure is provided to introduce concepts in a simplified form that are further described below in the detailed description. This disclosure is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an annotation method, where the method includes: acquiring operation position information of a conference marking operation, wherein the conference marking operation is a conference marking operation implemented by a first participant user in a multimedia conference on a shared content interface; and displaying a vector graph corresponding to the operation position information, wherein the vector graph is the conference marking content indicated by the conference marking operation.
In a second aspect, an embodiment of the present disclosure provides an annotation apparatus, including: an acquisition unit configured to acquire operation position information of a conference annotation operation, where the conference annotation operation is performed by a first participant user on a shared content interface in a multimedia conference; and a display unit configured to display a vector graphic corresponding to the operation position information, where the vector graphic is the conference annotation content indicated by the conference annotation operation.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the annotation method of the first aspect.
In a fourth aspect, the disclosed embodiments provide a computer readable medium, on which a computer program is stored, which when executed by a processor, implements the steps of the annotation method according to the first aspect.
According to the annotation method, the annotation apparatus, and the electronic device, a vector graphic is displayed based on the operation position information of the conference annotation operation, and the displayed vector graphic indicates the conference annotation content. Therefore, when the conference annotation content is enlarged on screen, its definition is preserved, achieving lossless magnification.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a flow diagram of one embodiment of an annotation method according to the present disclosure;
FIG. 2 is a schematic structural diagram of one embodiment of a tagging device according to the present disclosure;
FIG. 3 is an exemplary system architecture to which the annotation methodology of one embodiment of the present disclosure may be applied;
fig. 4 is a schematic diagram of a basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring to fig. 1, a flow diagram of one embodiment of an annotation method according to the present disclosure is shown. The labeling method is applied to terminal equipment. The labeling method shown in fig. 1 includes the following steps:
step 101, obtaining operation position information of a conference marking operation.
In this embodiment, an execution subject (for example, a terminal device) of the annotation method may obtain operation position information of the conference annotation operation.
In this embodiment, the conference tagging operation may be a tagging operation performed by a participant user of the multimedia conference, and the tagging operation may be performed on a shared content interface.
In this embodiment, the multimedia conference may be an online conference performed by using a multimedia method. The multimedia conference comprises at least one of the following: audio conferences, audio-video conferences. It can be understood that the audio-video conference means that, in the conference process, both audio interaction and video interaction exist. In some embodiments, the multimedia conference may be an audio-video conference.
In this embodiment, the application for starting the multimedia conference may be an application through which the service end can provide a conference service of the multimedia conference, and the type of the application may be various, which is not limited herein. For example, the application may be an instant video conference application, a messaging application, a video playing application, a mail application, and the like.
In this embodiment, a multimedia conference typically has at least two participating users once it is underway. Admittedly, just after a conference starts there may briefly be only one joined user; this situation usually ends quickly, either because a new user joins (bringing the count to at least two) or, when no new user joins, because the sole user ends the conference. Since a single-participant scene is generally short and rarely involves sharing content with others, the embodiments of the present application generally apply to application scenarios with at least two users in the multimedia conference.
In this embodiment, the user logged in locally on the execution subject may be referred to as the first participant user. It should be understood that "first" in "first participant user" is merely for convenience of description and does not limit annotation order, joining order, or the like.
In this embodiment, the shared content interface may be an interface for displaying shared content. The shared content may be content shared to at least a portion of the participating users participating in the multimedia conference. The specific form of the shared content may be various, and is not limited herein. As an example, the shared content may include, but is not limited to, at least one of: sharing screens, sharing files. Shared files may include, but are not limited to: shared documents, shared images, etc.
In this embodiment, the first participant user may perform an annotation operation on the shared content interface. Generally, during a multimedia conference, a participant user may want to direct other users' attention to a certain part of the shared content displayed on the shared content interface; in this scenario, the participant user may perform a conference annotation operation.
In some application scenarios, the shared content interface may present an annotation toolbar containing annotation tool items, e.g., an arrow, a circle, etc. The user can select an annotation tool item and then operate in the shared content interface. As an example, through the annotation operation, a user may indicate content they wish other users to pay attention to, or annotate for themselves as a reminder to pay attention to certain content.
In some application scenarios, the conference annotation operation may be all or part of one annotation behavior of the user. An annotation behavior may refer to the process from the user initiating an annotation operation to the user releasing it. As an example, a user clicks a horizontal-line control, touches the screen with a finger to draw a horizontal line in the interface, and lifts the finger after a segment is drawn; the span from the finger first touching the screen to the finger leaving it can be understood as one annotation behavior, and the horizontal line drawn during that span as one piece of annotation content.
For example, when one piece of annotation content takes a long time to produce, the annotation may, in order to be displayed in real time, be transmitted as it is drawn, using smaller data packets (smaller than a complete piece of annotation content) collected in real time as the transmission unit. In this case, the annotation operation indicated by such a smaller data packet may be only part of one complete annotation behavior of the user.
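The chunked real-time transmission described above can be sketched as follows. This is an illustrative model only: the `StrokePacket` format, its field names, and the `packetize` helper are assumptions, since the patent does not specify a packet layout.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StrokePacket:
    stroke_id: int                     # which annotation behavior this slice belongs to
    seq: int                           # packet order within the stroke
    points: List[Tuple[float, float]]  # raw operation points in this slice
    is_last: bool = False              # True once the finger leaves the screen

def packetize(stroke_id: int, points: List[Tuple[float, float]],
              batch_size: int = 8) -> List[StrokePacket]:
    """Split one stroke's sampled points into small packets so partial
    annotation content can be sent while the stroke is still being drawn."""
    packets = []
    for i in range(0, len(points), batch_size):
        packets.append(StrokePacket(
            stroke_id=stroke_id,
            seq=len(packets),
            points=points[i:i + batch_size],
            is_last=(i + batch_size >= len(points)),
        ))
    return packets
```

A receiver can reassemble the complete annotation behavior by collecting packets that share a `stroke_id` and ordering them by `seq`.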
And 102, displaying a vector graph corresponding to the operation position information.
In this embodiment, the execution subject may display a vector graphic corresponding to the operation position information.
Here, the vector graphics may be conference annotation contents indicated by a conference annotation operation.
In this embodiment, a vector graphic may be an object composed of straight lines and curves; when filled with color, coloring can be applied along the contour edges of the curves. The appearance of a vector graphic is independent of resolution: when the graphic is zoomed, the object maintains its original definition and curvature, and its color and shape neither shift nor deform.
In this embodiment, the vector graphics are generated based on the operation position information, and reference may be made to the technical principle of converting a bitmap into a vector graphics, which is not described herein again.
It should be noted that, in the annotation method provided in this embodiment, the vector graphic is displayed based on the operation position information of the conference annotation operation, and the displayed vector graphic can indicate the conference annotation content. Therefore, when the conference annotation content is enlarged on screen, its definition is preserved, achieving lossless magnification.
In some embodiments, the above method further comprises: based on the operation position information, a drawing parameter is generated.
Here, the execution subject may generate the drawing parameter based on the operation position information.
Here, the drawing parameter may indicate a vector graphic. In other words, the parameter for drawing may be used to draw the conference marking content indicated by the conference marking operation.
Optionally, the parameter for drawing may be used to draw the annotation content locally in the execution subject, and may also be sent to other electronic devices by the execution subject, and the other electronic devices draw the conference annotation content.
In this embodiment, the drawing parameters are generated based on the operation position information, and reference may be made to the technical principle of converting a bitmap into a vector diagram, which is not described herein again.
In addition, in the annotation method provided in this embodiment, the drawing parameters indicating the vector graphic are generated based on the operation position information of the conference annotation operation, so the annotation content drawn with the drawing parameters is a vector graphic. Thus a more accurate vector graphic can be generated, and, as data indicating a vector graphic, the drawing parameters are easy to transmit. Therefore, the definition of the conference annotation content can be preserved when it is enlarged on screen; furthermore, a data basis is provided for transmitting the conference annotation content.
In some embodiments, the parameters for rendering include graphical positioning information. Here, the graphical positioning information is used to indicate a location of the meeting annotation content in the shared content interface.
Here, data indicating a vector graphic in the drawing parameter may be independent of the screen position. The drawing parameter may include data for positioning the vector graphics to the shared content interface, that is, graphics positioning information.
In some embodiments, the method may further include: and sending the drawing parameters including the graphic positioning information to the second participant user.
Here, the second participating user may be another participating user than the first participating user in the multimedia conference. Here, the second participating user may be a user having access rights to the shared content.
The drawing parameters sent to the second participant user may include the graphic positioning information. Thus the position of the conference annotation content displayed for the second participant user is consistent with that displayed for the first participant user. For example, if the first participant user's annotation targets sentence A, the annotation displayed for the second participant user also targets sentence A. This keeps annotations on the shared content consistent across participants during the multimedia conference and improves the accuracy of interaction.
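As a hedged illustration of what a transmitted drawing parameter carrying graphic positioning information might look like, the sketch below defines a hypothetical JSON wire format; all field names are assumptions, not the patent's actual protocol. Positions are expressed as fractions of the shared content interface so they are independent of either participant's screen size.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DrawMessage:
    annotation_id: int  # identifies the annotation behavior unit
    tool: str           # e.g. "arrow", "line", "oval"
    color: str          # e.g. "#FF0000"
    position: list      # (x, y) anchor, as fractions of the content area
    keypoints: list     # curve key points, also content-relative

def encode(msg: DrawMessage) -> str:
    """Serialize the drawing parameters for transmission."""
    return json.dumps(asdict(msg))

def decode(raw: str) -> DrawMessage:
    """Rebuild the drawing parameters on the receiving side."""
    return DrawMessage(**json.loads(raw))
```

Because only these compact parameters travel over the wire, the receiver redraws the annotation at its own resolution instead of decoding pixels.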
In some embodiments, the method may further include: and responding to the situation that the first participating user is a sharing party of the shared content, and respectively sending the drawing parameters and the shared content in the shared content interface to the second participating user.
Here, the first participating user may or may not be a sharing party for sharing the content. If the first participating user is the sharing party of the shared content, the first participating user needs to send the shared content to the second participating user so that the shared content can be presented in the client used by the second participating user.
Here, the second participating user may display shared content, and may display the conference annotation content on the shared content according to the drawing parameter.
Here, the second participating user may receive the drawing parameter and the shared content, respectively, and the shared content may include a video frame. It will be appreciated that the second participating user receives the drawing parameters and the shared content separately, and renders the shared content and the meeting annotation content separately.
Herein, the second participating user may refer to a client to which the second participating user logs in.
It should be noted that in some related technologies, a bitmap image (i.e., the annotation content) indicated by the operation position information is merged into a video frame of the shared content to generate a new composite video frame, which is then transmitted to the other participant users. The resolution of the composite video frame is fixed and cannot be enlarged without loss, so enlarged views appear blurry: once the annotation content is merged into a video frame of the shared content, it is limited by the video frame's resolution and becomes unclear after enlargement.
In contrast, in the annotation method provided by the embodiments of the present application, the drawing parameters and the shared content are transmitted separately, and the drawing parameters indicate vector graphics. The conference annotation content drawn by the second participant user is therefore not limited by the resolution of the transmitted shared content and remains sharp when enlarged, achieving lossless magnification of the conference annotation content.
In some embodiments, the parameters for rendering may include keypoint location information.
In some embodiments, the generating parameters for rendering based on the operation position information may include: fitting the operational position information to a Bezier curve, and generating keypoint position information indicating the Bezier curve.
In some embodiments, the parameter for rendering may include color data. The color data may indicate a color of the annotation content.
Here, the Bézier curve (also written Bezier curve) is a mathematical curve applicable to two-dimensional graphics drawing. A Bézier curve can be described by line segments and nodes; the nodes are draggable anchors, and the segments can be pictured intuitively as stretchable rubber bands.
Here, fitting the operation position information to a Bézier curve means treating the curve indicated by the operation position information as a Bézier curve, which fixes the curve's shape, and then determining the key point positions that specify that shape; this yields the key point position information indicating the Bézier curve.
It should be noted that, by generating the key point position information indicating the bezier curve, the data amount can be reduced, and thus, the data transmission amount when the information for rendering is sent can be reduced, and further, the data transmission efficiency can be improved, and the real-time performance of the display of the annotation data in the multimedia conference can be improved.
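One simple way to realize this data reduction (an illustrative assumption; the patent does not specify its fitting algorithm) is to represent a stroke segment by the four control points of a cubic Bézier curve, recovered from the curve's endpoints and its sampled values at parameters t = 1/3 and 2/3; four key points then replace an arbitrarily long run of raw operation points.

```python
def bezier_point(cp, t):
    """Evaluate a cubic Bezier curve with control points cp at parameter t."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = cp
    s = 1 - t
    x = s**3 * x0 + 3 * s**2 * t * x1 + 3 * s * t**2 * x2 + t**3 * x3
    y = s**3 * y0 + 3 * s**2 * t * y1 + 3 * s * t**2 * y2 + t**3 * y3
    return (x, y)

def fit_cubic(p0, q1, q2, p3):
    """Recover the four Bezier control points (the key point position
    information) from the endpoints and the curve's values q1 = B(1/3)
    and q2 = B(2/3), by solving a small linear system per coordinate."""
    def solve(c0, cq1, cq2, c3):
        a = 27 * cq1 - 8 * c0 - c3   # equals 12*P1 + 6*P2
        b = 27 * cq2 - c0 - 8 * c3   # equals  6*P1 + 12*P2
        return (2 * a - b) / 18, (2 * b - a) / 18
    x1, x2 = solve(p0[0], q1[0], q2[0], p3[0])
    y1, y2 = solve(p0[1], q1[1], q2[1], p3[1])
    return [p0, (x1, y1), (x2, y2), p3]
```

For real strokes with noise, a least-squares fit over all samples would be used instead, but the interpolation above shows the key-point idea exactly.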
In some embodiments, the execution subject may display the conference annotation content according to the rendering parameter.
It should be noted that, according to the conference annotation content displayed by drawing with the drawing parameter, the definition can be ensured in the enlarged scene, and lossless enlargement can be realized.
In some embodiments, the conference annotation content is displayed by a displaying step, where the displaying step includes: in response to the participating device being equipped with graphics acceleration hardware, processing the drawing parameters with an open graphics library to display the conference annotation content.
Here, the participating devices may include terminal devices used by the users to participate in the multimedia conference. It will be appreciated that the terminal devices participating in the multimedia conference may be configured differently.
When the participating device acquires the parameters for drawing and uses the parameters for drawing, different processing methods can be adopted according to whether the participating device is equipped with a graphics accelerator or not so as to display the conference annotation content.
Here, the displaying step may be performed by the first participating user or the second participating user.
Here, graphics acceleration hardware, typically in the form of a graphics accelerator card, is a display adapter that performs graphics operations in dedicated chips. Because a graphics accelerator card computes graphics faster than the CPU, an electronic device equipped with one excels at image processing.
Here, the Open Graphics Library (OpenGL) is a cross-language, cross-platform application programming interface (API) that can be used to render 2D and 3D vector graphics.
As an example, when drawing with the open graphics library, a background drawing thread (responsible for drawing the stored drawing parameters to a GLSurfaceView) converts the drawing parameters into vertex data of a number of triangles (for example, converting a straight line into two right triangles and a circle into many isosceles triangles), writes the vertex data (position, color, and other information) into an OpenGL drawing buffer, and finally calls the OpenGL API to draw all the triangles.
Alternatively, if the participating device is not equipped with graphics acceleration hardware, a canvas control (Canvas) may be used for drawing. When drawing with Canvas, the background drawing thread converts the drawing parameters into Canvas drawing operations, such as drawLine, drawOval, and the like, and then draws the target graphic directly on the Canvas.
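The line-to-two-triangles conversion mentioned above can be sketched geometrically. This is renderer-agnostic illustrative code, not actual OpenGL calls: it only computes the vertex positions that would be written into the drawing buffer.

```python
import math

def line_to_triangles(p0, p1, width):
    """Expand a line segment of the given stroke width into two triangles
    (one quad), the form consumed by a triangle-based renderer such as OpenGL."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    half = width / 2.0
    nx, ny = -dy / length * half, dx / length * half  # perpendicular offset
    a = (p0[0] + nx, p0[1] + ny)
    b = (p0[0] - nx, p0[1] - ny)
    c = (p1[0] + nx, p1[1] + ny)
    d = (p1[0] - nx, p1[1] - ny)
    return [(a, b, c), (b, d, c)]  # two triangles covering the quad
```

Curves are handled the same way after being flattened into short segments, which is why the triangle count grows with zoom level rather than the image blurring.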
In some embodiments, the step 102 may include: generating a target marking behavior unit according to the operation position information of the conference marking operation; adding the target marking behavior unit into a marking behavior queue; and displaying a vector graph corresponding to the operation position information based on the marked behavior queue.
Here, the annotation behavior unit may characterize parameters for drawing corresponding to a conference annotation operation from the initiation of the annotation operation to the release of the annotation operation.
Here, the annotated behavior queue may be generated according to the locally generated annotated behavior unit and the received timestamp of the annotated behavior unit.
Here, when the user touches the screen to start drawing (i.e., initiates an annotation behavior), the execution subject may detect the touch event and generate a series of raw operation points as the touch proceeds. As the annotation behavior progresses, part of the raw data it generates can be converted into drawing parameters in real time; these partial drawing parameters (tagged with a behavior label indicating which conference annotation unit they belong to) are then drawn locally in real time and sent to the second participant user for display.
Here, a continuous annotation action, lasting until the user stops touching the screen (i.e., releases the annotation behavior), can be grouped into a single conference annotation unit. A conference annotation unit can be pictured intuitively as one stroke: for example, after the user selects the arrow tool item, the action from starting to stopping touching the screen can be regarded as the user drawing one arrow, completing one stroke of annotation. This annotation serves as one conference annotation unit, and the drawing parameters corresponding to it can be understood as one annotation behavior unit (i.e., one data set).
Here, the tagged behavior units in the tagged behavior unit queue are sorted according to the time stamp precedence order.
Optionally, the timestamp may indicate the time when the annotation action starts, or may indicate the time when the annotation action ends.
Optionally, the order of each annotation behavior unit in the annotation behavior unit queue may be determined by the time when the user touches the screen to start drawing (i.e., start the annotation action).
Optionally, a time label may be marked on the drawing parameter in the tagged behavior unit, and then the order of each tagged behavior unit in the tagged behavior unit queue is determined according to the time label.
Here, the locally generated annotation behavior unit and the annotation behavior unit transmitted from the second participant user may indicate the annotation behavior of each participant user in the multimedia conference. The formed marking behavior units are arranged according to the time sequence, so that each marking behavior in the multimedia conference can be orderly indicated.
It should be noted that, the determination of the marked behavior unit queue can facilitate the user to change the marked content by taking the marked behavior unit as a unit.
In some embodiments, the method further comprises: and in response to the detection of the predefined annotation withdrawing operation, deleting the annotation content indicated by the annotation behavior unit positioned at the tail of the queue in the annotation behavior unit queue in the shared content interface.
Here, an annotation-withdrawal tool item may be provided in the annotation toolbar. In response to the user operating the annotation-withdrawal tool item, conference annotation content is deleted, i.e., withdrawn. Withdrawing annotation content can be understood as ceasing to display it.
Here, the withdrawing may be withdrawing the latest annotation content, that is, the annotation content indicated by the annotation behavior unit located at the end of the queue in the annotation behavior unit queue.
It should be noted that, the annotation content is managed according to the annotation behavior unit queue, so that the user can withdraw the annotation behavior according to one annotation as a unit, the user can conveniently change the made annotation, and the interaction efficiency in the multimedia conference process is improved.
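A minimal sketch of the annotation behavior unit queue and tail withdrawal described above; the class and field names are illustrative assumptions, not the patent's data structures.

```python
from dataclasses import dataclass

@dataclass
class AnnotationUnit:
    timestamp: float   # when the annotation behavior started
    user_id: str       # which participant user produced it
    draw_params: list  # drawing parameters for one complete stroke

class AnnotationQueue:
    """Merges locally generated units and units received from other
    participants, ordered by timestamp; undo drops the tail of the queue."""
    def __init__(self):
        self._units = []

    def add(self, unit: AnnotationUnit) -> None:
        self._units.append(unit)
        self._units.sort(key=lambda u: u.timestamp)  # keep timestamp order

    def undo(self):
        """Withdraw the most recent annotation behavior unit, if any."""
        return self._units.pop() if self._units else None
```

Because every participant orders units by the same timestamps, withdrawing from the tail removes the same stroke on each device.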
In some embodiments, the method further comprises: and sending a withdrawing operation request generated based on the withdrawing operation to a second participating user, and deleting (or withdrawing) the marked content indicated by the withdrawing operation request by the second participating user.
Here, the first participant user may perform an annotation-withdrawal operation, and the execution subject may then generate an annotation-withdrawal request from it. The execution subject may send the annotation-withdrawal request to the second participant user, who withdraws the annotation content corresponding to the request.
Optionally, the annotation withdrawing request may include indication information of the annotation to be withdrawn. The annotation to be recalled indication information can be used for indicating annotation content to be recalled. The second participant user receives the annotation withdrawing request, and can withdraw the annotation content indicated by the indication information to be annotated in the annotation withdrawing request.
Optionally, by a predetermined convention, each annotation withdrawal request withdraws from the tail of the queue. Each time the second participant user receives an annotation withdrawal request, the second participant user withdraws the annotation content indicated by the annotation behavior unit located at the tail of the annotation behavior unit queue.
It should be noted that, through the withdrawal operation request, the display of annotation content can be synchronized among the participants in time, ensuring as far as possible that the same annotation content is displayed to all participants.
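The tail-withdrawal convention between two participant ends can be sketched as below; the JSON message format and the dictionary-based queue entries are assumptions made for illustration, not the patent's wire format:

```python
import json
from typing import List

def make_withdraw_request() -> str:
    # Sender side: by the predetermined convention the request carries no
    # unit identifier; the receiver always withdraws from the queue tail.
    return json.dumps({"type": "annotation_withdraw"})

def handle_message(message: str, unit_queue: List[dict]) -> None:
    # Receiver side: on a withdraw request, drop the tail unit so both
    # participant ends converge on the same displayed annotation content.
    parsed = json.loads(message)
    if parsed.get("type") == "annotation_withdraw" and unit_queue:
        unit_queue.pop()
```

Because both ends keep their annotation behavior unit queues in the same timestamp order, popping the tail on each end removes the same annotation without transmitting a unit identifier.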
In some embodiments, the generating parameters for drawing based on the operation position information may include: generating the drawing parameters according to the reference resolution of the multimedia conference. In other words, the resolution on which the drawing parameters are based is the reference resolution of the multimedia conference.
Here, the reference resolution of the multimedia conference may be specified in advance. The specified rule may be set according to actual conditions, and is not limited herein.
As an example, the reference resolution may be 1920 × 1080, or may be 3840 × 2160, or the like.
In some application scenarios, if the screen resolution of the execution subject is not the pre-specified reference resolution, the drawing parameters at the reference resolution may be obtained by normalizing the drawing parameters to the reference resolution through conversion.
It should be noted that, because the drawing parameters are based on the reference resolution, when the drawing parameters are transmitted to the second participant user, the second participant user can determine in advance the conversion from the reference resolution to display at its own resolution. Therefore, the second participant user can complete the conversion quickly after receiving the drawing parameters, thereby realizing fast display.
In some embodiments, the step 101 may include: converting the original operation position information of the conference annotation operation into operation position information at the reference resolution, according to the local resolution and the reference resolution.
In some application scenarios, the execution subject may convert the original operation position information by normalizing it to the reference resolution, and then determine the drawing parameters at the reference resolution according to the operation position information obtained by the conversion.
In some embodiments, the reference resolution is the screen resolution of the sharing initiating user.
In some embodiments, the method further comprises: in response to the sharing initiating user starting sharing, sending the reference resolution to the participant users of the multimedia conference.
Here, by using the screen resolution of the sharing initiator as the reference resolution, the time for the sharing initiator to convert from another resolution to the reference resolution can be saved, and the speed of sending out the drawing parameters can be increased. Generally, the sharing initiating user acts as the main sharer of the multimedia conference, and the main sharer usually performs more explanation and annotation; that is, more annotation content is sent from the sharing initiating user's end. Therefore, avoiding conversion to the reference resolution at the sharing initiating user's end can reduce the amount of calculation and the communication traffic of that end, ensure that annotation proceeds smoothly, and allow the data indicating the annotation content to be sent to the other participant ends as soon as possible.
In some embodiments, the reference resolution is determined from screen resolutions of participants using personal computers as participant devices.
In some application scenarios, the size of the annotation layer on a Personal Computer (PC) is not consistent with the size of the annotation layer under a mobile terminal operating system. If a PC is present as a participant device, the screen resolution of the PC may be used as the reference resolution.
As an example, in response to detecting that the multimedia conference has a PC as a participant device, each participant may be informed of the resolution of the PC (e.g., 1920 × 1080), and the drawing parameters transmitted between participants thereafter are position information at this size. For example, if the resolution of the annotation layer under the operating system of mobile device A is 3840 × 2160, the operation position information can be converted into coordinates based on the 1920 × 1080 resolution, and the drawing parameters are likewise based on 1920 × 1080 coordinates. When rendering is performed by a mobile device equipped with operating system A, all data at 1920 × 1080 is first converted into resolution coordinates under system A of that mobile device, and then rendered.
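The normalization walked through above amounts to a pair of per-axis proportional scalings. A minimal sketch, assuming simple linear mapping between the local annotation-layer resolution and the reference resolution:

```python
from typing import Tuple

Resolution = Tuple[int, int]
Point = Tuple[float, float]

def to_reference(point: Point, local: Resolution,
                 reference: Resolution = (1920, 1080)) -> Point:
    # Sender side: normalize an operation position from the local
    # annotation-layer resolution to the reference resolution.
    return (point[0] * reference[0] / local[0],
            point[1] * reference[1] / local[1])

def from_reference(point: Point, local: Resolution,
                   reference: Resolution = (1920, 1080)) -> Point:
    # Receiver side: convert reference-resolution drawing parameters
    # back to the local resolution before rendering.
    return (point[0] * local[0] / reference[0],
            point[1] * local[1] / reference[1])
```

For a 3840 × 2160 annotation layer, `to_reference((100.0, 50.0), (3840, 2160))` halves each coordinate to `(50.0, 25.0)`, and `from_reference` inverts the mapping on the receiving device.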
It should be noted that, by using the screen resolution of the PC as the reference resolution, the probability that the multimedia conference needs to perform conversion can be reduced. In other words, if there is no PC as a participant device, the multimedia conference does not need to perform conversion for screen resolution. In some application scenarios, the situation that a PC participates in the conference may be relatively rare, so the computation for converting the screen resolution can be avoided with a high probability; thereby, the amount of calculation of the multimedia conference can be reduced as much as possible, and the smooth operation of the multimedia conference ensured.
In some embodiments, the step of determining the reference resolution may comprise: determining, from the participants using a personal computer as the participant device, the participant with the earliest joining time; and determining the screen resolution of the determined participant as the reference resolution.
Optionally, the participants using personal computers as participant devices can be determined from the current participants of the conference; then, from these participants, the participant with the earliest joining time is determined. The participant with the earliest joining time (using a personal computer as the participant device) can be used as the participant for determining the reference resolution.
Alternatively, a queue of participants using personal computers as participant devices can be maintained during the multimedia conference, generated according to joining time. If a participant exits, the exited participant is deleted from the queue; if a participant rejoins after exiting, it is appended to the tail of the queue. Thereby, it can be ensured that the participant at the head of the queue is the earliest participant in the multimedia conference using a personal computer as the participant device.
It should be noted that, by taking the participant with the earliest joining time (using a personal computer as the participant device) as the participant that determines the reference resolution, as long as some participant in the multimedia conference uses a personal computer as the participant device, the screen resolution of a PC is set as the reference resolution as far as possible, regardless of changes in the participants. Thereby, the amount of calculation of the multimedia conference can be reduced as much as possible, and the smooth proceeding of the multimedia conference guaranteed.
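The join-order queue described above can be sketched as follows; participant identifiers and the resolution tuple are illustrative assumptions:

```python
from typing import List, Optional, Tuple

Resolution = Tuple[int, int]

class PcParticipantQueue:
    """Queue of participants using a PC as the participant device,
    kept in joining order; the head determines the reference resolution."""

    def __init__(self) -> None:
        self._queue: List[Tuple[str, Resolution]] = []

    def join(self, participant_id: str, screen: Resolution) -> None:
        # A joining (or rejoining) PC participant is appended at the tail.
        self._queue.append((participant_id, screen))

    def leave(self, participant_id: str) -> None:
        # An exiting participant is deleted from the queue.
        self._queue = [(p, r) for p, r in self._queue if p != participant_id]

    def reference_resolution(self) -> Optional[Resolution]:
        # The earliest-joined PC participant, at the head, sets the reference.
        return self._queue[0][1] if self._queue else None
```

A rejoining participant lands at the tail, so the reference resolution only changes when the current head exits, which keeps the reference as stable as possible across participant churn.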
In some embodiments, the method further comprises: displaying at least one annotation graphic control, wherein each of the at least one annotation graphic control is in one-to-one correspondence with a graphic generation method; and in response to detecting a triggering operation on an annotation graphic control, invoking the graphic generation method corresponding to the triggered annotation graphic control.
By way of example, the annotation graphic controls can be used to render annotation content in a variety of annotation styles. For example, the annotation graphic controls can include, but are not limited to, controls implementing at least one of: drawing arrows, drawing circles, drawing straight lines, drawing rectangles, and the like.
Here, a graphic generation method may be used to draw a specific graphic. For example, triggering the arrow graphic control can invoke the method for drawing an arrow, which draws the arrow according to the starting position and the ending position of the user operation; this arrow may serve as annotation content.
It should be noted that, since a one-to-one correspondence is set between annotation graphic controls and graphic generation methods, when a new graphic control is added, only the corresponding graphic generation method needs to be added and the correspondence established, so that extension of the annotation graphic types can be realized quickly. Therefore, the workload of extending annotation graphics can be reduced, and the difficulty of extending annotation graphics lowered.
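The one-to-one correspondence can be sketched as a registry that dispatches from a control name to its generation method. The control names ("arrow", "rectangle") and the returned dictionary shape are assumptions made for this example:

```python
from typing import Callable, Dict, Tuple

Point = Tuple[float, float]
Generator = Callable[[Point, Point], dict]

# One-to-one map from annotation graphic control to graphic generation method.
_GENERATORS: Dict[str, Generator] = {}

def register(control_name: str) -> Callable[[Generator], Generator]:
    # Extending the annotation graphic types only requires registering
    # one new generation method under a new control name.
    def wrap(fn: Generator) -> Generator:
        _GENERATORS[control_name] = fn
        return fn
    return wrap

@register("arrow")
def draw_arrow(start: Point, end: Point) -> dict:
    return {"type": "arrow", "from": start, "to": end}

@register("rectangle")
def draw_rectangle(start: Point, end: Point) -> dict:
    return {"type": "rectangle", "from": start, "to": end}

def on_control_triggered(control_name: str, start: Point, end: Point) -> dict:
    # Invoke the generation method corresponding to the triggered control.
    return _GENERATORS[control_name](start, end)
```

Adding a circle tool, for instance, would be one new `@register("circle")` function and nothing else; the dispatch logic is untouched.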
In some application scenarios, the step of processing the operation position information to obtain the drawing parameters, the step of sending the drawing parameters to the second participant user, the step of determining the annotation behavior unit queue, and the like may be packaged into a Software Development Kit (SDK). In this way, the annotation processing function can be relatively separated from the client carrying out the multimedia conference, annotation processing can be carried out conveniently and rapidly, and the real-time performance of annotation processing and display can be ensured.
With further reference to fig. 2, as an implementation of the method shown in the above-mentioned figures, the present disclosure provides an embodiment of a labeling apparatus, which corresponds to the embodiment of the method shown in fig. 1, and which can be applied to various electronic devices.
As shown in fig. 2, the annotation apparatus of the present embodiment includes: an acquisition unit 201 and a presentation unit 202. The acquisition unit is configured to acquire operation position information of a conference annotation operation, wherein the conference annotation operation is a conference annotation operation implemented by a first participant user in a multimedia conference on a shared content interface; and the presentation unit is configured to display a vector graphic corresponding to the operation position information, wherein the vector graphic is the conference annotation content indicated by the conference annotation operation.
In this embodiment, for the detailed processing of the acquisition unit 201 and the presentation unit 202 of the annotation apparatus and the technical effects thereof, reference may be made to the related descriptions of step 101 and step 102 in the embodiment corresponding to fig. 1, which are not repeated herein.
In some embodiments, the apparatus is further configured to: generating a parameter for drawing based on the operation position information, wherein the parameter for drawing indicates the vector graphics.
In some embodiments, the parameters for rendering include graphical positioning information, wherein the graphical positioning information is used to indicate a location of the meeting annotation content in the shared content interface; and the apparatus is further configured to: and sending the drawing parameters including the graphic positioning information to a second participating user, wherein the second participating user can be a user with access right of shared content in the multimedia conference.
In some embodiments, sending the drawing parameters including the graphical positioning information to a second participant user comprises: and in response to the fact that the first participant user is determined to be a sharing party of the shared content, respectively sending parameters for drawing and the shared content in the shared content interface to a second participant object, wherein the second participant object displays the shared content, and displays the conference marking content on the shared content according to the parameters for drawing.
In some embodiments, the parameters for rendering include keypoint location information; and generating a parameter for drawing based on the operation position information, including: fitting the operational position information to a Bezier curve, and generating keypoint position information indicating the Bezier curve.
In some embodiments, the apparatus is further configured to: and displaying the conference marking content according to the drawing parameters.
In some embodiments, the conference annotation content is displayed through a displaying step, wherein the displaying step includes: in response to determining that the participant device is provided with graphics acceleration hardware, adopting an open graphics library (e.g., OpenGL) to process the drawing parameters to display the conference annotation content.
In some embodiments, the presenting a vector graphic corresponding to the operation position information includes: generating a target annotation behavior unit according to the operation position information of the conference annotation operation, wherein an annotation behavior unit comprises the drawing parameters corresponding to the conference annotation operation from the start of an annotation action to the release of the annotation action; adding the target annotation behavior unit into an annotation behavior unit queue, wherein the annotation behavior unit queue is generated according to the timestamps of locally generated annotation behavior units and received annotation behavior units; and displaying the vector graphic corresponding to the operation position information based on the annotation behavior unit queue.
In some embodiments, the apparatus is further configured to: and in response to the detection of the predefined annotation withdrawing operation, deleting the annotation content indicated by the annotation behavior unit positioned at the tail of the queue in the annotation behavior unit queue in the shared content interface.
In some embodiments, the apparatus is further configured to: send a withdrawal operation request generated based on the withdrawal operation to a second participant user, so that the second participant user deletes the annotation content indicated by the withdrawal operation request.
In some embodiments, the generating parameters for rendering based on the operation position information includes: and generating drawing parameters according to the reference resolution of the multimedia conference.
In some embodiments, the reference resolution is the screen resolution of a sharing initiating user; and the apparatus is further configured to: in response to the sharing initiating user starting sharing, send the reference resolution to the participant users of the multimedia conference.
In some embodiments, the reference resolution is determined from the screen resolutions of participants using personal computers as participant devices.
In some embodiments, the determining of the reference resolution comprises: determining, from the participants using a personal computer as the participant device, the participant with the earliest joining time; and determining the screen resolution of the determined participant as the reference resolution.
In some embodiments, the apparatus is further configured to: display at least one annotation graphic control, wherein each of the at least one annotation graphic control is in one-to-one correspondence with a graphic generation method; and in response to detecting a triggering operation on an annotation graphic control, invoke the graphic generation method corresponding to the triggered annotation graphic control.
Referring to fig. 3, fig. 3 illustrates an exemplary system architecture to which the annotation methodology of one embodiment of the present disclosure may be applied.
As shown in fig. 3, the system architecture may include terminal devices 301, 302, 303, a network 304, and a server 305. The network 304 serves as a medium for providing communication links between the terminal devices 301, 302, 303 and the server 305. Network 304 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 301, 302, 303 may interact with a server 305 over a network 304 to receive or send messages or the like. The terminal devices 301, 302, 303 may have various client applications installed thereon, such as a web browser application, a search-type application, a news-information-type application. The client application in the terminal device 301, 302, 303 may receive the instruction of the user, and complete the corresponding function according to the instruction of the user, for example, add the corresponding information to the information according to the instruction of the user.
The terminal devices 301, 302, 303 may be hardware or software. When the terminal devices 301, 302, 303 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal device 301, 302, 303 is software, it can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 305 may be a server providing various services, for example, receiving an information acquisition request sent by the terminal devices 301, 302, 303, and acquiring the presentation information corresponding to the information acquisition request in various ways according to the information acquisition request. And the relevant data of the presentation information is sent to the terminal devices 301, 302, 303.
It should be noted that the annotation method provided by the embodiment of the present disclosure may be executed by a terminal device, and accordingly, the annotation device may be disposed in the terminal device 301, 302, 303. In addition, the annotation method provided by the embodiment of the present disclosure may also be executed by the server 305, and accordingly, the annotation device may be disposed in the server 305.
It should be understood that the number of terminal devices, networks, and servers in fig. 3 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 4, shown is a schematic diagram of an electronic device (e.g., a terminal device or a server of fig. 3) suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 illustrates an electronic device having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring operation position information of a conference marking operation, wherein the conference marking operation is a conference marking operation implemented by a first participant user in a multimedia conference on a shared content interface; and generating a parameter for drawing based on the operation position information, wherein the parameter for drawing indicates a vector graph, and the parameter for drawing is used for drawing the conference marking content indicated by the conference marking operation.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Here, the name of the unit does not constitute a limitation of the unit itself in some cases, and for example, the acquisition unit may also be described as a "unit that acquires operation position information".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the technical solutions formed by the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, a technical solution formed by replacing the above features with (but not limited to) features having similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (18)

1. A method of labeling, comprising:
acquiring operation position information of a conference marking operation, wherein the conference marking operation is a conference marking operation implemented by a first participant user in a multimedia conference on a shared content interface;
and displaying a vector graph corresponding to the operation position information, wherein the vector graph is the conference marking content indicated by the conference marking operation.
2. The method of claim 1, further comprising:
generating a parameter for drawing based on the operation position information, wherein the parameter for drawing indicates the vector graphics.
3. The method of claim 2, wherein the drawing parameters comprise graphic positioning information, the graphic positioning information indicating a position of the conference annotation content in the shared content interface; and
the method further comprises:
sending the drawing parameters including the graphic positioning information to a second participant user, wherein the second participant user is a user with access rights to the shared content in the multimedia conference.
4. The method of claim 3, wherein the sending the drawing parameters including the graphic positioning information to a second participant user comprises:
in response to determining that the first participant user is the sharing party of the shared content, sending the drawing parameters and the shared content in the shared content interface separately to a second participant object, wherein
the second participant object displays the shared content and displays the conference annotation content on the shared content according to the drawing parameters.
5. The method of claim 2, wherein the drawing parameters include key point position information; and
the generating drawing parameters based on the operation position information comprises:
fitting the operation position information to a Bezier curve, and generating key point position information indicating the Bezier curve.
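The fitting step of claim 5 can be illustrated as a least-squares fit of the sampled pointer positions to a single cubic Bezier curve, whose four control points then serve as the key point position information. This is only a sketch: the patent does not specify the curve degree or fitting method, and the function name and chord-length parameterisation are assumptions.

```python
import numpy as np

def fit_cubic_bezier(points):
    """Fit one cubic Bezier curve to a stroke of (x, y) samples.

    Returns a 4x2 array of control points — a compact "key point"
    representation of the stroke.
    """
    pts = np.asarray(points, dtype=float)
    # Chord-length parameterisation: t_i proportional to distance travelled.
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    # Bernstein basis matrix for a cubic Bezier at each sample parameter.
    A = np.stack([(1 - t) ** 3,
                  3 * (1 - t) ** 2 * t,
                  3 * (1 - t) * t ** 2,
                  t ** 3], axis=1)
    # Solve A @ ctrl ≈ pts in the least-squares sense.
    ctrl, *_ = np.linalg.lstsq(A, pts, rcond=None)
    return ctrl
```

Because the first and last Bernstein rows are [1,0,0,0] and [0,0,0,1], the fitted curve interpolates the stroke's endpoints, which is usually the desired behaviour for freehand annotation.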
6. The method of claim 2, further comprising:
displaying the conference annotation content according to the drawing parameters.
7. The method of claim 2, wherein the conference annotation content is displayed via a displaying step, wherein the displaying step comprises:
in response to determining that the conference device is provided with graphics acceleration hardware, processing the drawing parameters with an open graphics library to display the conference annotation content.
8. The method of claim 1, wherein the displaying a vector graphic corresponding to the operation position information comprises:
generating a target annotation behavior unit according to the operation position information of the conference annotation operation, wherein an annotation behavior unit comprises the drawing parameters corresponding to a conference annotation operation from the start of an annotation action to the release of the annotation action;
adding the target annotation behavior unit to an annotation behavior unit queue, wherein the annotation behavior unit queue is ordered according to the timestamps of locally generated and received annotation behavior units; and
displaying the vector graphic corresponding to the operation position information based on the annotation behavior unit queue.
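The timestamp-ordered queue of claim 8 can be sketched as follows: locally generated and remotely received units go through the same insertion path, so every participant replays the strokes in the same order even when remote units arrive late. Class and field names here are illustrative, not taken from the patent.

```python
import bisect
from dataclasses import dataclass, field

@dataclass(order=True)
class AnnotationUnit:
    """One stroke: the drawing parameters captured from the start of an
    annotation action (pen down) to its release (pen up)."""
    timestamp: float
    params: list = field(compare=False, default_factory=list)

class AnnotationQueue:
    """Keeps local and received annotation behavior units merged in
    timestamp order."""
    def __init__(self):
        self.units = []

    def add(self, unit):
        # Same path for local and remote units; a late-arriving remote
        # stroke is inserted at its timestamp position, not appended.
        bisect.insort(self.units, unit)

    def replay_order(self):
        return [u.timestamp for u in self.units]
```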
9. The method of claim 8, further comprising:
in response to detecting a predefined annotation withdrawal operation, deleting, in the shared content interface, the annotation content indicated by the annotation behavior unit located at the tail of the annotation behavior unit queue.
10. The method of claim 9, further comprising:
sending a withdrawal operation request generated based on the withdrawal operation to a second participant user, so that the second participant user deletes the annotation content indicated by the withdrawal operation request.
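Claims 9 and 10 together describe an undo: the newest unit (the queue tail) is removed locally, and a withdrawal request is propagated so other participants delete the same content. A minimal sketch — the request format and the `send` callback are invented for illustration:

```python
class SharedAnnotationState:
    def __init__(self, send):
        self.send = send      # callback delivering a request to peers
        self.queue = []       # annotation behavior units, oldest first

    def add_unit(self, unit_id, params):
        self.queue.append((unit_id, params))

    def withdraw(self):
        """Delete the unit at the tail of the queue and notify peers."""
        if not self.queue:
            return None       # nothing to withdraw
        unit_id, _ = self.queue.pop()
        self.send({"type": "withdraw", "unit_id": unit_id})
        return unit_id
```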
11. The method of claim 2, wherein the generating drawing parameters based on the operation position information comprises:
generating the drawing parameters according to a reference resolution of the multimedia conference.
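One plausible reading of claim 11 is that stroke coordinates are stored in the coordinate space of a shared reference resolution, and each participant scales them to its own screen. The helper below illustrates that reading; it is an assumption, not a step stated in the patent.

```python
def to_local(points, reference, local):
    """Map points expressed at the reference resolution onto the local
    screen. `reference` and `local` are (width, height) tuples."""
    sx = local[0] / reference[0]
    sy = local[1] / reference[1]
    return [(x * sx, y * sy) for x, y in points]
```

Keeping drawing parameters in one reference space means an annotation lands on the same spot of the shared content regardless of each participant's screen size.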
12. The method of claim 11, wherein the reference resolution is the screen resolution of the sharing-initiating user; and
the method further comprises:
in response to the sharing-initiating user starting sharing, sending the reference resolution to the participant users of the multimedia conference.
13. The method of claim 11, wherein the reference resolution is determined from the screen resolutions of participants whose conference devices are personal computers.
14. The method of claim 13, wherein the step of determining the reference resolution comprises:
determining, from among the participant objects whose conference devices are personal computers, the participant object that joined the conference earliest; and
determining the screen resolution of the determined participant object as the reference resolution.
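The selection rule of claims 13-14 — take the PC participant that joined the conference earliest and use its screen resolution — can be sketched directly. The participant record fields used here are assumptions for illustration.

```python
def reference_resolution(participants):
    """participants: iterable of dicts with 'device', 'joined_at', and
    'resolution' ((width, height)) fields — field names are assumed."""
    pcs = [p for p in participants if p["device"] == "pc"]
    if not pcs:
        return None  # the claims do not cover a conference with no PC
    earliest = min(pcs, key=lambda p: p["joined_at"])
    return earliest["resolution"]
```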
15. The method of claim 1, further comprising:
displaying at least one annotation graphic control, wherein the annotation graphic controls correspond one-to-one to graphic generation methods; and
in response to detecting a trigger operation on an annotation graphic control, invoking the graphic generation method corresponding to the triggered annotation graphic control.
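The one-to-one mapping between annotation graphic controls and graphic generation methods in claim 15 amounts to a dispatch table. A minimal sketch with invented control identifiers and generator names:

```python
def freehand(points):
    return {"kind": "freehand", "points": points}

def rectangle(points):
    return {"kind": "rectangle", "points": points}

# One control id per generation method (ids are illustrative).
GENERATORS = {"pen": freehand, "rect": rectangle}

def on_control_triggered(control_id, points):
    """Invoke the generation method bound to the triggered control."""
    return GENERATORS[control_id](points)
```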
16. An annotation apparatus, comprising:
an acquisition unit configured to acquire operation position information of a conference annotation operation, wherein the conference annotation operation is an annotation operation performed by a first participant user in a multimedia conference on a shared content interface; and
a display unit configured to display a vector graphic corresponding to the operation position information, wherein the vector graphic is the conference annotation content indicated by the conference annotation operation.
17. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-15.
18. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-15.
CN202011585513.5A 2020-12-28 2020-12-28 Labeling method and device and electronic equipment Active CN112698759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011585513.5A CN112698759B (en) 2020-12-28 2020-12-28 Labeling method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112698759A (en) 2021-04-23
CN112698759B CN112698759B (en) 2023-04-21

Family

ID=75511350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011585513.5A Active CN112698759B (en) 2020-12-28 2020-12-28 Labeling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112698759B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489910A (en) * 2022-02-10 2022-05-13 北京字跳网络技术有限公司 Video conference data display method, device, equipment and medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070083597A1 (en) * 1996-03-26 2007-04-12 Pixion, Inc. Presenting images in a conference system
CN101370115A (en) * 2008-10-20 2009-02-18 深圳华为通信技术有限公司 Conference terminal, conference server, conference system and data processing method
CN101572794A (en) * 2008-10-20 2009-11-04 深圳华为通信技术有限公司 Conference terminal, conference server, conference system and data processing method
CN102291562A (en) * 2008-10-20 2011-12-21 华为终端有限公司 Conference terminal, conference server, conference system and data processing method
CN104166514A (en) * 2013-05-17 2014-11-26 株式会社理光 Information processing apparatus, information processing system, and information display method
CN109348161A (en) * 2018-09-21 2019-02-15 联想(北京)有限公司 Show markup information method and electronic equipment
CN110727361A (en) * 2019-09-30 2020-01-24 厦门亿联网络技术股份有限公司 Information interaction method, interaction system and application
CN111427528A (en) * 2020-03-20 2020-07-17 北京字节跳动网络技术有限公司 Display method and device and electronic equipment
CN111459438A (en) * 2020-04-07 2020-07-28 苗圣全 System, method, terminal and server for synchronizing drawing content with multiple terminals


Similar Documents

Publication Publication Date Title
US10771933B1 (en) Simplified message grouping and display
US10382719B2 (en) Method and apparatus for sharing information during video call
CN111756917B (en) Information interaction method, electronic device and computer readable medium
CN112398727B (en) Information processing method, device, terminal and storage medium
EP4262214A1 (en) Screen projection method and apparatus, and electronic device and storage medium
US11968427B2 (en) Video message generation method and apparatus, electronic device, and storage medium
WO2021012952A1 (en) Message processing method, device and electronic equipment
CN113591439B (en) Information interaction method and device, electronic equipment and storage medium
CN111162993B (en) Information fusion method and device
CN112311656B (en) Message aggregation and display method and device, electronic equipment and computer readable medium
CN111597467A (en) Display method and device and electronic equipment
CN111427528A (en) Display method and device and electronic equipment
CN112035030A (en) Information display method and device and electronic equipment
CN114064593B (en) Document sharing method, device, equipment and medium
CN115987934A (en) Information processing method and device and electronic equipment
CN111596995A (en) Display method and device and electronic equipment
CN112698759B (en) Labeling method and device and electronic equipment
CN111385599B (en) Video processing method and device
CN114417782A (en) Display method and device and electronic equipment
CN111597414B (en) Display method and device and electronic equipment
CN112307394A (en) Information display method and device and electronic equipment
CN111696214A (en) House display method and device and electronic equipment
CN112306976A (en) Information processing method and device and electronic equipment
US20230410394A1 (en) Image display method and apparatus, device, and medium
CN115314456B (en) Interaction method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant