CN112698759B - Labeling method and device and electronic equipment


Info

Publication number
CN112698759B
CN112698759B (application CN202011585513.5A)
Authority
CN
China
Prior art keywords
conference
annotation
position information
participant
labeling
Prior art date
Legal status: Active
Application number
CN202011585513.5A
Other languages
Chinese (zh)
Other versions
CN112698759A
Inventor
夏正冬 (Xia Zhengdong)
刘王胜 (Liu Wangsheng)
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202011585513.5A
Publication of CN112698759A
Application granted
Publication of CN112698759B
Status: Active
Anticipated expiration


Abstract

The embodiments of the invention disclose an annotation method, an annotation apparatus, and an electronic device. One embodiment of the method comprises: acquiring operation position information of a conference annotation operation, where the conference annotation operation is performed on a shared content interface by a first participant in a multimedia conference; and displaying a vector graphic corresponding to the operation position information, where the vector graphic is the conference annotation content indicated by the conference annotation operation. A new way of annotating in a conference can thus be provided.

Description

Labeling method and device and electronic equipment
Technical Field
The disclosure relates to the field of internet technology, and in particular to an annotation method, an annotation apparatus, and an electronic device.
Background
With the development of the internet, users make use of more and more functions of terminal devices, making work and life more convenient. For example, a user may initiate an online multimedia conference with other users through a terminal device. Through an online multimedia conference, users can interact remotely and hold a meeting without having to gather in one place. Multimedia conferences thus largely avoid the location constraints of traditional face-to-face meetings.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an annotation method, including: acquiring operation position information of a conference annotation operation, where the conference annotation operation is performed on a shared content interface by a first participant in a multimedia conference; and displaying a vector graphic corresponding to the operation position information, where the vector graphic is the conference annotation content indicated by the conference annotation operation.
In a second aspect, an embodiment of the present disclosure provides an annotation apparatus, including: an acquisition unit configured to acquire operation position information of a conference annotation operation, where the conference annotation operation is performed on a shared content interface by a first participant in a multimedia conference; and a display unit configured to display a vector graphic corresponding to the operation position information, where the vector graphic is the conference annotation content indicated by the conference annotation operation.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and storage means storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the annotation method described in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, the program, when executed by a processor, implementing the steps of the annotation method described in the first aspect.
According to the annotation method, annotation apparatus, and electronic device provided by the embodiments of the present disclosure, a vector graphic is displayed based on the operation position information of a conference annotation operation, and the displayed vector graphic indicates the conference annotation content. Therefore, when the conference annotation content is enlarged on the screen, its sharpness is preserved, achieving lossless zooming.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of one embodiment of an annotation method according to the present disclosure;
FIG. 2 is a schematic structural diagram of one embodiment of an annotation apparatus according to the present disclosure;
FIG. 3 is an exemplary system architecture in which the annotation method of one embodiment of the present disclosure may be applied;
FIG. 4 is a schematic diagram of the basic structure of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality of" in this disclosure are illustrative rather than restrictive; those of ordinary skill in the art will appreciate that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Referring to FIG. 1, the flow of one embodiment of an annotation method according to the present disclosure is shown. The annotation method is applied to a terminal device. The annotation method shown in fig. 1 includes the following steps:
Step 101, acquiring operation position information of a conference annotation operation.
In this embodiment, the executing entity of the annotation method (e.g., a terminal device) may acquire the operation position information of the conference annotation operation.
In this embodiment, the conference annotation operation may be an annotation operation performed by a participant of the multimedia conference, and the annotation operation may be performed on a shared content interface.
In this embodiment, the multimedia conference may be an online conference performed by using a multimedia system. The multimedia conference includes at least one of the following: audio conferences and audio-video conferences. It is understood that an audio-video conference refers to the interaction of both audio and video during the conference. In some embodiments, the multimedia conference may be an audio video conference.
In this embodiment, the application used to conduct the multimedia conference may be any application whose server side can provide the conference service of the multimedia conference; such applications are of various types, which are not limited herein. For example, the application may be an instant video conferencing application, a communication application, a video playing application, a mail application, and the like.
In this embodiment, participating users of the multimedia conference may join the multimedia conference. After a multimedia conference starts, there are typically at least two participating users. It will be appreciated that a conference may have only one joined user when it starts; such a situation usually ends quickly, either because new users join and bring the number of participants to at least two, or because, with no new user joining, the user who has already joined ends the conference. A scene with only one participating user is generally short, and the need to share content with other participating users generally does not arise in it; the embodiments of the present application can therefore generally be applied to application scenarios in which the multimedia conference has at least two users.
In this embodiment, the user logged in locally on the executing entity may be referred to as the first participant. It will be appreciated that the "first" in "first participant" is used for ease of description and is not to be construed as limiting the order of annotation, the order of joining the conference, or the like.
In this embodiment, the shared content interface may be an interface for displaying shared content. The shared content may be content shared with at least some of the users participating in the multimedia conference. The specific form of the shared content may vary and is not limited herein. As an example, the shared content may include, but is not limited to, at least one of: a shared screen and a shared file. Shared files may include, but are not limited to, shared documents, shared images, and the like.
In this embodiment, the first participant may perform the annotation operation on the shared content interface. In general, during a multimedia conference, a participating user may want to point other users to a certain part of the shared content displayed on the shared content interface; in this scenario, the participating user may perform a conference annotation operation.
In some application scenarios, the shared content interface may present an annotation toolbar showing annotation tool items; for example, the annotation tool items may be arrows, circles, and so on. The user may select an annotation tool item and then operate in the shared content interface. As an example, through the annotation operation the user may indicate content to other users, or may annotate for his own benefit as a reminder about certain content.
In some application scenarios, the conference annotation operation may be all or part of a single annotation action by the user. A single annotation action may refer to the process from the user starting an annotation operation to releasing it. As an example, a user clicks a horizontal-line control, then draws a horizontal line in the interface by touching the screen with a finger, and after drawing a section of the line the finger leaves the screen. From the finger starting to touch the screen to the finger leaving the screen can be understood as one annotation action; the horizontal line drawn from the start of the touch to the finger leaving the screen can be understood as one stroke of annotation.
For example, a piece of annotation content may take the user a long time to produce. To display the annotation content in real time, smaller data packets collected in real time (each smaller than a complete piece of annotation content) may be used as the transmission unit as the user operation proceeds; in this case, the annotation operation indicated by such a smaller data packet may be part of one complete annotation action by the user.
Step 102, displaying the vector graphic corresponding to the operation position information.
In this embodiment, the executing entity may display a vector graphic corresponding to the operation position information.
Here, the vector graphic may be the conference annotation content indicated by the conference annotation operation.
In this embodiment, a vector graphic may be an object composed of straight lines and curves. When filling with color, coloring can follow the contour edges of the curves. The color of a vector graphic is independent of resolution: when the graphic is scaled, it retains its original sharpness and curvature, and neither its color nor its appearance shifts or deforms.
In this embodiment, the vector graphic is generated based on the operation position information; reference may be made to the technical principles of converting a bitmap into a vector graphic, which are not described here.
It should be noted that in the annotation method provided by this embodiment, the vector graphic is displayed based on the operation position information of the conference annotation operation, and the displayed vector graphic indicates the conference annotation content. Therefore, when the conference annotation content is enlarged on the screen, its sharpness is preserved, achieving lossless zooming.
In some embodiments, the above method further comprises: generating drawing parameters based on the operation position information.
Here, the executing entity may generate the drawing parameters based on the operation position information.
Here, the drawing parameters may indicate the vector graphic. In other words, the drawing parameters may be used to draw the conference annotation content indicated by the conference annotation operation.
Optionally, the drawing parameters may be used to draw the annotation content locally on the executing entity, or may be sent by the executing entity to other electronic devices, which then draw the conference annotation content.
In this embodiment, the drawing parameters are generated based on the operation position information; reference may be made to the technical principles of converting a bitmap into a vector graphic, which are not described here.
In the annotation method provided by this embodiment, drawing parameters indicating a vector graphic are generated based on the operation position information of the conference annotation operation, so that the annotation content drawn from the drawing parameters is a vector graphic. In this way a more accurate vector graphic can be generated, and, as the data indicating the vector graphic, the drawing parameters are convenient to transmit. Therefore, when the conference annotation content is enlarged on screen, its sharpness can be preserved; moreover, a data basis is provided for transmitting conference annotation content.
In some embodiments, the drawing parameters include graphic positioning information, where the graphic positioning information indicates the location of the conference annotation content in the shared content interface.
Here, the data indicating the vector graphic in the drawing parameters may be independent of screen position. In addition, the drawing parameters may include data for locating the vector graphic on the shared content interface, i.e., the graphic positioning information. One possible shape of such drawing parameters is sketched below.
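As an illustrative aid only (not part of the original disclosure), the following sketch shows one possible structure for such drawing parameters; the class and field names are hypothetical assumptions.

```java
// A minimal sketch of screen-independent drawing parameters. All names here
// (DrawingParams, anchorX, keyPoints, ...) are hypothetical illustrations,
// not identifiers taken from the patent itself.
public class DrawingParams {
    // Graphic positioning information: where the annotation sits in the
    // shared content interface, expressed in reference-resolution
    // coordinates rather than local screen pixels.
    public float anchorX;
    public float anchorY;

    // Key point position information describing the vector graphic,
    // e.g. the control points of a fitted Bezier curve.
    public float[] keyPoints;

    // Color data: the color of the annotation content.
    public int colorArgb;

    // Identifier of the annotation unit ("stroke") this packet belongs to,
    // so partial packets sent in real time can be grouped by the receiver.
    public long annotationUnitId;
}
```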
In some embodiments, the above method may further comprise: sending the drawing parameters including the graphic positioning information to a second participant.
Here, the second participant may be a participant in the multimedia conference other than the first participant. The second participant may be a user having access rights to the shared content.
Here, the drawing parameters sent to the second participant may include the graphic positioning information. In this way, the position at which the conference annotation content is displayed for the second participant is consistent with the position at which it is displayed for the first participant. For example, if the first participant's conference annotation targets sentence A, the conference annotation displayed to the second participant also targets sentence A. This achieves annotation consistency when annotating shared content during the multimedia conference and improves the accuracy of interaction in the conference.
In some embodiments, the above method may further comprise: in response to the first participant being the sharing party of the shared content, separately sending the drawing parameters and the shared content in the shared content interface to a second participant.
Here, the first participant may or may not be the sharing party of the shared content. If the first participant is the sharing party, the first participant needs to send the shared content to the second participant so that the shared content can be presented in the client used by the second participant.
Here, the second participant may display the shared content, and the conference annotation content may be displayed on the shared content according to the drawing parameters.
Here, the second participant may receive the drawing parameters and the shared content separately, and the shared content may include video frames. It will be appreciated that the second participant receives the drawing parameters and the shared content separately, and renders the shared content and the conference annotation content separately.
Here, "the second participant" may refer to the client on which the second participant is logged in.
It should be noted that in some related technologies, a bitmap image indicated by the operation position information (i.e., the annotation content) may be merged into a video frame of the shared content to generate a new composite video frame, and the composite video frame is then transmitted to the other participating users. The resolution of the composite video frame is fixed and cannot be enlarged losslessly, so the image is unclear when zoomed in; and because the annotation content is merged into the video frames of the shared content, the annotation content is likewise limited by the resolution of the video frames and is also unclear after enlargement.
In contrast, in the annotation method provided by the embodiments of the present application, the drawing parameters and the shared content are transmitted separately, and the drawing parameters indicate vector graphics. The conference annotation content drawn by the second participant is therefore not limited by the resolution of the transmitted shared content; the display of the conference annotation content stays sharp when enlarged, achieving lossless zooming of the conference annotation content.
In some embodiments, the drawing parameters may include key point position information.
In some embodiments, generating the drawing parameters based on the operation position information may include: fitting the operation position information to a Bezier curve, and generating key point position information indicating the Bezier curve.
In some embodiments, the drawing parameters may include color data. The color data may indicate the color of the annotation content.
Here, a Bezier curve (also written Bézier curve) is a mathematical curve that can be applied to two-dimensional graphic drawing. A Bezier curve can be described by a line segment and nodes, where a node is a draggable pivot and the segment can be intuitively understood as a stretchable rubber band.
Here, fitting the operation position information to a Bezier curve means treating the curve indicated by the operation position information as a Bezier curve; this amounts to specifying the shape of the Bezier curve, after which the positions of the key points indicating that shape are determined, yielding the key point position information indicating the Bezier curve.
Generating key point position information indicating the Bezier curve reduces the amount of data; this reduces the volume of data transmitted when the drawing information is sent, improves data transmission efficiency, and improves the real-time display of annotation data in the multimedia conference. One possible fitting scheme is sketched below.
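The patent does not spell out the fitting algorithm. As one plausible scheme (an assumption for illustration, not the disclosure's actual method), consecutive touch samples can be treated as Catmull-Rom control points and each segment converted to a cubic Bezier, whose four control points then serve as the key point position information:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of one possible fitting scheme (assumed, not specified by the
// patent): treat consecutive touch samples as Catmull-Rom control points
// and convert each segment to a cubic Bezier; the four control points per
// segment become the "key point position information".
public final class BezierFitter {
    public static final class Pt {
        public final float x, y;
        public Pt(float x, float y) { this.x = x; this.y = y; }
    }

    /** Returns cubic Bezier control points [b0, b1, b2, b3] per segment. */
    public static List<Pt[]> fit(List<Pt> samples) {
        List<Pt[]> segments = new ArrayList<>();
        for (int i = 0; i + 1 < samples.size(); i++) {
            // Clamp the neighbour indices at the ends of the stroke.
            Pt p0 = samples.get(Math.max(i - 1, 0));
            Pt p1 = samples.get(i);
            Pt p2 = samples.get(i + 1);
            Pt p3 = samples.get(Math.min(i + 2, samples.size() - 1));
            // Standard Catmull-Rom to cubic Bezier conversion.
            Pt b1 = new Pt(p1.x + (p2.x - p0.x) / 6f, p1.y + (p2.y - p0.y) / 6f);
            Pt b2 = new Pt(p2.x - (p3.x - p1.x) / 6f, p2.y - (p3.y - p1.y) / 6f);
            segments.add(new Pt[] { p1, b1, b2, p2 });
        }
        return segments;
    }
}
```

Because each segment is reduced to four control points regardless of how many raw samples it spans once the input is downsampled, this kind of representation carries far less data than the raw touch trace.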
In some embodiments, the executing entity may display the conference annotation content according to the drawing parameters.
Conference annotation content displayed according to the drawing parameters stays sharp when enlarged and can be zoomed losslessly.
In some embodiments, the conference annotation content is displayed by a display step, where the display step includes: if the participant device is equipped with graphics acceleration hardware, processing the drawing parameters using an open graphics library to display the conference annotation content.
Here, participant devices may include the terminal devices used by users to participate in the multimedia conference. It will be appreciated that the terminal devices participating in the multimedia conference may be configured differently.
Here, when acquiring and using the drawing parameters, a participant device may adopt different processing methods to display the conference annotation content, depending on whether it is itself equipped with a graphics accelerator.
Here, the above display step may be performed by the first participant or by the second participant.
Graphics acceleration hardware generally takes the form of a graphics accelerator card, a display adapter that performs graphics operations in dedicated integrated chips. Because a graphics accelerator card computes graphics faster than the CPU, electronic devices equipped with one perform well at image processing.
Here, the Open Graphics Library (OpenGL) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics.
As an example, when drawing with the open graphics library, a background drawing thread (which draws the stored drawing parameters to a GLSurfaceView) converts the drawing parameters into vertex data for a set of triangles (for example, a straight line is converted into two right triangles, and a circle into a number of isosceles triangles), writes the vertex data (position, color, and other information) into an OpenGL drawing buffer, and finally calls the OpenGL API to draw all the triangles. The tessellation step for a straight line is sketched below.
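As an illustration of that tessellation step (a hedged reconstruction, not the patent's actual code), the following sketch expands one line segment into the six vertices of two right triangles; the resulting array would then be written into an OpenGL vertex buffer and drawn with GL_TRIANGLES:

```java
// Sketch: expand a line segment of a given width into two right triangles
// (six x,y vertices) suitable for an OpenGL vertex buffer. A hypothetical
// reconstruction of the tessellation step, not the patent's actual code.
public static float[] lineToTriangles(float x1, float y1,
                                      float x2, float y2, float width) {
    float dx = x2 - x1, dy = y2 - y1;
    float len = (float) Math.hypot(dx, dy);
    if (len == 0f) return new float[0]; // degenerate segment, nothing to draw
    // Unit normal perpendicular to the segment, scaled to half the width.
    float nx = -dy / len * width / 2f;
    float ny =  dx / len * width / 2f;
    // Four corners of the thick line, split into two triangles.
    return new float[] {
        x1 + nx, y1 + ny,  x1 - nx, y1 - ny,  x2 + nx, y2 + ny, // triangle 1
        x2 + nx, y2 + ny,  x1 - nx, y1 - ny,  x2 - nx, y2 - ny, // triangle 2
    };
}
```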
Alternatively, if the participant device is not equipped with graphics acceleration hardware, a canvas control (Canvas) may be used for rendering. With Canvas rendering, the background drawing thread may convert the drawing parameters into Canvas drawing operations, such as drawLine, drawOval, and the like, and then draw the target graphic directly on the Canvas, as in the sketch below.
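A corresponding sketch of the Canvas fallback, using the Android Canvas operations named above; the GraphicType enum and the dispatch structure are assumptions for illustration:

```java
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.RectF;

// Sketch of the software-rendering fallback: map drawing parameters onto
// Canvas operations such as drawLine and drawOval. The GraphicType enum and
// the dispatch structure are illustrative assumptions.
enum GraphicType { LINE, OVAL }

final class CanvasRenderer {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    CanvasRenderer() {
        paint.setStyle(Paint.Style.STROKE); // annotations are outlines, not fills
        paint.setStrokeWidth(4f);
    }

    void draw(Canvas canvas, GraphicType type, float[] pts, int color) {
        paint.setColor(color);
        switch (type) {
            case LINE:
                // pts holds start and end points: x1, y1, x2, y2.
                canvas.drawLine(pts[0], pts[1], pts[2], pts[3], paint);
                break;
            case OVAL:
                // pts holds the bounding box: left, top, right, bottom.
                canvas.drawOval(new RectF(pts[0], pts[1], pts[2], pts[3]), paint);
                break;
        }
    }
}
```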
In some embodiments, step 102 may include: generating a target annotation behavior unit according to the operation position information of the conference annotation operation; adding the target annotation behavior unit to an annotation behavior queue; and displaying the vector graphic corresponding to the operation position information based on the annotation behavior queue.
Here, an annotation behavior unit may represent the drawing parameters corresponding to a conference annotation operation from the start of an annotation action to the release of that action.
Here, the annotation behavior queue may be generated from locally generated annotation behavior units and the timestamps of received annotation behavior units.
Here, if the user touches the screen to start drawing (i.e., starts an annotation action), the executing entity may detect the touch events and generate a series of raw operation points from them. As the annotation action proceeds, the raw data it generates may be converted piecewise into drawing parameters in real time; these are then drawn locally (tagged with an action identifier indicating which conference annotation unit they belong to) and sent to the second participant for display.
Here, before the user stops touching the screen (i.e., releases the annotation action), the continuing annotation action may be grouped into a single conference annotation unit. A conference annotation unit can be intuitively understood as one stroke: for example, after the user selects the arrow tool item, the action performed from the start of touching the screen to the stop of touching the screen can be regarded as the user drawing an arrow, completing one stroke of annotation. That stroke of annotation can serve as one conference annotation unit, and the drawing parameters corresponding to the conference annotation unit can be understood as an annotation behavior unit (i.e., a data set).
Here, the annotation behavior units in the annotation behavior unit queue are ordered by timestamp.
Optionally, the timestamp may indicate the time the annotation action started, or the time it ended.
Optionally, the rank of each annotation behavior unit in the queue may be determined by the time the user touched the screen to start drawing (i.e., started the annotation action).
Optionally, the drawing parameters within an annotation behavior unit may carry timestamps, and the rank of each unit in the queue may then be determined from those timestamps.
Here, the locally generated annotation behavior units and the annotation behavior units sent by the second participant can indicate the annotation behavior of each participant in the multimedia conference. Annotation behavior units arranged in time order indicate the annotation behaviors in the multimedia conference in an orderly way.
Maintaining the annotation behavior unit queue makes it convenient for a user to modify annotation content in units of annotation behavior; a sketch of such a queue follows.
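The queue logic might look like the following sketch, which merges locally generated units with units received from peers in timestamp order and supports withdrawing the unit at the tail (used by the withdrawal operation discussed next); all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the annotation behavior unit queue: units from the local user
// and from remote participants are kept in timestamp order, and the
// withdrawal operation removes the unit at the tail. Names are hypothetical.
final class AnnotationQueue {
    static final class AnnotationUnit {
        final long timestampMs; // e.g. when the annotation action started
        final Object params;    // drawing parameters for one stroke
        AnnotationUnit(long timestampMs, Object params) {
            this.timestampMs = timestampMs;
            this.params = params;
        }
    }

    private final List<AnnotationUnit> units = new ArrayList<>();

    /** Insert keeping timestamp order; units may arrive slightly out of order. */
    synchronized void add(AnnotationUnit unit) {
        int i = units.size();
        while (i > 0 && units.get(i - 1).timestampMs > unit.timestampMs) i--;
        units.add(i, unit);
    }

    /** Withdraw the most recent annotation: remove the unit at the tail. */
    synchronized AnnotationUnit withdrawLast() {
        return units.isEmpty() ? null : units.remove(units.size() - 1);
    }
}
```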
In some embodiments, the method further comprises: and in response to detecting the predefined annotation withdraw operation, deleting the annotation content indicated by the annotation behavior unit positioned at the tail of the queue in the annotation behavior unit queue on the shared content interface.
Here, the annotation pullback tool item may be provided in an annotation toolbar. Then, in response to the user operation on the annotation withdraw tool item, deleting the conference annotation content, namely withdrawing the annotation content. The refund of the annotation content can be understood as stopping the display of the annotation content.
Here, the retraction may be to retract the last annotation content, i.e. the annotation content indicated by the annotation action unit located at the end of the queue in the annotation action unit queue.
The method and the system have the advantages that the marking content is managed according to the marking behavior unit queue, a user can withdraw marking behaviors according to a marking unit, the user can conveniently change the made marking, and interaction efficiency in the multimedia conference process is improved.
In some embodiments, the method further comprises: and sending a withdrawal operation request generated based on the withdrawal operation to a second participant, and deleting (also called withdrawing) the annotation content indicated by the withdrawal operation request by the second participant.
Here, the first participant may perform a annotation retraction operation, and then the execution body may generate an annotation retraction request according to the annotation retraction operation. Then, the executing body may send the annotation retraction request to the second participant, and the second participant may retract the annotation corresponding to the annotation retraction request.
Optionally, the annotation withdrawal request may include the annotation to be withdrawn indication information. The to-be-withdrawn marking indication information can be used for indicating to-be-withdrawn marking content. The second participant user receives the annotation withdraw request and can withdraw the annotation content indicated by the indication information to be annotated in the annotation withdraw request.
Optionally, the annotation retraction request is retracted from the tail of the queue according to a predetermined convention. And when the second participant user receives the annotation withdrawal request once, the annotation content indicated by the annotation behavior unit at the tail of the queue in the annotation behavior unit queue can be withdrawn.
By retracting the operation request, the display of the labeling content can be synchronized among the participants in time, so that the same labeling content among the participants is ensured to be displayed as much as possible.
In some embodiments, generating the drawing parameters based on the operation position information may include: generating the drawing parameters according to a reference resolution of the multimedia conference. In other words, the resolution on which the drawing parameters are based is the reference resolution of the multimedia conference.
Here, the reference resolution of the multimedia conference may be pre-specified. The specified rule may be set according to actual situations, and is not limited herein.
As an example, the reference resolution may be 1920 x 1080, or 3840 x 2160, or the like.
In some application scenarios, if the screen resolution of the executing entity is not the pre-specified reference resolution, the drawing parameters at the reference resolution may be obtained by normalizing the drawing parameters and converting them to the reference resolution.
It should be noted that basing the drawing parameters on the reference resolution means that, when the drawing parameters are transmitted to the second participant, the second participant can determine in advance how to convert from the reference resolution to its own display resolution. The second participant can therefore complete the conversion quickly after receiving the drawing parameters, achieving fast display.
In some embodiments, step 101 may include: converting the raw operation position information of the conference annotation operation into operation position information at the reference resolution, according to the local resolution and the reference resolution.
In some application scenarios, the executing entity may convert the raw operation position information to the reference resolution by normalization, and then determine the drawing parameters at the reference resolution from the converted operation position information, as in the sketch below.
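A minimal sketch of this conversion, assuming a simple proportional mapping (the patent does not fix the exact formula):

```java
// Sketch: map raw touch coordinates at the local screen resolution to
// operation position information at the conference's reference resolution.
// A simple proportional mapping is assumed for illustration.
public static float[] toReferenceResolution(float x, float y,
                                            int localW, int localH,
                                            int refW, int refH) {
    // Normalize against the local resolution, then scale to the
    // reference resolution (e.g. 1920 x 1080).
    return new float[] { x / localW * refW, y / localH * refH };
}

/** Receiver side: map reference-resolution coordinates back to local pixels. */
public static float[] toLocalResolution(float x, float y,
                                        int refW, int refH,
                                        int localW, int localH) {
    return new float[] { x / refW * localW, y / refH * localH };
}
```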
In some embodiments, the reference resolution is the screen resolution of the sharing-initiating user.
In some embodiments, the method further comprises: in response to the sharing-initiating user starting sharing, transmitting the reference resolution to the participating users of the multimedia conference.
Here, taking the screen resolution of the sharing initiator as the reference resolution saves the sharing initiator the conversion from other resolutions to the reference resolution and speeds up sending the drawing parameters. In general, the sharing-initiating user acts as the main presenter of the multimedia conference, and the main presenter usually explains and annotates the most, i.e., the most annotation content originates from the sharing initiator's end. Avoiding resolution conversion at the sharing initiator's end therefore reduces its computation and communication load, helps annotation proceed smoothly, and gets the data indicating the annotation content to the other participants' ends as soon as possible.
In some embodiments, the reference resolution is determined from the screen resolution of a participant whose participant device is a personal computer.
In some application scenarios, the annotation layer size on a personal computer (PC) is inconsistent with the annotation layer size under a mobile operating system. If a PC is among the participant devices, the screen resolution of the PC can be used as the reference resolution.
As an example, in response to detecting that a multimedia conference has a PC as a participant device, the resolution of the PC (e.g., 1920 × 1080) may be communicated to each participant, and the drawing parameters transmitted between participants carry position information at that size. For example, if the resolution of the annotation layer on a mobile device running operating system A is 3840 × 2160, the operation position information can be converted to coordinates based on the 1920 × 1080 resolution, and the drawing parameters are likewise based on 1920 × 1080 coordinates. When the mobile device running operating system A draws, it must convert all data at 1920 × 1080 into resolution coordinates under its own operating system before drawing.
It should be noted that using the PC's screen resolution as the reference resolution reduces the probability that the multimedia conference needs resolution conversion at all. In other words, if no PC is present as a participant device, the multimedia conference requires no screen-resolution conversion. In some application scenarios, conferences with a PC as a participant may be relatively rare, so the computation for screen-resolution conversion can be saved with high probability; the computation load of the multimedia conference is thereby reduced as much as possible, helping the conference proceed smoothly.
In some embodiments, the step of determining the reference resolution may include: determining, from the participants whose participant devices are personal computers, the participant with the earliest joining time; and determining the screen resolution of that participant as the reference resolution.
Optionally, the participants using a personal computer as the participant device may be identified among the conference's existing participants, and from these the participant with the earliest joining time determined. The earliest-joining PC participant can serve as the participant from whom the reference resolution is determined.
Alternatively, a queue of participants whose participant device is a personal computer can be maintained during the multimedia conference, generated in order of joining time. If a participant leaves, it is deleted from the queue; a participant who rejoins after leaving is appended at the tail. This guarantees that the participant at the head of the queue is the PC participant who joined the multimedia conference earliest, as sketched below.
Taking the earliest-joining PC participant as the participant that determines the reference resolution means that, whenever the multimedia conference has a PC participant at all, the screen resolution of a PC is used as the reference resolution regardless of how participants come and go. The computation load of the multimedia conference can thus be reduced as much as possible, helping the conference proceed smoothly.
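Such a join-ordered queue might look like the following sketch; all names are hypothetical illustrations:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: track PC participants in join order so the reference resolution
// can follow the earliest-joined PC. Names are hypothetical.
final class PcParticipantQueue {
    static final class Participant {
        final String id;
        final int screenW, screenH;
        Participant(String id, int screenW, int screenH) {
            this.id = id; this.screenW = screenW; this.screenH = screenH;
        }
    }

    private final Deque<Participant> pcQueue = new ArrayDeque<>();

    void onPcJoined(Participant p) { pcQueue.addLast(p); }   // append at tail

    void onLeft(String id) { pcQueue.removeIf(p -> p.id.equals(id)); }

    /** Reference resolution follows the head: the earliest-joined PC. */
    int[] referenceResolution() {
        Participant head = pcQueue.peekFirst();
        return head == null ? null : new int[] { head.screenW, head.screenH };
    }
}
```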
In some embodiments, the method further comprises: displaying at least one annotation graphic control, where each of the annotation graphic controls corresponds one-to-one with a graphic generation method; and in response to detecting a trigger operation on an annotation graphic control, invoking the graphic generation method corresponding to the triggered annotation graphic control.
As an example, the annotation graphic controls can be used to draw annotation content in various annotation styles. For example, the displayed annotation graphic controls may implement, but are not limited to, at least one of: drawing arrows, drawing circles, drawing straight lines, drawing rectangles, and so on.
Here, a graphic generation method may be used to draw a specific graphic. For example, triggering the arrow graphic control may invoke the method for drawing an arrow, which draws an arrow according to the start position and end position of the user operation; this arrow can serve as annotation content.
It should be noted that setting a one-to-one correspondence between annotation graphic controls and graphic generation methods means that adding a new graphic control only requires adding the corresponding graphic generation method and establishing the correspondence, so the variety of annotation graphics can be expanded rapidly. This reduces both the workload and the difficulty of extending annotation graphics. A sketch of such a registry follows.
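One way to realize this correspondence (an assumed structure, not the disclosure's actual code) is a registry mapping tool identifiers to generator callbacks; adding a tool then amounts to one register call:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the one-to-one mapping between annotation graphic controls and
// graphic generation methods: adding a new tool only requires registering
// one more generator. Interface and names are illustrative assumptions.
interface GraphicGenerator {
    /** Produce drawing parameters from the operation's start and end points. */
    Object generate(float startX, float startY, float endX, float endY);
}

final class AnnotationToolRegistry {
    private final Map<String, GraphicGenerator> generators = new HashMap<>();

    void register(String toolId, GraphicGenerator generator) {
        generators.put(toolId, generator);
    }

    /** Called when the control with the given tool id is triggered. */
    Object onToolTriggered(String toolId, float sx, float sy, float ex, float ey) {
        GraphicGenerator g = generators.get(toolId);
        return g == null ? null : g.generate(sx, sy, ex, ey);
    }
}
```

Since GraphicGenerator has a single abstract method, a new tool can be registered with a lambda, e.g. registry.register("arrow", (sx, sy, ex, ey) -> buildArrowParams(sx, sy, ex, ey)), where buildArrowParams is a hypothetical helper.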
In some application scenarios, the step of processing the operation position information into drawing parameters, the step of transmitting the drawing parameters to the second participant, the step of determining the annotation behavior unit queue, and the like may be packaged into a software development kit (Software Development Kit, SDK). In this way, the annotation functions are relatively decoupled from the client conducting the multimedia conference, annotation can proceed conveniently and quickly, and the real-time performance of annotation processing and display is ensured.
With further reference to fig. 2, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an annotation apparatus. This apparatus embodiment corresponds to the method embodiment shown in fig. 1, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 2, the annotation apparatus of this embodiment includes an acquisition unit 201 and a display unit 202. The acquisition unit is configured to acquire operation position information of a conference annotation operation, where the conference annotation operation is performed on a shared content interface by a first participant in a multimedia conference; the display unit is configured to display a vector graphic corresponding to the operation position information, where the vector graphic is the conference annotation content indicated by the conference annotation operation.
In this embodiment, for the specific processing of the acquisition unit 201 and the display unit 202 of the annotation apparatus and the technical effects thereof, reference may be made to the descriptions of step 101 and step 102 in the embodiment corresponding to fig. 1, which are not repeated here.
In some embodiments, the apparatus is further to: and generating a drawing parameter based on the operation position information, wherein the drawing parameter indicates the vector graphics.
In some embodiments, the drawing parameters include graphic positioning information, where the graphic positioning information indicates the location of the conference annotation content in the shared content interface; and the apparatus is further configured to send the drawing parameters including the graphic positioning information to a second participant, where the second participant may be a user in the multimedia conference having access rights to the shared content.
In some embodiments, sending the drawing parameters including the graphic positioning information to the second participant includes: in response to determining that the first participant is the sharing party of the shared content, separately sending the drawing parameters and the shared content in the shared content interface to the second participant, where the second participant displays the shared content and displays the conference annotation content on the shared content according to the drawing parameters.
In some embodiments, the drawing parameters include keypoint location information; and generating drawing parameters based on the operation position information, including: fitting the operating position information to a Bezier curve, and generating key point position information indicative of the Bezier curve.
In some embodiments, the apparatus is further to: and displaying the conference annotation content according to the drawing parameters.
In some embodiments, the meeting annotation is displayed by a displaying step, wherein the displaying step comprises: and if the participant equipment is provided with graphic acceleration hardware, processing the drawing parameters by adopting an open graphic library so as to display the conference annotation content.
In some embodiments, displaying the vector graphic corresponding to the operation position information includes: generating a target annotation behavior unit according to the operation position information of the conference annotation operation, where an annotation behavior unit is the drawing parameters corresponding to a conference annotation operation from the start of an annotation action to the release of that action; adding the target annotation behavior unit to an annotation behavior queue, where the annotation behavior unit queue is generated from locally generated annotation behavior units and the timestamps of received annotation behavior units; and displaying the vector graphic corresponding to the operation position information based on the annotation behavior queue.
In some embodiments, the apparatus is further to: and in response to detecting the predefined annotation withdraw operation, deleting the annotation content indicated by the annotation behavior unit positioned at the tail of the queue in the annotation behavior unit queue on the shared content interface.
In some embodiments, the apparatus is further to: and sending a withdrawal operation request generated based on the withdrawal operation to a second participant user so as to enable the second participant user to delete the annotation content indicated by the withdrawal operation request.
In some embodiments, the generating drawing parameters based on the operation position information includes: and generating drawing parameters according to the reference resolution of the multimedia conference.
In some embodiments, the reference resolution is a screen resolution of the sharing initiating user; the apparatus is further for: and responding to the sharing initiation user to start sharing, and transmitting the reference resolution to a participant user of the multimedia conference.
In some embodiments, the reference resolution is determined from a screen resolution of a participant in a personal computer-based participant device.
In some embodiments, the step of determining the reference resolution comprises: determining, from the participants whose participant devices are personal computers, the participant with the earliest joining time; and determining the screen resolution of that participant as the reference resolution.
In some embodiments, the apparatus is further to: displaying at least one annotation graph control, wherein each annotation graph control in the at least one annotation graph control has a one-to-one correspondence with each graph generation method; and in response to detecting the triggering operation for the labeling graph control, invoking a graph generating method corresponding to the triggered labeling graph control.
Referring to fig. 3, fig. 3 illustrates an exemplary system architecture in which the labeling method of one embodiment of the present disclosure may be applied.
As shown in fig. 3, the system architecture may include terminal devices 301, 302, 303, a network 304, and a server 305. The network 304 is used as a medium to provide communication links between the terminal devices 301, 302, 303 and the server 305. The network 304 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 301, 302, 303 may interact with the server 305 through the network 304 to receive or send messages and the like. Various client applications, such as web browser applications, search applications, and news information applications, may be installed on the terminal devices 301, 302, 303. A client application in a terminal device 301, 302, 303 may receive user instructions and perform corresponding functions according to those instructions, for example adding corresponding information to displayed information as instructed by the user.
The terminal devices 301, 302, 303 may be hardware or software. When the terminal devices 301, 302, 303 are hardware, they may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like. When the terminal devices 301, 302, 303 are software, they can be installed in the electronic devices listed above; they may be implemented as multiple software programs or software modules (e.g., software or software modules for providing distributed services) or as a single software program or software module. No specific limitation is made here.
The server 305 may be a server providing various services, for example receiving information acquisition requests sent by the terminal devices 301, 302, 303, acquiring, according to an information acquisition request, the presentation information corresponding to it in various ways, and sending the relevant data of the presentation information to the terminal devices 301, 302, 303.
It should be noted that, the labeling method provided by the embodiment of the present disclosure may be performed by the terminal device, and accordingly, the labeling apparatus may be set in the terminal devices 301, 302, and 303. In addition, the labeling method provided in the embodiment of the present disclosure may also be executed by the server 305, and accordingly, the labeling device may be disposed in the server 305.
It should be understood that the number of terminal devices, networks and servers in fig. 3 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 4, a schematic diagram of a configuration of an electronic device (e.g., a terminal device or server in fig. 3) suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device may include a processing means (e.g., a central processor, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring operation position information of a conference marking operation, wherein the conference marking operation is performed by a first participant in a multimedia conference on a shared content interface; and generating drawing parameters based on the operation position information, wherein the drawing parameters are used for indicating vector graphics, and the drawing parameters are used for drawing conference annotation contents indicated by the conference annotation operation.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation of the unit itself; for example, the acquisition unit may also be described as "a unit that acquires operation position information".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, solutions formed by substituting the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (17)

1. A method of labeling, comprising:
acquiring operation position information of a conference annotation operation, wherein the conference annotation operation is performed by a first participant in a multimedia conference on a shared content interface;
displaying a vector graph corresponding to the operation position information, wherein the vector graph is the conference annotation content indicated by the conference annotation operation;
generating drawing parameters based on the operation position information, wherein the drawing parameters indicate the vector graph; and
in response to determining that the first participant is a sharing party of shared content, sending the drawing parameters and the shared content in the shared content interface to a second participant respectively, so that the second participant receives the drawing parameters and the shared content separately and renders the shared content and the conference annotation content separately, wherein the shared content includes a shared screen;
wherein the drawing parameters include key point position information; and
the generating drawing parameters based on the operation position information includes:
performing curve fitting on the operation position information, and generating key point position information indicating the curve.
2. The method of claim 1, wherein the drawing parameters include graphic positioning information, wherein the graphic positioning information is used to indicate a location of the conference annotation content in the shared content interface; and
the method further comprises:
sending the drawing parameters comprising the graphic positioning information to the second participant, wherein the second participant is a user with access rights to the shared content in the multimedia conference.
3. The method of claim 1, wherein the second participant displays the shared content, and the conference annotation content is displayed on the shared content according to the drawing parameters.
4. The method of claim 1, wherein curve fitting the operation position information and generating key point position information indicative of the curve comprises:
fitting the operation position information to a Bezier curve, and generating key point position information indicative of the Bezier curve.
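Claim 4 names Bezier fitting but fixes no algorithm. The sketch below is one plausible reading: a least-squares fit of a single cubic Bezier under chord-length parameterization with pinned endpoints. The function name, the four-key-point output, and the reuse of the Point type from the earlier sketch are all assumptions of this illustration.

```typescript
// One plausible curve-fitting step: fit a cubic Bezier P0..P3 to the sampled
// operation positions, keeping the first and last samples as fixed endpoints
// and solving 2x2 normal equations for the two inner control points.
function fitCubicBezier(samples: Point[]): Point[] {
  const n = samples.length;
  if (n < 3) return samples.slice(); // too few samples to fit a curve

  // Chord-length parameter t[i] in [0, 1] for each sample.
  const t: number[] = [0];
  for (let i = 1; i < n; i++) {
    const dx = samples[i].x - samples[i - 1].x;
    const dy = samples[i].y - samples[i - 1].y;
    t.push(t[i - 1] + Math.hypot(dx, dy));
  }
  const total = t[n - 1] || 1;
  for (let i = 0; i < n; i++) t[i] /= total;

  const p0 = samples[0];
  const p3 = samples[n - 1];

  // Accumulate normal equations for the Bernstein bases
  // b1 = 3(1-t)^2 t and b2 = 3(1-t) t^2.
  let a11 = 0, a12 = 0, a22 = 0;
  let c1x = 0, c1y = 0, c2x = 0, c2y = 0;
  for (let i = 0; i < n; i++) {
    const u = 1 - t[i];
    const b0 = u * u * u;
    const b1 = 3 * u * u * t[i];
    const b2 = 3 * u * t[i] * t[i];
    const b3 = t[i] * t[i] * t[i];
    const rx = samples[i].x - b0 * p0.x - b3 * p3.x; // residual after endpoints
    const ry = samples[i].y - b0 * p0.y - b3 * p3.y;
    a11 += b1 * b1; a12 += b1 * b2; a22 += b2 * b2;
    c1x += b1 * rx; c1y += b1 * ry;
    c2x += b2 * rx; c2y += b2 * ry;
  }

  const det = a11 * a22 - a12 * a12;
  if (Math.abs(det) < 1e-12) return [p0, p0, p3, p3]; // degenerate: straight stroke

  const p1 = { x: (a22 * c1x - a12 * c2x) / det, y: (a22 * c1y - a12 * c2y) / det };
  const p2 = { x: (a11 * c2x - a12 * c1x) / det, y: (a11 * c2y - a12 * c1y) / det };
  return [p0, p1, p2, p3]; // the key point position information of claim 1
}
```

A production client would typically split long strokes and fit one segment per span to bound the fitting error, but a single segment keeps the illustration short.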
5. The method according to claim 1, wherein the method further comprises:
displaying the conference annotation content according to the drawing parameters.
6. The method of claim 1, wherein the conference annotation content is displayed by a displaying step, wherein the displaying step comprises:
if a participant device is provided with graphics acceleration hardware, processing the drawing parameters using an open graphics library to display the conference annotation content.
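Claim 6 gates the rendering path on graphics acceleration hardware and an open graphics library. In a browser client the nearest analogue would be WebGL; that choice, and the helper names below, are assumptions of this sketch rather than anything the claim specifies.

```typescript
// Sketch: prefer a GPU-accelerated context when the device exposes one.
function renderAnnotation(canvas: HTMLCanvasElement, keyPoints: Point[]): void {
  const gl = canvas.getContext("webgl");
  if (gl !== null) {
    drawWithGpu(gl, keyPoints); // hardware path: tessellate and submit to the GPU
  } else {
    drawWithCanvas2D(canvas.getContext("2d")!, keyPoints); // software fallback
  }
}

// Software fallback: stroke the fitted cubic Bezier directly.
function drawWithCanvas2D(ctx: CanvasRenderingContext2D, [p0, p1, p2, p3]: Point[]): void {
  ctx.beginPath();
  ctx.moveTo(p0.x, p0.y);
  ctx.bezierCurveTo(p1.x, p1.y, p2.x, p2.y, p3.x, p3.y);
  ctx.stroke();
}

// GPU path left abstract; a real client would flatten the curve into a
// polyline and render it with a line-strip shader.
declare function drawWithGpu(gl: WebGLRenderingContext, keyPoints: Point[]): void;
```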
7. The method according to claim 1, wherein the displaying a vector graph corresponding to the operation position information includes:
generating a target annotation behavior unit according to the operation position information of the conference annotation operation, wherein an annotation behavior unit is the drawing parameters corresponding to one conference annotation operation, from the start of the annotation action to the release of the annotation action;
adding the target annotation behavior unit to an annotation behavior queue, wherein the annotation behavior queue is ordered according to the timestamps of locally generated and received annotation behavior units; and
displaying the vector graph corresponding to the operation position information based on the annotation behavior queue.
8. The method of claim 7, wherein the method further comprises:
in response to detecting a predefined annotation withdrawal operation, deleting, on the shared content interface, the annotation content indicated by the annotation behavior unit located at the tail of the annotation behavior queue.
9. The method of claim 8, wherein the method further comprises:
sending a withdrawal operation request generated based on the annotation withdrawal operation to the second participant, so that the second participant deletes the annotation content indicated by the withdrawal operation request.
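Claims 7 through 9 together describe a timestamp-ordered queue of annotation behavior units with tail-first withdrawal. A minimal sketch follows, reusing DrawingParams from the first sketch; the unit id and the deleteById method are assumptions added so that a withdrawal operation request can name the unit to delete on remote clients.

```typescript
// Illustrative annotation behavior unit; the id field is an assumption.
interface AnnotationUnit {
  id: string;
  params: DrawingParams;
}

class AnnotationQueue {
  private units: AnnotationUnit[] = [];

  // Keep the queue ordered by timestamp regardless of whether the unit was
  // generated locally or received from another participant.
  add(unit: AnnotationUnit): void {
    const i = this.units.findIndex(u => u.params.timestamp > unit.params.timestamp);
    if (i === -1) this.units.push(unit);
    else this.units.splice(i, 0, unit);
  }

  // Annotation withdrawal: remove the unit at the tail of the queue and
  // return it so a withdrawal request can be sent to the second participant.
  withdraw(): AnnotationUnit | undefined {
    return this.units.pop();
  }

  // A remote withdrawal operation request deletes the named unit.
  deleteById(id: string): void {
    this.units = this.units.filter(u => u.id !== id);
  }

  // Redraw everything still in the queue after any change.
  render(draw: (unit: AnnotationUnit) => void): void {
    this.units.forEach(draw);
  }
}
```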
10. The method according to claim 1, wherein the generating drawing parameters based on the operation position information includes:
generating the drawing parameters according to the reference resolution of the multimedia conference.
11. The method of claim 10, wherein the reference resolution is the screen resolution of the sharing initiator; and
the method further comprises:
in response to the sharing initiator starting sharing, transmitting the reference resolution to the participants of the multimedia conference.
12. The method of claim 10, wherein the reference resolution is determined according to the screen resolution of a participant whose participant device is a personal computer.
13. The method of claim 12, wherein the step of determining the reference resolution comprises:
determining, from among the participants whose participant devices are personal computers, the participant that joined the conference earliest; and
determining the screen resolution of the determined participant as the reference resolution.
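Claims 10 through 13 anchor all coordinates to a conference-wide reference resolution so that an annotation lands at the same spot on every screen. The sketch below assumes points are stored in reference space and rescaled per device; Resolution, Participant, and the helper names are illustrative, and Point is reused from the first sketch.

```typescript
interface Resolution { width: number; height: number; }

// Convert a locally captured point into reference-resolution space.
function toReference(p: Point, local: Resolution, ref: Resolution): Point {
  return { x: (p.x * ref.width) / local.width, y: (p.y * ref.height) / local.height };
}

// Convert a reference-space point back to a participant's local screen.
function toLocal(p: Point, local: Resolution, ref: Resolution): Point {
  return { x: (p.x * local.width) / ref.width, y: (p.y * local.height) / ref.height };
}

// Claim 13's selection rule: among participants on personal computers,
// take the screen resolution of the one that joined earliest.
interface Participant { isPersonalComputer: boolean; joinedAt: number; screen: Resolution; }

function pickReferenceResolution(participants: Participant[]): Resolution | undefined {
  return participants
    .filter(p => p.isPersonalComputer)
    .sort((a, b) => a.joinedAt - b.joinedAt)[0]?.screen;
}
```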
14. The method according to claim 1, wherein the method further comprises:
displaying at least one annotation graphic control, wherein the annotation graphic controls are in one-to-one correspondence with graph generation methods; and
in response to detecting a triggering operation on an annotation graphic control, invoking the graph generation method corresponding to the triggered annotation graphic control.
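The one-to-one correspondence of claim 14 between annotation graphic controls and graph generation methods maps naturally onto a lookup table. The control identifiers and generators below are illustrative only, and fitCubicBezier is the hypothetical fitter from the claim 4 sketch.

```typescript
// Each control id selects one graph generation method.
type GraphGenerator = (samples: Point[]) => Point[];

const generators: Record<string, GraphGenerator> = {
  freehand: s => fitCubicBezier(s),        // fitted curve (claims 1 and 4)
  line: s => [s[0], s[s.length - 1]],      // straight segment between endpoints
  rectangle: s => [s[0], s[s.length - 1]], // opposite corners of the box
};

// Triggering a control invokes the corresponding generation method for the
// next conference annotation operation.
function onControlTriggered(controlId: string, samples: Point[]): Point[] {
  const generate = generators[controlId];
  if (!generate) throw new Error(`unknown annotation control: ${controlId}`);
  return generate(samples);
}
```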
15. A labeling device, comprising:
an acquisition unit configured to acquire operation position information of a conference annotation operation, wherein the conference annotation operation is performed by a first participant in a multimedia conference on a shared content interface; and
a display unit configured to display a vector graph corresponding to the operation position information, wherein the vector graph is the conference annotation content indicated by the conference annotation operation;
wherein the labeling device is further configured to:
generate drawing parameters based on the operation position information, wherein the drawing parameters indicate the vector graph; and
in response to determining that the first participant is a sharing party of shared content, send the drawing parameters and the shared content in the shared content interface to a second participant respectively, so that the second participant receives the drawing parameters and the shared content separately and renders the shared content and the conference annotation content separately, wherein the shared content includes a shared screen;
wherein the drawing parameters include key point position information, and the generating of the drawing parameters based on the operation position information includes:
performing curve fitting on the operation position information, and generating key point position information indicating the curve.
16. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-14.
17. A computer readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-14.
CN202011585513.5A 2020-12-28 2020-12-28 Labeling method and device and electronic equipment Active CN112698759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011585513.5A CN112698759B (en) 2020-12-28 2020-12-28 Labeling method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011585513.5A CN112698759B (en) 2020-12-28 2020-12-28 Labeling method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112698759A CN112698759A (en) 2021-04-23
CN112698759B true CN112698759B (en) 2023-04-21

Family

ID=75511350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011585513.5A Active CN112698759B (en) 2020-12-28 2020-12-28 Labeling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112698759B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489910A * 2022-02-10 2022-05-13 Beijing Zitiao Network Technology Co Ltd Video conference data display method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110727361A (en) * 2019-09-30 2020-01-24 厦门亿联网络技术股份有限公司 Information interaction method, interaction system and application
CN111427528A (en) * 2020-03-20 2020-07-17 北京字节跳动网络技术有限公司 Display method and device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6343313B1 (en) * 1996-03-26 2002-01-29 Pixion, Inc. Computer conferencing system with real-time multipoint, multi-speed, multi-stream scalability
CN101370115A (en) * 2008-10-20 2009-02-18 深圳华为通信技术有限公司 Conference terminal, conference server, conference system and data processing method
JP6163863B2 (en) * 2013-05-17 2017-07-19 株式会社リコー Information processing apparatus, program, information processing system, and information display method
CN109348161B (en) * 2018-09-21 2021-05-18 联想(北京)有限公司 Method for displaying annotation information and electronic equipment
CN111459438A (en) * 2020-04-07 2020-07-28 苗圣全 System, method, terminal and server for synchronizing drawing content with multiple terminals

Also Published As

Publication number Publication date
CN112698759A (en) 2021-04-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant