CN114501051B - Method and device for displaying marks of live objects, storage medium and electronic equipment - Google Patents

Method and device for displaying marks of live objects, storage medium and electronic equipment

Info

Publication number
CN114501051B
CN114501051B (application CN202210082189.8A)
Authority
CN
China
Prior art keywords: target, live, picture, contour, live broadcast
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210082189.8A
Other languages
Chinese (zh)
Other versions
CN114501051A (en)
Inventor
Chen Wenqiong (陈文琼)
Xie Huan (谢欢)
Zeng Guandong (曾冠东)
Xie Dao (谢导)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Fanxing Huyu IT Co Ltd
Original Assignee
Guangzhou Fanxing Huyu IT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Fanxing Huyu IT Co Ltd filed Critical Guangzhou Fanxing Huyu IT Co Ltd
Priority to CN202210082189.8A priority Critical patent/CN114501051B/en
Publication of CN114501051A publication Critical patent/CN114501051A/en
Application granted granted Critical
Publication of CN114501051B publication Critical patent/CN114501051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs

Abstract

The invention discloses a method and a device for displaying a mark of a live object, a storage medium, and an electronic device. The method includes: displaying a first live picture that is being played, where the first live picture contains at least two live objects; and, in response to a trigger operation performed at a target position in the first live picture, marking a target live object in a second live picture, where the second live picture is a live picture located after the first live picture in the live data stream, and the target live object is the live object, determined from the at least two live objects, that matches the target position. The invention solves the technical problem that keeping focus on a specific object in a complex scene is difficult.

Description

Method and device for displaying marks of live objects, storage medium and electronic equipment
Technical Field
The present invention relates to the field of computers, and in particular, to a method and apparatus for displaying a marker of a live object, a storage medium, and an electronic device.
Background
In a typical multi-person live scene, such as multiple people dancing, singing, or giving commentary, a viewer at the viewing end who is interested in a particular anchor can only follow that anchor by eye, picking the anchor out of many others in real time. When the live scene contains many people, finding the anchor is difficult, and sustained attention is hard to maintain by sight alone.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the invention provide a method and a device for displaying a mark of a live object, a storage medium, and an electronic device, so as to at least solve the technical problem that keeping focus on a specific object in a complex scene is difficult.
According to an aspect of the embodiments of the present invention, there is provided a marker display method for a live object, including: displaying a first live picture that is being played, where the first live picture contains at least two live objects; and, in response to a trigger operation performed at a target position in the first live picture, marking a target live object in a second live picture, where the second live picture is a live picture located after the first live picture in a live data stream, and the target live object is the live object, determined from the at least two live objects, that matches the target position.
According to another aspect of the embodiments of the present invention, there is also provided a marker display apparatus for a live object, including: a first display unit, configured to display a first live picture that is being played, where the first live picture contains at least two live objects; and a second display unit, configured to mark, in response to a trigger operation performed at a target position in the first live picture, a target live object in a second live picture, where the second live picture is a live picture located after the first live picture in a live data stream, and the target live object is the live object, determined from the at least two live objects, that matches the target position.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium in which a computer program is stored, where the computer program is configured to perform the above marker display method for a live object when run.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device including a memory in which a computer program is stored and a processor configured to perform the above marker display method for a live object by means of the computer program.
In the embodiments of the invention, in response to a trigger operation performed at a target position in a first live picture that is being played and that contains at least two live objects in the same environment, the target live object determined from the at least two live objects and matching the target position is marked in a second live picture located after the first live picture in the live data stream. Triggering at a target position of the first live picture causes the matching target live object to be marked and displayed in the subsequent second live pictures, which achieves the purpose of marking a specific object in a complex scene containing at least two live objects, produces the technical effect of keeping focus on that specific object through the mark, and thereby solves the technical problem that focusing on a specific object in a complex scene is difficult.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is a schematic illustration of an application environment of an alternative method of marker display of live objects in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart of an alternative method of marker display for live objects according to an embodiment of the invention;
FIG. 3 is a flow chart of an alternative method of marker display for live objects according to an embodiment of the invention;
FIG. 4 is a floating window display schematic diagram of an alternative method of marker display for live objects according to an embodiment of the present invention;
FIG. 5 is a flow chart of an alternative method of marker display for live objects in accordance with an embodiment of the present invention;
FIG. 6 is a schematic illustration of a marker display method for an alternative live object according to an embodiment of the present invention;
FIG. 7 is a flow chart of an alternative method of marker display for live objects in accordance with an embodiment of the present invention;
fig. 8 is a schematic structural view of an alternative marker display device for live objects according to an embodiment of the present invention;
Fig. 9 is a schematic structural view of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, there is provided a method for displaying a marker of a live object. Optionally, the method may be applied to, but is not limited to, the environment shown in fig. 1. The terminal device 102 exchanges data with the server 112 through the network 110. For example, the terminal device 102 sends a live viewing request to the server 112 through the network 110, and upon receiving the request, the server 112 feeds the requested live data stream back to the terminal device 102 through the network 110.
Based on the live data stream pushed by the server 112, the terminal device 102 implements the marker display method for a live object, for example by performing S102 and S104 in sequence. S102: display a first live picture that is being played, where the first live picture contains at least two live objects. S104: in response to a trigger operation performed at a target position in the first live picture, mark a target live object in a second live picture, where the second live picture is a live picture located after the first live picture in the live data stream, and the target live object is the live object, determined from the at least two live objects, that matches the target position.
Optionally, in this embodiment, the terminal device 102 may be a terminal device configured with a target client and may include, but is not limited to, at least one of the following: a mobile phone (e.g., an Android phone or an iOS phone), a notebook computer, a tablet computer, a palmtop computer, a MID (Mobile Internet Device), a PAD, a desktop computer, a smart television, and the like. The target client is a client for viewing live pictures, which may be, but is not limited to, an audio client, a video client, an instant messaging client, a browser client, an educational client, and the like. The network 110 may include, but is not limited to, a wired network or a wireless network, where the wired network includes local area networks, metropolitan area networks, and wide area networks, and the wireless network includes Bluetooth, WiFi, and other networks enabling wireless communication. The server 112 may be a single server, a server cluster including a plurality of servers, or a cloud server. The above is merely an example and is not limited in any way in this embodiment.
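The request/response exchange between terminal device 102 and server 112 described above can be sketched in a few lines. This is a minimal illustrative model only: the class and method names (`LiveServer`, `request_stream`, `LiveClient`, `watch`) are assumptions, not anything specified by the patent, and a real client would render frames rather than collect them.

```python
# Hypothetical sketch of the fig. 1 environment: the client requests a live
# data stream and the server feeds it back. All names are illustrative.

class LiveServer:
    """Stands in for server 112: returns a live data stream on request."""
    def __init__(self, frames):
        self.frames = frames  # the live data stream: an ordered frame list

    def request_stream(self, room_id):
        # On receiving a live viewing request, feed back the requested stream.
        return self.frames


class LiveClient:
    """Stands in for the target client running on terminal device 102."""
    def __init__(self, server):
        self.server = server
        self.current_frame = None

    def watch(self, room_id):
        stream = self.server.request_stream(room_id)  # live viewing request
        for frame in stream:
            self.current_frame = frame  # "display" each frame in turn (S102)
        return self.current_frame


server = LiveServer(frames=["frame-1", "frame-2"])
client = LiveClient(server)
last = client.watch(room_id="room-42")  # last frame shown
```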
As an optional embodiment, as shown in fig. 2, the method for displaying a marker of the live object includes:
S202, displaying a first live picture that is being played, where the first live picture contains at least two live objects;
S204, in response to a trigger operation performed at a target position in the first live picture, marking a target live object in a second live picture, where the second live picture is a live picture located after the first live picture in a live data stream, and the target live object is the live object, determined from the at least two live objects, that matches the target position.
The live data stream may be, but is not limited to, a live video data stream composed of multiple frames of live pictures, and each live picture in the stream may contain at least two live objects in the same live environment. The live pictures may be, but are not limited to being, displayed by the client on the terminal device. To display them, the client may initiate a live display request to the server, requesting that the live pictures of the live data stream be shown in the client, and then receive the live data stream returned by the server and display the live pictures in the client.
The first live picture is the live picture in the client at which the trigger operation is received, and the second live picture is a live picture that is temporally subsequent to the first live picture in the live data stream. When the first live picture is used to trigger marking of the target live object corresponding to the target position in the first live picture, the target live object is marked in the live pictures after the first live picture.
A trigger operation at a target position in the first live picture triggers the marked display of a target live object, where the target live object is determined from the at least two live objects in the live picture according to the target position. For example, the live object currently located at the target position may be determined as the target live object, where a live object is considered to be at the target position when its display area currently covers that position.
Marking the target live object in the live picture may be, but is not limited to, displaying the target live object with a mark, so that the position or area of the target live object in the live picture is highlighted. Before the trigger operation for marked display is performed (for example, in the first live picture), all live objects are displayed in the original display mode. Marking the target live object in the second live pictures after the first live picture may include adjusting the position of the mark in each picture as the position of the target live object changes, so that the target live object is marked and displayed in every second live picture after the first live picture.
In the embodiments of the application, in response to a trigger operation performed at a target position in a first live picture that is being played and that contains at least two live objects in the same environment, the target live object determined from the at least two live objects and matching the target position is marked in a second live picture located after the first live picture in the live data stream. Triggering at a target position of the first live picture of a complex scene containing at least two live objects causes the matching target live object to be marked and displayed in the subsequent second live pictures, which achieves the purpose of marking a specific object in such a complex scene, produces the technical effect of keeping focus on that specific object through the mark, and thereby solves the technical problem that focusing on a specific object in a complex scene is difficult.
Optionally, marking the target live object in the second live picture in response to a trigger operation at a target position in the first live picture is not limited to the case where the trigger at the target position itself selects the object. The target live object may also be a live object determined from the at least two live objects before the first live picture is displayed, with the trigger operation at the target position then switching on its marked display in the live pictures.
When more than one markable target live object is determined from the at least two live objects, a trigger position corresponding to each such live object may be preset. By comparing the target position with the plurality of preset trigger positions, the target live object corresponding to the target position is determined and then marked in the second live picture.
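The comparison of a tapped target position against preset trigger positions can be sketched as a nearest-neighbor lookup. This is an illustrative assumption about how the comparison might work: the object identifiers, coordinates, and the distance tolerance `max_dist` are all invented for the example, not taken from the patent.

```python
import math

# Illustrative: each markable live object has a preset trigger position;
# the tapped target position is matched to the nearest preset within a
# tolerance. Names, coordinates, and tolerance are assumptions.

PRESET_TRIGGERS = {
    "anchor_A": (120, 300),  # object identifier -> preset trigger position
    "anchor_B": (480, 310),
}

def match_target_object(target_pos, presets=PRESET_TRIGGERS, max_dist=100.0):
    """Return the object id whose preset trigger position is closest to
    target_pos, or None if no preset lies within max_dist pixels."""
    best_id, best_d = None, max_dist
    for obj_id, preset in presets.items():
        d = math.dist(target_pos, preset)  # Euclidean distance
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id

hit = match_target_object((130, 290))   # near anchor_A's preset
miss = match_target_object((900, 900))  # far from every preset
```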
Optionally, marking the target live object in the second live picture may also involve displaying an object selection floating window in response to the trigger operation at the target position in the first live picture, where object identifiers respectively corresponding to the at least two live objects are displayed in the floating window. The target live object to be marked in the second live picture is then determined by a confirmation operation performed on the target object identifier.
As an alternative embodiment, marking the target live object in the second live picture in response to a trigger operation performed at a target position in the first live picture includes:
S1, in response to a trigger operation performed at a target position in the first live picture, displaying a target mark floating window, where the target mark floating window is used to determine marking parameters for marking the target live object;
S2, in response to a confirmation operation on the target marking parameters in the target mark floating window, marking the target live object in the second live picture according to the target marking parameters.
In response to the trigger operation performed at the target position in the first live picture, a target mark floating window is displayed over the live picture shown by the client, and the marking parameters of the target live object are determined and adjusted through this floating window. The marking parameters may include the marking mode and the parameters corresponding to that mode. The marking mode indicates the extent of the marked area and the highlighting parameters used to mark the target live object, and may include at least one of: contour marking and region marking. Contour marking highlights the object outline of the target live object, making its position and shape in the live picture apparent. Region marking highlights the area where the target live object is located, so that this area is distinguished from the other live objects and the target live object stands out among them.
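The two marking modes named above (contour marking versus region marking) can be sketched as two kinds of draw commands built from one object's contour. The command dictionaries, field names, and default color values are assumptions for illustration; the patent does not specify a rendering interface.

```python
# Illustrative sketch: turn one object's contour into an overlay draw
# command for the chosen marking mode. A real client would hand the
# command to its renderer; the shapes here are invented for the example.

def build_mark_overlay(contour_points, mode, color="#FFD700", width=3):
    """Build a draw command for either marking mode."""
    if mode == "contour":
        # Contour mark: stroke the closed outline so the object's
        # position and shape stand out in the live picture.
        return {"op": "stroke_polygon", "points": contour_points,
                "color": color, "width": width}
    if mode == "region":
        # Region mark: tint the enclosed area so the object is visually
        # separated from the other live objects.
        return {"op": "fill_polygon", "points": contour_points,
                "color": color, "alpha": 0.35}
    raise ValueError(f"unknown marking mode: {mode}")

outline = [(10, 10), (60, 10), (60, 90), (10, 90)]
stroke = build_mark_overlay(outline, "contour")
fill = build_mark_overlay(outline, "region")
```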
Multiple target live objects may be marked and displayed through multiple trigger operations. The marking modes of different target live objects may be the same or different, and each may be determined through the target mark floating window corresponding to that live object. While a target live object is being marked and displayed in the live picture, an adjustment operation may be triggered to display the target mark floating window again and change the marking mode by adjusting the object's marking parameters.
When the target mark floating window has been displayed in response to the trigger operation at the target position of the first live picture but the target marking parameters have not yet been determined, the target live object may be marked in the second live pictures after the first live picture using a first (default) marking mode. Once the target marking parameters are determined through the target mark floating window, the target live object is marked in the second live picture in the target marking mode indicated by those parameters. The first marking mode is not specifically limited here: it may be contour marking, region marking, or a combination of the two. Marking the target live object in the first marking mode before the parameters are determined gives the viewer immediate visual confirmation of which live object will be marked and displayed.
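The fallback described above (mark with a default first marking mode until the viewer confirms parameters in the floating window) amounts to a small piece of state logic. The sketch below is an assumption about one way to structure it; the class, field names, and default values are invented for illustration.

```python
# Illustrative sketch of the default-mode fallback: until the viewer
# confirms marking parameters, a default "first marking mode" applies.
# All names and values are assumptions, not from the patent.

DEFAULT_MARK = {"mode": "contour", "color": "#FF4444", "width": 2}

class MarkState:
    def __init__(self):
        self.confirmed_params = None  # set once the user confirms

    def confirm(self, params):
        self.confirmed_params = params

    def effective_params(self):
        # Before confirmation, fall back to the default first marking
        # mode so the selected object is marked immediately.
        return self.confirmed_params or DEFAULT_MARK

state = MarkState()
before = state.effective_params()  # default first marking mode applies
state.confirm({"mode": "region", "color": "#00FF00", "alpha": 0.3})
after = state.effective_params()   # user-chosen parameters apply
```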
In the embodiments of the application, in response to the trigger operation at the target position, a mark floating window for determining the marking parameters is displayed, and the target live object is marked in the live picture using the target marking parameters. This highlights the target live object in the live picture and thereby enables attention to be kept on the target object among the multiple anchor objects.
As an alternative embodiment, as shown in fig. 3, displaying the target mark floating window in response to a trigger operation performed at the target position in the first live picture includes:
S302, determining a target position identifier of the target position in the first live picture;
S304, searching, in first contour data carried by first picture data corresponding to the first live picture, for first target contour data matching the target position identifier, where the first contour data includes object contour data corresponding to each of the at least two live objects in the first live picture;
S306, in a case where the first target contour data is found, obtaining from the first contour data the target object identifier corresponding to the first target contour data, where the first contour data includes the correspondence between the object contour data of the at least two live objects in the first live picture and their object identifiers;
S308, displaying the target object identifier in the target mark floating window.
Contour data that is not itself displayed may be carried in the picture data corresponding to each live picture. The contour data includes the object contour data of each live object in the live picture and its correspondence with that object's identifier. The target live object matching the target position is determined according to the target position identifier of the target position. Object contour data matches the target position identifier when, for example, it contains that identifier: the target position identifier may lie among the contour position identifiers of the object contour data, or within the area enclosed by the object contour data.
The object contour data indicates the outline of the corresponding live object. It may include a plurality of contour position data points indicating one complete enclosed area; connecting these points in order constructs the object outline of the live object.
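The containment test implied above (does the tapped target position fall inside the enclosed area formed by connecting the contour points in order?) is a standard point-in-polygon test. Below is a minimal ray-casting sketch; the example outline coordinates are invented, and the patent does not prescribe this particular algorithm.

```python
# Illustrative: even-odd (ray casting) test for whether a target position
# lies inside the closed outline formed by connecting contour points in
# order. The sample outline is invented for the example.

def point_in_contour(point, contour):
    """Return True if point lies inside the closed polygon formed by
    connecting the contour points in order (even-odd rule)."""
    x, y = point
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]  # wrap around to close the outline
        # Count crossings of a horizontal ray cast to the right of point.
        if (y1 > y) != (y2 > y):
            x_cross = (x2 - x1) * (y - y1) / (y2 - y1) + x1
            if x < x_cross:
                inside = not inside
    return inside

person_outline = [(100, 50), (140, 80), (130, 200), (110, 200)]
```

A tap at (120, 100) falls inside this outline, so the corresponding live object would match the target position; a tap at (50, 100) falls outside it.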
The object identifier refers to a live object, and the object identifiers of the live objects differ from one another, so each live object can be distinguished among the plurality of live objects by its identifier. In the mark floating window used to determine the marking parameters, the object identifier of the target live object may be displayed, so that the viewer can see which object is currently being adjusted and can re-confirm the selected live object before adjusting and confirming the marking parameters.
The display of the mark floating window may be, but is not limited to, as shown in fig. 4. A trigger operation is performed at the target position 402 of the first live picture 400. Using the position identifier of the target position 402, the first target contour data matching it is found in the first contour data carried by the first live picture 400, which determines the target object identifier and hence the target live object. When the second live picture 410 is displayed, the mark floating window 412 is displayed superimposed on it, showing the object identifier "X" of the target live object together with the marking mode and the marking parameters for marking it. The marking mode may be chosen from a plurality of predetermined marking modes, and the marking parameters are the parameters corresponding to the chosen mode. The ways of adjusting and confirming the marking parameters are not limited here: pull-down options and slide bars are merely examples, and other controls capable of parameter adjustment and confirmation may be used, such as numeric increment/decrement controls or numeric input fields. The location of the mark floating window 412 is likewise only an example; it may instead be placed at an edge of the live picture or at another location adjacent to the target live object.
The first target contour data corresponding to the target position identifier is determined from the first contour data of the first live picture, so that the target live object corresponding to the target position is determined from the object identifier associated with the first target contour data. Since each live object uniquely corresponds to one object identifier, the target live object can then be marked, by that identifier, in each second live picture following the first live picture.
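The lookup of a tapped position within the first contour data can be illustrated with a point-in-contour hit test. The data layout below (each object identifier mapped to a closed polygon of contour points) and the ray-casting test are assumptions for illustration; the patent does not prescribe a concrete contour format or matching algorithm.

```python
def point_in_contour(point, polygon):
    """Ray-casting test: is `point` inside the closed `polygon`?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def find_target_object(tap_position, frame_contour_data):
    """Return the object identifier whose contour contains the tap, or None."""
    for object_id, polygon in frame_contour_data.items():
        if point_in_contour(tap_position, polygon):
            return object_id
    return None

# Frame with two live objects: "X" on the left, "Y" on the right.
contours = {
    "X": [(0, 0), (100, 0), (100, 200), (0, 200)],
    "Y": [(150, 0), (250, 0), (250, 200), (150, 200)],
}
target = find_target_object((50, 100), contours)  # tap lands inside "X"
```

A tap between the two contours would return None, in which case no marker floating window needs to be shown.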
As an optional embodiment, as shown in fig. 5, the marking the target live object in the second live picture according to the target marking parameters in response to the confirmation operation on the target marking parameters in the target marker floating window includes:
S502, searching second contour data carried by second picture data corresponding to the second live picture for second target contour data corresponding to the target object identifier, wherein the second target contour data indicates the contour position of the target live object in the second live picture;
S504, marking, in the second live picture, the target contour of the target live object indicated by the second target contour data according to the target marking parameter.
Using the object identifier of the target live object determined from the first contour data, the target contour data corresponding to that identifier is searched for in the second contour data carried by each second live picture after the first live picture, thereby determining the picture position at which the marker is to be displayed in each second live picture.
Once the picture position of the marker display in the second live picture is determined, display adjustment is performed on the target contour data using the target marking parameters, obtaining a second target picture for display in the client, in which the target live object is displayed in marked form. This display adjustment is performed for the target live object on every second live picture located after the first live picture in the live data stream, achieving the effect of marking the target live object throughout the second live pictures.
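Carrying the mark forward through the stream might look like the following sketch, under the assumption that each frame carries a dictionary from object identifier to contour; the frame and stream structures here are illustrative, not the patent's actual encoding.

```python
def mark_in_subsequent_frames(stream, target_object_id, marking_params):
    """Yield (frame_index, contour, marking_params) for each later frame
    whose contour data still contains the target object."""
    for index, frame in enumerate(stream):
        contour = frame["contour_data"].get(target_object_id)
        if contour is not None:
            yield index, contour, marking_params

stream = [
    {"contour_data": {"X": [(0, 0), (10, 0), (10, 10)],
                      "Y": [(20, 0), (30, 0), (30, 10)]}},
    {"contour_data": {"X": [(1, 0), (11, 0), (11, 10)]}},  # "Y" left the scene
]
marked = list(mark_in_subsequent_frames(stream, "X", {"outline": "dashed"}))
```

Frames in which the target object is no longer recognized are simply skipped, so the marker disappears and reappears with the object.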
As an optional implementation, the marking the target contour of the target live object indicated by the second target contour data according to the target marking parameter includes:
marking the target contour according to contour marking parameters, wherein the contour marking parameters include a contour color parameter and a contour brightness parameter; and/or
filling the contour surrounding area of the target contour according to contour filling parameters, wherein the contour filling parameters are used to indicate the rendering mode of the contour surrounding area and/or the highlight color of the contour surrounding area.
Marking the target contour of the target live object according to the contour marking parameters is not limited to highlighting the target contour. When the target contour is not highlighted, it need not be displayed on the live picture at all. Highlighting the target contour according to the contour marking parameters is not limited to adjusting the contour color and contour brightness of a contour already displayed on the live picture; contour line parameters may also be adjusted. The contour line parameters are not limited to indicating the line type, thickness, transparency, and the like of the target contour. The contour color is not limited to indicating the line color of the contour line. The contour brightness is not limited to indicating the display brightness of the contour line; for example, the display brightness of the target contour may be set high.
The contour surrounding area of the target contour is filled according to the contour filling parameters, so that the target live object is highlighted in the live picture by area filling. The contour filling parameters indicate the rendering mode and the highlight color of the contour surrounding area. The rendering mode is not limited to adding a filter effect to the contour surrounding area, and the highlight color is not limited to color-filling the contour surrounding area. The contour filling parameters may also include an area brightness indicating the display brightness of the contour surrounding area. Different filling parameters may coexist; for example, the contour surrounding area may be set to a highlighted yellow display while a filter effect is also applied to it.
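One way to combine the contour marking parameters and contour filling parameters into a single render style is sketched below; all field names (`line_type`, `fill_filter`, and so on) are hypothetical, since the patent only states that the two parameter groups may be applied separately or together.

```python
# Defaults applied when the user leaves a parameter untouched (assumed values).
DEFAULT_STYLE = {
    "line_type": "solid", "line_color": "white", "line_brightness": 1.0,
    "fill_filter": None, "fill_color": None, "fill_brightness": 1.0,
}

def resolve_marking_style(contour_params=None, fill_params=None):
    """Overlay contour-marking and contour-fill parameters on the defaults;
    the two groups may coexist, per the 'and/or' in the method."""
    style = dict(DEFAULT_STYLE)
    for group in (contour_params, fill_params):
        if group:
            style.update(group)
    return style

# Dashed red outline together with a highlight-yellow fill and a filter.
style = resolve_marking_style(
    contour_params={"line_type": "dashed", "line_color": "red"},
    fill_params={"fill_filter": "soften", "fill_color": "yellow"},
)
```

The renderer would then draw the contour line and fill the enclosed area from this one resolved style, regardless of which controls the viewer actually touched.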
Displaying the marked target live object in the second live picture is not limited to that shown in fig. 6. When the trigger operation has been performed at the target position 402 of the first live picture 400 and the marking parameters for marking the target live object have been determined through the marker floating window, the marked target live object is displayed in the second live picture 600. In the second live picture 600, the contour of the target live object is marked in the confirmed marking mode of a dashed contour, so that the target contour 610 is displayed in a dashed line type. The marker display for the target live object may also set other parameters of the target contour 610, or accumulate several parameters, such as a red highlight together with a bolded contour. The marker display may likewise act on the contour surrounding area 620, for example by adding a filter layer to the contour surrounding area 620 and adjusting its display brightness. Marking of the target contour 610 and of the contour surrounding area 620 may be applied simultaneously, thereby increasing the marking intensity.
As an optional embodiment, as shown in fig. 7, before displaying the first live picture being played, the method further includes:
S702, when a recorded live picture is obtained, performing object recognition on the at least two live objects in the live picture to obtain object contour data and object feature data of each live object, wherein the object contour data indicates the contour position of a live object in the live picture, and the object feature data is used to determine a live object among the at least two live objects;
S704, determining the correspondence between the object contour data and the object identifier of each live object according to the result of comparing the positions of the object feature data and the object contour data;
S706, constructing the contour data corresponding to the live picture using the correspondence between the object contour data of each live object and the object identifiers.
The contour data corresponding to the live picture is not limited to being constructed by the client used by the plurality of live objects: when a captured picture including the plurality of live objects is acquired, object recognition is performed on those live objects to obtain the object contour data and object feature data of each. The object feature data is not limited to being used to identify one live object among the plurality. When the live objects are anchors, the object feature data is not limited to face feature data, used to distinguish several anchors located in the same scene.
Object recognition of the plurality of live objects in a live picture is not limited to portrait recognition and face recognition. Portrait recognition yields the contour data of each anchor in each frame, which is not limited to contour range data. Face recognition yields the face feature points of each anchor in each frame, and is not limited to extracting several key face feature points as the feature data distinguishing each anchor. At the same time, the different anchors in the live picture are distinguished and a corresponding object identifier is determined for each anchor, for example using numbers or letters, and the correspondence between each anchor and its object identifier is recorded.
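The recognition pass might produce, per frame, one record per anchor pairing that anchor's contour with its face key points under a letter identifier. The record layout and the letter-based identifiers are assumptions for illustration; any portrait-segmentation and face-landmark models could supply the inputs, and are stubbed out here.

```python
import string

def build_frame_records(recognised_anchors):
    """recognised_anchors: list of (contour, face_key_points) per anchor,
    as produced by portrait recognition and face recognition.
    Returns object_id -> {"contour": ..., "features": ...}."""
    records = {}
    for letter, (contour, key_points) in zip(string.ascii_uppercase,
                                             recognised_anchors):
        records[letter] = {"contour": contour, "features": key_points}
    return records

# Two anchors recognised in one frame.
records = build_frame_records([
    ([(0, 0), (10, 0), (10, 10)], [(5, 3), (7, 3)]),
    ([(20, 0), (30, 0), (30, 10)], [(25, 3), (27, 3)]),
])
```

In practice the identifiers would have to stay stable across frames (e.g. by tracking faces between frames), which this sketch does not attempt.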
As an optional implementation, the determining the correspondence between the object contour data and the object identifier of each live object according to the result of comparing the positions of the object feature data and the object contour data includes:
S1, comparing the feature position of the object feature data with the contour positions of the object contour data of the at least two live objects, to determine the object contour data that contains the feature position of the object feature data;
S2, determining that a correspondence exists between that object contour data and the object identifier corresponding to the object feature data.
The comparison of the object feature data and the object contour data in the live picture is not limited to a comparison of positions, and is not limited to matching the face feature data of each anchor in turn against the contour data of each anchor, so as to determine the contour data matching the face feature data and, in turn, the correspondence between each anchor and that anchor's contour data.
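Steps S1 and S2 above can be sketched as follows, matching each anchor's face-feature position against the recognized contours. For brevity the containment test uses the axis-aligned bounding box of the contour points; a real implementation would test against the polygon itself.

```python
def contour_contains(contour, point):
    """Bounding-box containment test over a list of (x, y) contour points."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    x, y = point
    return min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)

def match_features_to_contours(feature_data, contour_data):
    """feature_data: object_id -> face key-point position;
    contour_data: list of recognised contours.
    Returns object_id -> the contour containing that object's features."""
    correspondence = {}
    for object_id, feature_position in feature_data.items():
        for contour in contour_data:
            if contour_contains(contour, feature_position):
                correspondence[object_id] = contour
                break
    return correspondence

features = {"A": (5, 5), "B": (25, 5)}
contours = [[(0, 0), (10, 0), (10, 10), (0, 10)],
            [(20, 0), (30, 0), (30, 10), (20, 10)]]
mapping = match_features_to_contours(features, contours)
```

Because a face necessarily lies inside its owner's portrait contour, a single containing contour per feature set is the expected case.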
The matching relation between the anchor contour data and the face feature data in each frame, together with the object identifiers, is placed into the contour data of that frame. The contour data of each frame is not limited to being carried at the tail of the frame data, so that the contour data and the live picture are sent synchronously to the server, which then pushes the live data stream to the clients requesting to watch the live broadcast.
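Appending the contour data at the tail of the frame data could, for illustration, use a length-suffixed JSON trailer so the viewing client can split the frame bytes and metadata apart again. This framing is an assumption; the patent only states that the contour data is carried at the tail of the data.

```python
import json
import struct

def pack_frame(frame_bytes, contour_data):
    """Append serialised contour metadata after the encoded frame."""
    meta = json.dumps(contour_data).encode("utf-8")
    # A 4-byte big-endian length footer tells the reader where metadata starts.
    return frame_bytes + meta + struct.pack(">I", len(meta))

def unpack_frame(packed):
    """Split a packed frame back into (frame_bytes, contour_data)."""
    (meta_len,) = struct.unpack(">I", packed[-4:])
    meta = json.loads(packed[-4 - meta_len:-4].decode("utf-8"))
    return packed[:-4 - meta_len], meta

frame, meta = unpack_frame(
    pack_frame(b"\x00\x01video", {"X": [[0, 0], [9, 9]]})
)
```

Keeping the metadata inside the frame payload keeps contour data and picture implicitly synchronized, so no separate side channel has to be timestamp-aligned.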
When the clients in the live service are divided into a push client used by the plurality of live objects and viewing clients requesting to watch the live broadcast, the marker display of live objects is not limited to the following flow. After acquiring a captured picture including the plurality of live objects, the push client determines, through portrait recognition and face recognition, the correspondence between each anchor and that anchor's contour data in each frame, constructs the contour data of each frame, and adds it to the live stream data comprising the live pictures and the live audio. The push client sends the live stream data to the server of the live stream service, which pushes it to the viewing clients requesting to watch. A live picture is displayed in a viewing client; in response to a trigger operation performed at a target position in that live picture, the viewing client determines the target live object to be marked through the contour data of the live picture, marks the target live object in the corresponding marking mode, and displays the marked target live object in the live picture, achieving marker display of the target live object in the viewing client. When the push client displays a live picture pushed back by the server of the live service, the push client may likewise perform marker display of a target live object. The marker display of the target live object in the live picture is a single-ended display performed by the client: it is not transmitted through the server of the live service and does not affect the live pictures displayed in the other clients watching the same broadcast.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
According to another aspect of the embodiments of the present invention, there is also provided a marker display apparatus for a live object, for implementing the above marker display method for a live object. As shown in fig. 8, the apparatus includes:
a first display unit 802, configured to display a first live picture being played, where the first live picture includes at least two live objects;
a second display unit 804, configured to mark a target live object in a second live picture in response to a trigger operation performed on a target position in the first live picture, where the second live picture is a live picture located after the first live picture in the live data stream, and the target live object is a live object determined from the at least two live objects and matching the target position.
Optionally, the second display unit 804 includes:
a floating window module, configured to display a target marker floating window in response to the trigger operation performed on the target position in the first live picture, where the target marker floating window is used to determine the marking parameters for marking the target live object;
and a confirmation module, configured to mark the target live object in the second live picture according to the target marking parameters in response to a confirmation operation on the target marking parameters in the target marker floating window.
Optionally, the floating window module includes:
a position module, configured to determine a target position identifier of the target position in the first live picture;
a first searching module, configured to search first contour data carried by first picture data corresponding to the first live picture for first target contour data matching the target position identifier, where the first contour data includes the object contour data corresponding to each of the at least two live objects in the first live picture;
an identification module, configured to obtain, when the first target contour data is found, the target object identifier corresponding to the first target contour data from the first contour data, where the first contour data includes the correspondence between the object contour data corresponding to each of the at least two live objects in the first live picture and the corresponding object identifiers;
and an identifier display module, configured to display the target object identifier in the target marker floating window.
Optionally, the second display unit 804 includes:
a second searching module, configured to search second contour data carried by second picture data corresponding to the second live picture for second target contour data corresponding to the target object identifier, where the second target contour data indicates the contour position of the target live object in the second live picture;
and a marker display module, configured to mark, in the second live picture, the target contour of the target live object indicated by the second target contour data according to the target marking parameter.
Optionally, the marker display module is configured to: mark the target contour according to contour marking parameters, where the contour marking parameters include a contour color parameter and a contour brightness parameter; and/or fill the contour surrounding area of the target contour according to contour filling parameters, where the contour filling parameters indicate the rendering mode of the contour surrounding area and/or the highlight color of the contour surrounding area.
Optionally, before the first live picture being played is displayed, the marker display apparatus of the live object further includes a recognition unit, configured to: perform, when a recorded live picture is obtained, object recognition on the at least two live objects in the live picture to obtain object contour data and object feature data of each live object, where the object contour data indicates the contour position of a live object in the live picture and the object feature data is used to determine a live object among the at least two live objects; determine the correspondence between the object contour data and the object identifier of each live object according to the result of comparing the positions of the object feature data and the object contour data; and construct the contour data corresponding to the live picture using the correspondence between the object contour data of each live object and the object identifiers.
Optionally, the recognition unit is further configured to compare the feature position of the object feature data with the contour positions of the object contour data of the at least two live objects, to determine the object contour data containing the feature position of the object feature data; and to determine that a correspondence exists between that object contour data and the object identifier corresponding to the object feature data.
In the embodiments of the present application, in response to a trigger operation performed on a target position in a first live picture being played that includes at least two live objects in the same environment, a target live object determined from the at least two live objects and matching the target position is marked in a second live picture located after the first live picture in the live data stream. By triggering the target position of the first live picture of a complex scene containing at least two live objects, the target live object matching that position is marked and displayed in the second live pictures that follow, achieving the purpose of marker display of a specific object in such a complex scene. This produces the technical effect of focusing attention on a specific object in a complex scene through the mark, and solves the technical problem that focusing on a specific object is difficult to achieve in a complex scene.
According to still another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the method for displaying a marker of a live object, where the electronic device may be a terminal device or a server as shown in fig. 1. The present embodiment is described taking the electronic device as a terminal device as an example. As shown in fig. 9, the electronic device comprises a memory 902 and a processor 904, the memory 902 having stored therein a computer program, the processor 904 being arranged to perform the steps of any of the method embodiments described above by means of the computer program.
Alternatively, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of the computer network.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, displaying a first live picture being played, where the first live picture includes at least two live objects;
S2, in response to a trigger operation performed on a target position in the first live picture, marking a target live object in a second live picture, where the second live picture is a live picture located after the first live picture in the live data stream, and the target live object is a live object determined from the at least two live objects and matching the target position.
Alternatively, it will be understood by those skilled in the art that the structure shown in fig. 9 is only schematic, and the electronic device may also be a terminal device such as a smart phone (e.g. an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, and the like. Fig. 9 does not limit the structure of the electronic device. For example, the electronic device may include more or fewer components than shown in fig. 9 (e.g., network interfaces), or have a configuration different from that shown in fig. 9.
The memory 902 may be used to store software programs and modules, such as the program instructions/modules corresponding to the marker display method and apparatus of the live object in the embodiments of the present invention; the processor 904 executes the software programs and modules stored in the memory 902, thereby performing various functional applications and data processing, that is, implementing the marker display method of the live object. The memory 902 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 902 may further include memory located remotely from the processor 904, which may be connected to the terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 902 may specifically store, but is not limited to, information such as the first live picture, the second live picture, the target position, and the object information of the target live object. As an example, as shown in fig. 9, the memory 902 may include, but is not limited to, the first display unit 802 and the second display unit 804 of the marker display apparatus of the live object described above, and may further include other module units of that apparatus, which are not described again in this example.
Optionally, the transmission device 906 is used to receive or transmit data via a network. Specific examples of the network described above may include wired networks and wireless networks. In one example, the transmission means 906 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices and routers via a network cable to communicate with the internet or a local area network. In one example, the transmission device 906 is a Radio Frequency (RF) module for communicating wirelessly with the internet.
In addition, the electronic device further includes: a display 908 for displaying the first and second images; and a connection bus 910 for connecting the respective module parts in the above-described electronic device.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting the plurality of nodes through a network communication. Among them, the nodes may form a Peer-To-Peer (P2P) network, and any type of computing device, such as a server, a terminal, etc., may become a node in the blockchain system by joining the Peer-To-Peer network.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in various alternative implementations of the marker display aspect of the live object described above. Wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the steps of:
S1, displaying a first live picture being played, where the first live picture includes at least two live objects;
S2, in response to a trigger operation performed on a target position in the first live picture, marking a target live object in a second live picture, where the second live picture is a live picture located after the first live picture in the live data stream, and the target live object is a live object determined from the at least two live objects and matching the target position.
Alternatively, in this embodiment, it will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be performed by a program for instructing a terminal device to execute the steps, where the program may be stored in a computer readable storage medium, and the storage medium may include: flash disk, read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), magnetic or optical disk, and the like.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The integrated units in the above embodiments may be stored in the above-described computer-readable storage medium if implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present invention may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing one or more computer devices (which may be personal computers, servers or network devices, etc.) to perform all or part of the steps of the method described in the embodiments of the present invention.
In the foregoing embodiments of the present invention, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division into units is merely a logical function division, and another division may be used in an actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several improvements and modifications may be made by those skilled in the art without departing from the principles of the present invention, and these improvements and modifications shall also fall within the protection scope of the present invention.

Claims (8)

1. A marker display method of a live object, comprising:
displaying a first live picture being played, wherein the first live picture comprises at least two live objects;
in response to a trigger operation performed on a target position in the first live picture, marking a target live object in a second live picture, wherein the second live picture is a live picture located after the first live picture in a live data stream, and the target live object is a live object determined from the at least two live objects and matching the target position;
wherein the marking the target live object in the second live picture in response to the trigger operation performed on the target position in the first live picture comprises: in response to the trigger operation performed on the target position in the first live picture, displaying a target marker floating window, wherein the target marker floating window is used to determine marking parameters for marking the target live object; and in response to a confirmation operation on the target marking parameters in the target marker floating window, marking the target live object in the second live picture according to the target marking parameters;
wherein the displaying a target marker floating window in response to the trigger operation performed on the target position in the first live picture comprises: determining a target position identifier of the target position in the first live picture; searching first contour data carried by first picture data corresponding to the first live picture for first target contour data matching the target position identifier, wherein the first contour data comprises object contour data corresponding to each of the at least two live objects in the first live picture; when the first target contour data is found, obtaining a target object identifier corresponding to the first target contour data from the first contour data, wherein the first contour data comprises a correspondence between the object contour data corresponding to each of the at least two live objects in the first live picture and the corresponding object identifiers; and displaying the target object identifier in the target marker floating window.
2. The method of claim 1, wherein the marking the target live object in the second live picture according to the target mark parameter in response to the confirmation operation on the target mark parameter in the target mark floating window comprises:
searching, in second contour data carried by second picture data corresponding to the second live picture, for second target contour data corresponding to the target object identifier, wherein the second target contour data indicates the contour position of the target live object in the second live picture;
and marking, in the second live picture, the target contour of the target live object indicated by the second target contour data according to the target mark parameter.
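Illustrative only: because later frames carry their own contour data keyed by object identifier, the claim-2 follow-through amounts to a per-frame lookup, so the mark follows the target object across the stream. The frame representation (`{object_id: contour}` dictionaries) is an assumption of this sketch, not something the patent prescribes.

```python
def mark_stream(frames, target_object_id, mark_params):
    """frames: per-frame {object_id: contour} data carried with the live stream.
    Returns (frame_index, contour, mark_params) for every frame in which the
    target object's contour is present, i.e. where the mark can be rendered."""
    marks = []
    for index, frame_contour_data in enumerate(frames):
        contour = frame_contour_data.get(target_object_id)
        if contour is not None:  # object may leave the picture in some frames
            marks.append((index, contour, mark_params))
    return marks
```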
3. The method of claim 2, wherein marking the target contour of the target live object indicated by the second target contour data according to the target mark parameter comprises:
marking the target contour according to contour mark parameters, wherein the contour mark parameters comprise a contour color parameter and a contour brightness parameter; and/or
filling the area enclosed by the target contour according to contour fill parameters, wherein the contour fill parameters indicate a rendering mode of the enclosed area and/or a highlight color for the enclosed area.
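Illustrative only: the "and/or" structure of claim 3 (always-available contour stroke, optional area fill) can be modeled as a parameter object translated into draw commands. The field names and the command-tuple representation are assumptions of this sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkParams:
    contour_color: str = "#ff0000"    # contour color parameter
    contour_brightness: float = 1.0   # contour brightness parameter
    fill_mode: str = "none"           # rendering mode of the enclosed area
    fill_color: Optional[str] = None  # highlight color for the enclosed area

def build_mark_commands(contour, params):
    """Translate the mark parameters into abstract draw commands:
    always stroke the contour; fill the enclosed area only when requested."""
    commands = [("stroke", contour, params.contour_color, params.contour_brightness)]
    if params.fill_mode != "none":
        commands.append(("fill", contour, params.fill_color, params.fill_mode))
    return commands
```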
4. The method of claim 1, further comprising, before displaying the first live picture being played:
in a case where a recorded live picture is obtained, performing object recognition on at least two live objects in the live picture to obtain object contour data and object feature data of each live object, wherein the object contour data indicates the contour position of the live object in the live picture, and the object feature data is used for identifying the live object among the at least two live objects;
determining the correspondence between the object contour data and the object identifier of each live object according to a position comparison result of the object feature data and the object contour data;
and constructing the contour data corresponding to the live picture using the correspondence between the object contour data of each live object and the object identifier.
5. The method of claim 4, wherein determining the correspondence between the object contour data and the object identifier of each live object according to the position comparison result of the object feature data and the object contour data comprises:
comparing the feature position of the object feature data with the contour positions of the object contour data of the at least two live objects, to determine the object contour data that contains the feature position;
and determining that the object contour data and the object identifier corresponding to the object feature data correspond to each other.
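Illustrative only: the claim-4/5 preprocessing binds each object identifier to a contour by testing whether the object's feature position (e.g. a detected face center) falls inside a candidate contour. Upstream recognition is assumed to have already produced the contours and feature positions; a bounding-box test stands in here for a full point-in-contour comparison, and all names are hypothetical.

```python
def bbox_contains(contour, x, y):
    """Simplified position comparison: does the contour's bounding box
    contain the point (x, y)? A stand-in for exact point-in-contour tests."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    return min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)

def build_frame_contour_data(contours, features):
    """contours: list of contour polygons from object recognition.
    features: {object_id: (x, y)} feature positions per recognised object.
    Binds each object identifier to the contour containing its feature,
    yielding the per-frame correspondence described in claims 4 and 5."""
    correspondence = {}
    for object_id, (fx, fy) in features.items():
        for contour in contours:
            if bbox_contains(contour, fx, fy):
                correspondence[object_id] = contour
                break
    return correspondence
```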
6. A mark display apparatus for a live object, comprising:
a first display unit, configured to display a first live picture being played, wherein the first live picture comprises at least two live objects;
a second display unit, configured to mark, in response to a trigger operation performed on a target position in the first live picture, a target live object in a second live picture, wherein the second live picture is a live picture located after the first live picture in a live data stream, and the target live object is a live object that is determined from the at least two live objects and matches the target position;
wherein the second display unit comprises: a floating window module, configured to display a target mark floating window in response to a trigger operation performed on the target position in the first live picture, wherein the target mark floating window is used for determining a mark parameter for marking the target live object; and a confirmation module, configured to mark, in response to a confirmation operation on the target mark parameter in the target mark floating window, the target live object in the second live picture according to the target mark parameter;
wherein the floating window module is configured to display the target mark floating window in response to the trigger operation performed on the target position in the first live picture by: determining a target position identifier of the target position in the first live picture; searching, in first contour data carried by first picture data corresponding to the first live picture, for first target contour data matching the target position identifier, wherein the first contour data comprises object contour data corresponding to each of at least two live objects in the first live picture; in a case where the first target contour data is found, acquiring, from the first contour data, a target object identifier corresponding to the first target contour data, wherein the first contour data comprises a correspondence between the object contour data corresponding to each of the at least two live objects in the first live picture and the corresponding object identifier; and displaying the target object identifier in the target mark floating window.
7. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, performs the method of any one of claims 1 to 5.
8. An electronic device comprising a memory and a processor, characterized in that the memory stores a computer program, and the processor is configured to execute the method of any one of claims 1 to 5 by means of the computer program.
CN202210082189.8A 2022-01-24 2022-01-24 Method and device for displaying marks of live objects, storage medium and electronic equipment Active CN114501051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210082189.8A CN114501051B (en) 2022-01-24 2022-01-24 Method and device for displaying marks of live objects, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN114501051A CN114501051A (en) 2022-05-13
CN114501051B true CN114501051B (en) 2024-02-02

Family

ID=81474569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210082189.8A Active CN114501051B (en) 2022-01-24 2022-01-24 Method and device for displaying marks of live objects, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114501051B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115348468A (en) * 2022-07-22 2022-11-15 网易(杭州)网络有限公司 Live broadcast interaction method and system, audience live broadcast client and anchor live broadcast client

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9390506B1 (en) * 2015-05-07 2016-07-12 Aricent Holdings Luxembourg S.A.R.L. Selective object filtering and tracking
CN108337471A (en) * 2017-02-24 2018-07-27 腾讯科技(深圳)有限公司 The processing method and processing device of video pictures
CN108538370A (en) * 2018-03-30 2018-09-14 北京灵医灵科技有限公司 A kind of illumination volume drawing output method and device
CN110287934A (en) * 2019-07-02 2019-09-27 北京搜狐互联网信息服务有限公司 A kind of method for checking object, device, client and server
CN111243105A (en) * 2020-01-15 2020-06-05 Oppo广东移动通信有限公司 Augmented reality processing method and device, storage medium and electronic equipment
CN111417028A (en) * 2020-03-13 2020-07-14 腾讯科技(深圳)有限公司 Information processing method, information processing apparatus, storage medium, and electronic device
CN111773694A (en) * 2020-07-10 2020-10-16 腾讯科技(深圳)有限公司 Control method and device of virtual operation object and storage medium
CN112752116A (en) * 2020-12-30 2021-05-04 广州繁星互娱信息科技有限公司 Display method, device, terminal and storage medium of live video picture
CN113382275A (en) * 2021-06-07 2021-09-10 广州博冠信息科技有限公司 Live broadcast data generation method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN114501051A (en) 2022-05-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant