CN114501050B - Method and device for outputting information - Google Patents

Method and device for outputting information

Info

Publication number
CN114501050B
CN114501050B (application number CN202210079001.4A)
Authority
CN
China
Prior art keywords
participating audience
audience member
participating
audiences
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210079001.4A
Other languages
Chinese (zh)
Other versions
CN114501050A (en)
Inventor
冼钊铭
马饮泉
李鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210079001.4A priority Critical patent/CN114501050B/en
Publication of CN114501050A publication Critical patent/CN114501050A/en
Application granted granted Critical
Publication of CN114501050B publication Critical patent/CN114501050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a method and apparatus for outputting information, relating to the field of artificial intelligence and, in particular, to live streaming. The specific implementation scheme is as follows: calculating the number of pixels occupied by a text to be output; selecting participating audience members from an audience set according to the number; arranging the avatar of each participating audience member along the stroke shapes of the text to generate bullet-screen information; and outputting the bullet-screen information in response to detecting a display opportunity. This implementation supports the shift of audiences from offline to online while preserving the feeling of interaction among online audience members, deepening their mutual understanding and livening the overall atmosphere of the live room.

Description

Method and device for outputting information
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to the field of live broadcasting, and more particularly, to a method and apparatus for outputting information.
Background
At present, many popular concerts and performances have moved to online live streaming. People can watch a concert at home and interact with the performer online in real time (for example, by sending gifts, posting bullet-screen comments, or joining fan groups) to enjoy the performance. However, watching a live stream online lacks the interaction, cheering, and atmosphere-building among the many on-site audience members that an offline event offers.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, storage medium, and computer program product for outputting information.
According to a first aspect of the present disclosure, there is provided a method for outputting information, comprising: calculating the number of pixels occupied by a text to be output; selecting participating audience members from an audience set according to the number; arranging the avatar of each participating audience member along the stroke shapes of the text to generate bullet-screen information; and outputting the bullet-screen information in response to detecting a display opportunity.
According to a second aspect of the present disclosure, there is provided an apparatus for outputting information, comprising: a calculation unit configured to calculate the number of pixels occupied by a text to be output; a selection unit configured to select participating audience members from an audience set according to the number; an arrangement unit configured to arrange the avatar of each participating audience member along the stroke shapes of the text to generate bullet-screen information; and an output unit configured to output the bullet-screen information in response to detecting a display opportunity.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect.
According to the method and apparatus for outputting information provided by the present disclosure, bullet-screen information is generated from audience avatars, simulating the glow-stick sign-raising of offline events inside the live room. This improves the atmosphere of the online live room, strengthens the closeness between fans, and gives fans an opportunity to be seen.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a method for outputting information according to the present disclosure;
FIG. 3 is a flow chart of yet another embodiment of a method for outputting information according to the present disclosure;
FIG. 4a and FIG. 4b are schematic diagrams of an application scenario of the method for outputting information according to the present disclosure;
FIG. 5 is a schematic structural diagram of one embodiment of an apparatus for outputting information according to the present disclosure;
FIG. 6 is a schematic diagram of a computer system suitable for implementing embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods of the present disclosure for outputting information or apparatuses for outputting information may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a live broadcast type application, a web browser application, a shopping type application, a search type application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting video playback, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server providing various services, for example a backend live-streaming server that supports the live rooms displayed on the terminal devices 101, 102, 103. The backend live-streaming server can analyze and process received data such as interaction requests and feed the processing results (e.g., bullet-screen information) back to the terminal devices.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module. No specific limitation is imposed here. The server may also be a server of a distributed system or a server incorporating a blockchain; it may likewise be a cloud server, or an intelligent cloud-computing server or intelligent cloud host with artificial-intelligence technology.
It should be noted that, the method for outputting information provided by the embodiments of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for outputting information is generally provided in the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information according to the present disclosure is shown. The method for outputting information comprises the following steps:
In step 201, the number of pixels occupied by the text to be output is calculated.
In this embodiment, the execution subject of the method for outputting information (e.g., the server shown in FIG. 1) may issue a text for interaction, such as a performer's name or a set of digits. As shown in FIG. 4a, sampling can be performed on a text such as the words "mountain river", and the number of pixels that actually need to be lit is calculated under the rule that a 16x16 pixel grid can display one Chinese character or two letters. In the example, 53 pixels are required for the full display (the dots other than the central point M in FIG. 4a).
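The pixel-counting step can be sketched in Python. This is a toy illustration rather than the patent's implementation: the 16x16-per-character grid described above is shrunk to 3x3 glyph bitmaps, and all names (`TOY_GLYPHS`, `count_lit_pixels`) are invented for the example.

```python
# Toy stand-in for the 16x16-per-character font raster described above:
# each glyph is a small bitmap whose "1" cells are pixels to light.
TOY_GLYPHS = {
    "A": ["010",
          "101",
          "111"],
    "B": ["110",
          "110",
          "111"],
}

def count_lit_pixels(text, glyphs=TOY_GLYPHS):
    """Number of pixels that must be lit to display `text` (step 201)."""
    return sum(row.count("1") for ch in text for row in glyphs[ch])
```

With a real 16x16 font raster the same counting loop would yield figures like the 53 pixels of the example above.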
Step 202, selecting participating audience members from a set of audience members based on the number.
In this embodiment, the calculated number of pixels is the number of avatar "lights" in the bullet-screen information simulating on-site glow-stick sign-raising. Audience members can apply to participate in the interaction using either their default avatar or a newly uploaded one (the avatar must pass review and comply with applicable regulations). A viewer can click the glow-stick sign-raising application button in the live room, open the camera or album, select a picture or a video of up to 3 s, and upload it as the avatar. After a successful upload, the live room prompts the viewer to wait for the interaction with the performer and raise signs together.
If the number of audience members applying to participate in the interactive activity exceeds the required number of pixels, the participating audience members can be selected from among them, either at random or in the order in which the applications were received.
If the number of applicants has not reached the required number of pixels, the method keeps waiting for new audience members to join, and step 203 is not performed until the required number is reached. Alternatively, additional audience members may be selected at random to make up the number.
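The selection logic of step 202 can be sketched as follows. The function names and the first-come-first-served policy are one possible reading of the text above, not a definitive implementation; returning `None` models "keep waiting and defer step 203".

```python
import random

def select_participants(applicants, required):
    """First-come-first-served selection (step 202). Returns None while
    fewer applicants than required pixels have joined, so the caller
    keeps waiting for new audience members and defers step 203."""
    if len(applicants) < required:
        return None
    return applicants[:required]

def select_participants_random(applicants, required, seed=None):
    """Random-selection variant mentioned in the text."""
    if len(applicants) < required:
        return None
    return random.Random(seed).sample(applicants, required)
```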
Step 203, arranging the avatar of each participating audience member along the stroke shapes of the text to generate bullet-screen information.
In this embodiment, each avatar represents one pixel used to compose the strokes of the text, as shown in FIG. 4a. The avatars can be arranged at random or according to a certain rule. For example, the avatars can be arranged by color and brightness so that dark pixels are scattered across different positions, preventing a stroke from becoming illegible because dark pixels cluster together.
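Mapping avatars onto lit pixels might look like this minimal sketch; the names are illustrative, and a real system would operate on a 16x16-per-character raster rather than a toy bitmap.

```python
def lit_coordinates(glyph_rows):
    """(row, col) of every lit pixel, scanned top-to-bottom, left-to-right."""
    return [(r, c)
            for r, row in enumerate(glyph_rows)
            for c, bit in enumerate(row) if bit == "1"]

def assign_avatars(glyph_rows, avatars):
    """Assign one avatar per lit pixel to compose the text's strokes (step 203)."""
    coords = lit_coordinates(glyph_rows)
    if len(avatars) < len(coords):
        raise ValueError("not enough participating audience members")
    return dict(zip(coords, avatars))
```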
Step 204, outputting the bullet-screen information in response to detecting a display opportunity.
In this embodiment, when the number of applicants reaches the number required to light the text, there are several strategies for choosing the sign-raising display opportunity:
Recognizing pictures or sounds extracted from the live video stream, and triggering the glow-stick sign-raising display at the climax of a song. All viewers in the live room then see the text pattern light up, composed of the avatars of the audience members who applied to raise signs.
A target scene may be detected by image recognition from pictures extracted from the live video stream and used as the display opportunity, for example when a lifting platform on stage rises. Keywords may also be detected from the extracted sound by speech recognition, with the display triggered when a keyword is detected. Keywords may be preset from the lyrics that audiences commonly sing along to at a live concert.
On-site staff trigger the sign-raising opportunity: this suits scenarios where multiple performers share the same stage; as the camera turns, a staff member triggers the glow-stick sign-raising display for the corresponding performer.
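The display-opportunity strategies above (lyric keywords recognized in the audio, plus a manual staff trigger) can be condensed into one sketch; `should_trigger` and its parameters are invented for illustration, and a real system would feed it the output of a speech-recognition pipeline.

```python
def should_trigger(transcript_words, keywords, staff_signal=False):
    """Display opportunity (step 204): a preset lyric keyword was
    recognized in the live audio, or on-site staff signalled manually."""
    return staff_signal or any(w in keywords for w in transcript_words)
```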
The lit text may be output in the form of bullet-screen information.
The method provided by this embodiment improves the atmosphere of an online concert live room, strengthens the closeness between fans, and gives fans an opportunity to be seen.
In some optional implementations of this embodiment, arranging the avatar of each participating audience member along the stroke shapes of the text to generate the bullet-screen information includes: obtaining the geographic positions of the participating audience members; calculating a position weight for each participating audience member according to the geographic positions and the live venue position; and, for each participating audience member, assigning the position of that member's avatar in the bullet-screen information according to the member's position weight, so that the larger the position weight, the closer the avatar is to the center of the bullet-screen information.
The server can compute, for each glow-stick sign-raising applicant, the straight-line distance between the viewer's geographic position (obtained with the user's location authorization, or derived from the user's network IP, base station, and the like) and the performer's current live venue (e.g., Shenzhen XX company, point M in FIG. 4a). The distance is then quantized into a position weight in [0, 1], which is later used to determine the viewer's actual position in the lit text; as shown in FIG. 4a, viewer B is closer to the live venue than viewer A. The larger the position weight, the closer the avatar is to the center of the bullet-screen information. This arrangement shortens the perceived distance between the audience and the venue, increasing the sense of participation and improving the user experience.
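One plausible way to turn distance into a position weight in [0, 1] is sketched below; the great-circle (haversine) approximation and the normalization that gives the nearest viewer weight 1 are assumptions for illustration, not the patent's formula.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def position_weights(viewer_locs, venue):
    """Quantize each viewer's distance to the venue into a [0, 1] weight;
    nearer viewers receive larger weights."""
    dists = {v: haversine_km(*loc, *venue) for v, loc in viewer_locs.items()}
    dmax = max(dists.values()) or 1.0
    return {v: 1.0 - d / dmax for v, d in dists.items()}
```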
In some optional implementations of this embodiment, arranging the avatar of each participating audience member along the stroke shapes of the text to generate the bullet-screen information includes: acquiring interaction records among the participating audience members; calculating a relationship weight for each participating audience member according to the interaction records; and, for each participating audience member, assigning the position of that member's avatar in the bullet-screen information according to the member's relationship weight, so that the larger the relationship weight, the closer the avatar is to the center of the bullet-screen information.
The glow-stick sign-raising applicants may already know or have interacted with one another, for example by being mutual friends, following each other, following one another one-way, or having private-chat or group-chat records. When they apply at the same time, the server computes, for each applying user, an aggregate over all other applying users (for example, accumulating all interaction counts) and finally quantizes it into a relationship weight in [0, 1], used later to determine that user's actual position in the lit text. The more a viewer interacts with other viewers, the higher the relationship weight; and the larger the relationship weight, the closer the avatar is to the center of the bullet-screen information. The reward for actively interacting viewers is an avatar close to the center, which promotes interaction among audience members and raises their enthusiasm for interacting.
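A minimal sketch of the relationship weight, assuming interaction records arrive as viewer pairs and that "accumulating" means simple counting; normalizing by the top count is one illustrative way to land in [0, 1].

```python
from collections import Counter

def relationship_weights(interactions):
    """interactions: iterable of (viewer_a, viewer_b) records.
    Each viewer's raw score is the total number of interactions they
    appear in; scores are normalized into [0, 1] by the largest score."""
    counts = Counter()
    for a, b in interactions:
        counts[a] += 1
        counts[b] += 1
    top = max(counts.values(), default=1)
    return {v: c / top for v, c in counts.items()}
```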
Alternatively, a weighted sum of the position weight and the relationship weight may be computed as a total weight, and the avatars arranged from the center of the bullet-screen information outward in descending order of total weight. This arrangement considers both the viewers' locations and their interactivity: it draws fans emotionally closer while also improving the concert atmosphere.
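The weighted-sum placement could be sketched as below; the equal 0.5/0.5 split (`alpha`) and the centroid-based notion of "center" are assumptions for illustration.

```python
def center_out_positions(coords):
    """Order pixel coordinates by squared distance from the bitmap centroid."""
    cy = sum(r for r, _ in coords) / len(coords)
    cx = sum(c for _, c in coords) / len(coords)
    return sorted(coords, key=lambda p: (p[0] - cy) ** 2 + (p[1] - cx) ** 2)

def place_by_total_weight(coords, pos_w, rel_w, alpha=0.5):
    """Viewers with a larger total weight alpha*position + (1-alpha)*relationship
    are assigned coordinates closer to the center of the bullet-screen text."""
    order = sorted(
        pos_w,
        key=lambda v: alpha * pos_w[v] + (1 - alpha) * rel_w.get(v, 0.0),
        reverse=True,
    )
    return dict(zip(order, center_out_positions(coords)))
```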
In some optional implementations of this embodiment, arranging the avatar of each participating audience member along the stroke shapes of the text to generate the bullet-screen information includes: assigning the avatars of participating audience members who share interaction records to adjacent positions. As shown in FIG. 4a, viewers C and D have previously interacted, so their avatars are assigned to adjacent positions. This strengthens the closeness between fans, improves the interaction effect, and lets the audience feel present at the scene and enjoy the on-site effect.
In some optional implementations of this embodiment, selecting the participating audience members from the audience set according to the number includes: issuing the text of the interactive activity; receiving an interaction request from at least one audience member, the interaction request comprising a geographic position and either a picture or a video; selecting the required number of participating audience members in the order in which the interaction requests were received; and using the picture or video in each participating audience member's interaction request as that member's avatar. The server issues in advance the text content from which bullet-screen information can be generated, and audience members who see it can sign up to participate. Clicking the participation option in the live room pops up a window for uploading a picture or a video, and the viewer confirms submission after uploading. The viewer can also submit a geographic position manually, or authorize the server to obtain it via GPS positioning, the IP address, and the like. The live-streaming client then generates an interaction request from the picture or video and the geographic position and sends it to the server. The server selects the required number of participating audience members in the order in which the interaction requests were received and uses the submitted pictures or videos as the participants' avatars. This prevents missing or overly dark audience avatars from degrading the generated bullet-screen information. In addition, submitting a short video yields dynamic bullet-screen information: each uploaded video plays automatically when the bullet screen pops up.
With further reference to FIG. 3, a flow 300 of yet another embodiment of a method for outputting information is shown. The flow 300 of the method for outputting information comprises the following steps:
In step 301, the number of pixels occupied by the text to be output is calculated.
Step 302, selecting participating audience members from the audience set according to the number.
Step 303, arranging the avatar of each participating audience member along the stroke shapes of the text to generate bullet-screen information.
Step 304, outputting the bullet-screen information in response to detecting a display opportunity.
Steps 301-304 are substantially identical to steps 201-204 and are therefore not described again.
Step 305, in response to detecting that the avatar of a participating audience member in the bullet-screen information is clicked, outputting operation options for interacting with the clicked participating audience member.
In this embodiment, while the glow-stick sign-raising is being displayed, any viewer may click the avatar of any participant in it, and operation options for interacting with the clicked participating audience member are output. The clicking viewer then selects an operation from the options to interact with the clicked participant.
This strengthens the connection and cohesion between fans and makes the atmosphere of the live room livelier.
In some optional implementations of this embodiment, the operation options include at least one of: view profile, follow, like, private chat, favorite, mic-linking, group photo, and closeness display. Enhancing interaction among audience members in these ways improves the viewing experience in all respects, allowing an online performance to be experienced immersively, with the same feeling as being on site.
In some optional implementations of this embodiment, the method further includes: if, among the participating audience members, there are adjacent viewers whose geographic distance is smaller than a first preset value, outputting adjacency relation information to those adjacent viewers; and if there are interactive viewers whose interaction count is larger than a second preset value, outputting interaction relation information to those interactive viewers. If other participating viewers are in close contact with the current viewer or geographically nearby, the viewer is also prompted that XX is raising a glow-stick sign together with them. This makes it easy for viewers to find others closely related to them, and their interaction can enhance the live atmosphere.
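The two threshold checks can be sketched as follows, assuming pairwise distances and interaction counts are already available; all names and the tuple-based prompt format are illustrative.

```python
def relation_prompts(distances_km, interaction_counts, d_max_km, n_min):
    """Emit prompts for viewer pairs that are geographically close
    (distance below the first preset value) or frequently interacting
    (count above the second preset value)."""
    prompts = []
    for pair, d in distances_km.items():
        if d < d_max_km:
            prompts.append(("adjacent", pair))
    for pair, n in interaction_counts.items():
        if n > n_min:
            prompts.append(("interactive", pair))
    return prompts
```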
With continued reference to FIG. 4a and FIG. 4b, which are a schematic illustration of an application scenario of the method for outputting information according to this embodiment: in this scenario, the server announces that the glow-stick sign-raising content to be generated is "mountain river" and calculates that 53 participating audience members are required. A viewer who wants to participate in the sign-raising activity submits an interaction request; if fewer than 53 people have currently joined, the server admits the viewer to the current round, and if 53 has already been exceeded, the viewer is advised to apply earlier for the next round. Viewers admitted to the activity may upload an image or video as their avatar. The server may also obtain the participants' geographic positions and interaction records and then calculate each participant's position weight and relationship weight. If some viewers upload unacceptable images or videos, other viewers can fill the vacated slots. After collecting the avatars of the 53 participating audience members, the server arranges them to generate the bullet-screen information for the text "mountain river". Upon detecting XXX in the lyrics of the currently live audio, or a display opportunity triggered by on-site staff, the glow-stick sign-raising is shown in the live room as a bullet screen. Any viewer (including a non-participating viewer) can click any avatar in the sign-raising figure and then interact through viewing, following, liking, private chat, favoriting, mic-linking, group photo, closeness display, and the like.
With further reference to FIG. 5, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in FIG. 2 and is particularly applicable to various electronic devices.
As shown in FIG. 5, the apparatus 500 for outputting information of this embodiment includes: a calculation unit 501, a selection unit 502, an arrangement unit 503, and an output unit 504. The calculation unit 501 is configured to calculate the number of pixels occupied by a text to be output; the selection unit 502 is configured to select participating audience members from an audience set according to the number; the arrangement unit 503 is configured to arrange the avatar of each participating audience member along the stroke shapes of the text to generate bullet-screen information; and the output unit 504 is configured to output the bullet-screen information in response to detecting a display opportunity.
In this embodiment, for the specific processing of the calculation unit 501, the selection unit 502, the arrangement unit 503, and the output unit 504 of the apparatus 500 for outputting information, reference may be made to steps 201, 202, 203, and 204 in the embodiment corresponding to FIG. 2.
In some optional implementations of this embodiment, the apparatus 500 further includes an interaction unit (not shown in the drawings) configured to: in response to detecting that the avatar of a participating audience member in the bullet-screen information is clicked, output operation options for interacting with the clicked participating audience member.
In some optional implementations of this embodiment, the arrangement unit 503 is further configured to: obtain the geographic positions of the participating audience members; calculate a position weight for each participating audience member according to the geographic positions and the live venue position; and, for each participating audience member, assign the position of that member's avatar in the bullet-screen information according to the member's position weight, so that the larger the position weight, the closer the avatar is to the center of the bullet-screen information.
In some optional implementations of the present embodiment, the arrangement unit 503 is further configured to: acquire interaction records among the participating audience members; calculate a relationship weight for each participating audience member according to the interaction records; and, for each participating audience member, assign the position of that member's avatar in the bullet-screen information according to the member's relationship weight, so that the larger the relationship weight, the closer the avatar is to the center of the bullet-screen information.
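One simple way to realize the relationship weight is to count how many interactions each member appears in; this counting definition is an assumption for illustration, since the patent does not define the weight concretely:

```python
from collections import Counter

def relation_weights(interaction_records):
    """Relationship weight of each participating audience member: here taken
    to be the number of interactions the member appears in (assumed form).
    Each record is a pair of member ids that interacted with each other."""
    counts = Counter()
    for a, b in interaction_records:
        counts[a] += 1
        counts[b] += 1
    return counts

records = [("a", "b"), ("a", "c"), ("a", "b")]   # hypothetical interaction log
weights = relation_weights(records)
# "a" has the most interactions, so its avatar would sit nearest the center.
```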
In some optional implementations of the present embodiment, the arrangement unit 503 is further configured to: place the avatars of participating audience members who have interaction records with each other at adjacent positions.
In some optional implementations of the present embodiment, the selection unit 502 is further configured to: publish the text of an interactive activity; receive an interaction request from at least one audience member, the interaction request comprising a geographic position and either a picture or a video; select the required number of participating audience members in the order in which the interaction requests are received; and use the picture or video in each participating audience member's interaction request as that member's avatar.
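The selection-by-arrival-order step might look like the following sketch; the request fields (`received_at`, `media`, `geo`) are assumed names for illustration, not identifiers from the patent:

```python
def select_participants(requests, needed):
    """Take the first `needed` interaction requests in arrival order and use
    the attached picture/video as each selected member's avatar."""
    ordered = sorted(requests, key=lambda r: r["received_at"])
    chosen = ordered[:needed]
    return {r["viewer"]: r["media"] for r in chosen}

requests = [
    {"viewer": "u2", "received_at": 2, "media": "u2.jpg", "geo": (1, 1)},
    {"viewer": "u1", "received_at": 1, "media": "u1.mp4", "geo": (0, 0)},
    {"viewer": "u3", "received_at": 3, "media": "u3.jpg", "geo": (2, 2)},
]
participants = select_participants(requests, needed=2)  # u1 and u2 arrive first
```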
In some alternative implementations of the present embodiment, the operation options include at least one of: viewing the profile, following, liking, private chat, favoriting, co-streaming (mic linking), taking a group photo, and displaying intimacy.
In some optional implementations of the present embodiment, the output unit 504 is further configured to: if, among the participating audience members, there are neighboring audience members whose geographic distance is smaller than a first preset value, output adjacency-relationship information to those neighboring audience members; and if there are interactive audience members whose number of interactions is larger than a second preset value, output interaction-relationship information to those interactive audience members.
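Both threshold checks above can be sketched directly; the pairwise planar distance and the counting rule are illustrative assumptions, since the patent only names the two preset values:

```python
import math
from collections import Counter

def neighbour_pairs(geo, max_dist):
    """Pairs of participating audience members whose geographic distance is
    below the first preset value; each pair would receive adjacency info."""
    members = sorted(geo)
    return [(a, b) for i, a in enumerate(members) for b in members[i + 1:]
            if math.hypot(geo[a][0] - geo[b][0], geo[a][1] - geo[b][1]) < max_dist]

def frequent_interactors(records, min_count):
    """Members whose interaction count exceeds the second preset value; they
    would receive interaction-relationship information."""
    counts = Counter(m for pair in records for m in pair)
    return {m for m, c in counts.items() if c > min_count}

geo = {"a": (0, 0), "b": (0, 1), "c": (10, 10)}          # hypothetical positions
near = neighbour_pairs(geo, max_dist=2.0)
chatty = frequent_interactors([("a", "b"), ("a", "c")], min_count=1)
```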
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of users' personal information comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of flow 200 or 300.
A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of flow 200 or 300.
A computer program product comprising a computer program that when executed by a processor implements the method of flow 200 or 300.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 may also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, for example, a method for outputting information. For example, in some embodiments, the method for outputting information may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for outputting information described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method for outputting information by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SoCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (18)

1. A method for outputting information, comprising:
Calculating the number of pixels occupied by the text to be output;
Selecting participating audience members from a set of audience members based on the number;
Arranging the avatars of the participating audience members in the stroke shape of the text to generate bullet-screen information, wherein the avatars are arranged according to their colors and brightness so that darker pixels are scattered across different positions rather than concentrated together, which would otherwise make a stroke hard to see;
Outputting the bullet-screen information in response to detecting a presentation opportunity;
wherein the presentation opportunity is detected by:
Extracting pictures or sounds from the live video stream for recognition, and triggering the presentation opportunity when a song climax is recognized.
2. The method of claim 1, wherein the method further comprises:
In response to detecting that the avatar of a participating audience member in the bullet-screen information is clicked, outputting operation options for interacting with the clicked participating audience member.
3. The method of claim 1, wherein arranging the avatars of the participating audience members in the stroke shape of the text to generate bullet-screen information comprises:
obtaining the geographic positions of the participating audience members;
calculating a position weight for each participating audience member according to the geographic positions of the participating audience members and the position of the live venue;
for each participating audience member, assigning the position of that member's avatar in the bullet-screen information according to the member's position weight, so that the larger the position weight, the closer the avatar is to the center of the bullet-screen information.
4. The method of claim 1, wherein arranging the avatars of the participating audience members in the stroke shape of the text to generate bullet-screen information comprises:
acquiring interaction records among the participating audience members;
calculating a relationship weight for each participating audience member according to the interaction records;
for each participating audience member, assigning the position of that member's avatar in the bullet-screen information according to the member's relationship weight, so that the larger the relationship weight, the closer the avatar is to the center of the bullet-screen information.
5. The method of claim 4, wherein arranging the avatars of the participating audience members in the stroke shape of the text to generate bullet-screen information comprises:
placing the avatars of participating audience members who have interaction records with each other at adjacent positions.
6. The method of claim 1, wherein selecting participating audience members from the set of audience members based on the number comprises:
publishing the text of an interactive activity;
receiving an interaction request from at least one audience member, wherein the interaction request comprises a geographic position and either a picture or a video;
selecting the required number of participating audience members in the order in which the interaction requests are received;
using the picture or video in each participating audience member's interaction request as that member's avatar.
7. The method of claim 2, wherein the operation options include at least one of: viewing the profile, following, liking, private chat, favoriting, co-streaming (mic linking), taking a group photo, and displaying intimacy.
8. The method of claim 1, wherein the method further comprises:
if, among the participating audience members, there are neighboring audience members whose geographic distance is smaller than a first preset value, outputting adjacency-relationship information to the neighboring audience members;
if there are interactive audience members whose number of interactions is larger than a second preset value, outputting interaction-relationship information to the interactive audience members.
9. An apparatus for outputting information, comprising:
A calculation unit configured to calculate the number of pixels occupied by a text to be output;
A selection unit configured to select participating audience members from a set of audience members according to the number;
An arrangement unit configured to arrange the avatars of the participating audience members in the stroke shape of the text to generate bullet-screen information, wherein the avatars are arranged according to their colors and brightness so that darker pixels are scattered across different positions rather than concentrated together, which would otherwise make a stroke hard to see;
An output unit configured to output the bullet-screen information in response to detecting a presentation opportunity;
wherein the presentation opportunity is detected by:
Extracting pictures or sounds from the live video stream for recognition, and triggering the presentation opportunity when a song climax is recognized.
10. The apparatus of claim 9, wherein the apparatus further comprises an interaction unit configured to:
In response to detecting that the avatar of a participating audience member in the bullet-screen information is clicked, outputting operation options for interacting with the clicked participating audience member.
11. The apparatus of claim 9, wherein the arrangement unit is further configured to:
obtain the geographic positions of the participating audience members;
calculate a position weight for each participating audience member according to the geographic positions of the participating audience members and the position of the live venue;
for each participating audience member, assign the position of that member's avatar in the bullet-screen information according to the member's position weight, so that the larger the position weight, the closer the avatar is to the center of the bullet-screen information.
12. The apparatus of claim 9, wherein the arrangement unit is further configured to:
acquire interaction records among the participating audience members;
calculate a relationship weight for each participating audience member according to the interaction records;
for each participating audience member, assign the position of that member's avatar in the bullet-screen information according to the member's relationship weight, so that the larger the relationship weight, the closer the avatar is to the center of the bullet-screen information.
13. The apparatus of claim 12, wherein the arrangement unit is further configured to:
place the avatars of participating audience members who have interaction records with each other at adjacent positions.
14. The apparatus of claim 9, wherein the selection unit is further configured to:
publish the text of an interactive activity;
receive an interaction request from at least one audience member, wherein the interaction request comprises a geographic position and either a picture or a video;
select the required number of participating audience members in the order in which the interaction requests are received;
use the picture or video in each participating audience member's interaction request as that member's avatar.
15. The apparatus of claim 10, wherein the operation options comprise at least one of: viewing the profile, following, liking, private chat, favoriting, co-streaming (mic linking), taking a group photo, and displaying intimacy.
16. The apparatus of claim 9, wherein the output unit is further configured to:
if, among the participating audience members, there are neighboring audience members whose geographic distance is smaller than a first preset value, output adjacency-relationship information to the neighboring audience members;
if there are interactive audience members whose number of interactions is larger than a second preset value, output interaction-relationship information to the interactive audience members.
17. An electronic device, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202210079001.4A 2022-01-24 2022-01-24 Method and device for outputting information Active CN114501050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210079001.4A CN114501050B (en) 2022-01-24 2022-01-24 Method and device for outputting information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210079001.4A CN114501050B (en) 2022-01-24 2022-01-24 Method and device for outputting information

Publications (2)

Publication Number Publication Date
CN114501050A CN114501050A (en) 2022-05-13
CN114501050B true CN114501050B (en) 2024-04-19

Family

ID=81474201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210079001.4A Active CN114501050B (en) 2022-01-24 2022-01-24 Method and device for outputting information

Country Status (1)

Country Link
CN (1) CN114501050B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117232A (en) * 2015-09-08 2015-12-02 北京乐动卓越科技有限公司 Method and system for generating text image on user interface
CN105307044A (en) * 2015-10-29 2016-02-03 天脉聚源(北京)科技有限公司 Method and apparatus for displaying interaction information on video program
WO2016165566A1 (en) * 2015-04-13 2016-10-20 腾讯科技(深圳)有限公司 Barrage posting method and mobile terminal
CN106056653A (en) * 2016-06-14 2016-10-26 无锡天脉聚源传媒科技有限公司 Method and device for generating animation effect of interaction activity
CN107315555A (en) * 2016-04-27 2017-11-03 腾讯科技(北京)有限公司 Register method for information display and device
CN110913264A (en) * 2019-11-29 2020-03-24 北京达佳互联信息技术有限公司 Live data processing method and device, electronic equipment and storage medium
CN112188275A (en) * 2020-09-21 2021-01-05 北京字节跳动网络技术有限公司 Bullet screen generation method, bullet screen generation device, bullet screen generation equipment and storage medium
WO2021068652A1 (en) * 2019-10-10 2021-04-15 北京字节跳动网络技术有限公司 Method and apparatus for displaying animated object, electronic device, and storage medium


Also Published As

Publication number Publication date
CN114501050A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
US12028302B2 (en) Assistance during audio and video calls
US11247134B2 (en) Message push method and apparatus, device, and storage medium
EP3889912B1 (en) Method and apparatus for generating video
JP3172870U (en) System for providing and managing interactive services
US12021643B2 (en) Outputting emotes based on audience member expressions in large-scale electronic presentation
CN110570698A (en) Online teaching control method and device, storage medium and terminal
US12022136B2 (en) Techniques for providing interactive interfaces for live streaming events
WO2018068557A1 (en) Service object processing method, server, terminal and system
CN110795004B (en) Social method and device
WO2023279937A1 (en) Interaction method and apparatus based on live-streaming video, and device and storage medium
WO2022147221A1 (en) System and process for collaborative digital content generation, publication, distribution, and discovery
US20230079785A1 (en) Video clipping method and apparatus, computer device, and storage medium
CN112528052A (en) Multimedia content output method, device, electronic equipment and storage medium
CN114501103B (en) Live video-based interaction method, device, equipment and storage medium
JP5905685B2 (en) Communication system and server
CN113382269A (en) Live interview implementation method, system, equipment and storage medium
CN114501050B (en) Method and device for outputting information
CN113282770A (en) Multimedia recommendation system and method
JP6367748B2 (en) Recognition device, video content presentation system
CN114449301B (en) Item sending method, item sending device, electronic equipment and computer-readable storage medium
US20190386840A1 (en) Collaboration systems with automatic command implementation capabilities
CN113515336A (en) Live broadcast room joining method, live broadcast room creating method, live broadcast room joining device, live broadcast room creating device, live broadcast room equipment and storage medium
US20230105417A1 (en) Bullet-screen comment processing method and apparatus
CN115379250B (en) Video processing method, device, computer equipment and storage medium
KR20230078204A (en) Method for providing a service of metaverse based on based on hallyu contents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant