CN114501050A - Method and apparatus for outputting information - Google Patents

Method and apparatus for outputting information

Info

Publication number
CN114501050A
CN114501050A
Authority
CN
China
Prior art keywords
participating
audience
participating audience
audiences
bullet screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210079001.4A
Other languages
Chinese (zh)
Other versions
CN114501050B (en)
Inventor
冼钊铭
马饮泉
李鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202210079001.4A priority Critical patent/CN114501050B/en
Publication of CN114501050A publication Critical patent/CN114501050A/en
Application granted granted Critical
Publication of CN114501050B publication Critical patent/CN114501050B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a method and an apparatus for outputting information, and relates to the field of artificial intelligence, in particular to the field of live broadcasting. A specific implementation scheme is as follows: calculating the number of pixels occupied by text to be output; selecting participating audience members from an audience pool according to that number; arranging the head portrait of each participating audience member according to the stroke shape of the text to generate bullet screen information; and outputting the bullet screen information in response to detecting a presentation timing. This implementation eases the audience's shift from offline to online viewing while still letting viewers feel the interaction among one another online, deepening mutual familiarity and livening up the overall atmosphere of the live broadcast room.

Description

Method and apparatus for outputting information
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the field of live broadcasting, and more particularly to a method and apparatus for outputting information.
Background
At present, many concerts and performances that people enjoy have moved to an online live broadcasting mode: people can watch a concert at home, interact with the stars online in real time (for example, sending gifts, posting bullet-screen comments, or joining fan cheering), and appreciate the stars' performances. However, compared with watching on site, watching a live broadcast online still lacks the interaction, shouting, and atmosphere building among the many audience members present at the venue.
Disclosure of Invention
The present disclosure provides a method, apparatus, device, storage medium, and computer program product for outputting information.
According to a first aspect of the present disclosure, there is provided a method for outputting information, comprising: calculating the number of pixels occupied by text to be output; selecting participating audience members from an audience pool according to the number; arranging the head portrait of each participating audience member according to the stroke shape of the text to generate bullet screen information; and outputting the bullet screen information in response to detecting a presentation timing.
According to a second aspect of the present disclosure, there is provided an apparatus for outputting information, comprising: a calculation unit configured to calculate the number of pixels occupied by text to be output; a selection unit configured to select a participating audience from the audience set according to the number; the arranging unit is configured to arrange the head portrait of each participating audience according to the stroke shape of the text to generate bullet screen information; an output unit configured to output the bullet screen information in response to detecting a presentation timing.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the first aspect.
According to the method and apparatus for outputting information provided by embodiments of the disclosure, bullet screen information is generated from the head portraits of audience members, and the raising of glow signs at offline events is simulated in the live broadcast room. This improves the atmosphere of the online live broadcast room, brings fans closer together, and gives fans a chance to be seen.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram for one embodiment of a method for outputting information, according to the present disclosure;
FIG. 3 is a flow diagram of yet another embodiment of a method for outputting information according to the present disclosure;
FIGS. 4a and 4b are schematic diagrams of an application scenario of a method for outputting information according to the present disclosure;
FIG. 5 is a schematic block diagram illustrating one embodiment of an apparatus for outputting information according to the present disclosure;
FIG. 6 is a schematic block diagram of a computer system suitable for use with an electronic device implementing embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the disclosed method for outputting information or apparatus for outputting information may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a live application, a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, 103.
The terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting video playing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III, mpeg compression standard Audio Layer 3), MP4 players (Moving Picture Experts Group Audio Layer IV, mpeg compression standard Audio Layer 4), laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.
The server 105 may be a server providing various services, such as a background live server providing support for live rooms displayed on the terminal devices 101, 102, 103. The background live broadcast server can analyze and process the received data such as the interaction request and feed back the processing result (such as barrage information) to the terminal equipment.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein. The server may also be a server of a distributed system, or a server incorporating a blockchain. The server can also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology.
It should be noted that the method for outputting information provided by the embodiment of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for outputting information is generally disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for outputting information in accordance with the present disclosure is shown. The method for outputting information comprises the following steps:
step 201, calculating the number of pixels occupied by the text to be output.
In this embodiment, an executing agent of the method for outputting information (e.g., the server shown in fig. 1) may issue text for interaction, for example a star's name or a slogan. As shown in fig. 4a, a sampling calculation can be performed on text such as the character sample [ mountain and river ], and the number of lit pixel points actually required by that character sample is calculated according to the rule that a 16 x 16 pixel grid can display one Chinese character or two letters. The text [ mountain and river ] requires 53 lit pixels to be displayed completely (shown as the dots in fig. 4a other than the middle point M).
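A minimal illustrative sketch of this counting step, assuming the text is rasterized with a 16 x 16 dot-matrix style font via Pillow; the font file name and the binarization threshold are assumptions, not specified by the disclosure:

from PIL import Image, ImageDraw, ImageFont

def count_lit_pixels(text: str, font_path: str = "simhei.ttf", cell: int = 16) -> int:
    """Render `text` on a 16x16-per-character grid and count the lit dots.

    Each lit dot later corresponds to one participating viewer's head portrait.
    """
    canvas = Image.new("L", (cell * len(text), cell), 0)   # grayscale canvas, all dark
    font = ImageFont.truetype(font_path, cell)              # assumed 16 px font
    ImageDraw.Draw(canvas).text((0, 0), text, fill=255, font=font)
    return sum(1 for p in canvas.getdata() if p > 127)      # binarize and count lit dots

For example, count_lit_pixels("山河") would return a few dozen lit dots for the two characters, in the spirit of the 53 dots of the example above.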
Step 202, selecting participating audience members from the audience pool according to the number.
In this embodiment, the calculated number of pixel points is both the number of lit dots in the bullet screen information that simulates the on-site [ glow-sign raising ] scene and the number of audience members who can take part in the [ glow-sign raising ]. Audience members can apply to join the interactive activity, use their default head portrait, or re-upload a head portrait (the head portrait must pass content review and comply with national regulations). A viewer can click an apply-to-join button for the glow-sign raising in the live broadcast room, call up the camera or photo album, select a picture or a video of up to 3 seconds, and upload it as the head portrait. After a successful upload, the live broadcast room prompts the viewer (to interact with the star and raise the sign together).
If the number of audience members applying for the interactive activity exceeds the number of pixel points needed, participating audience members can be selected from them: enough participants can be chosen at random, or they can be chosen in the order of application.
If the number of applicants has not yet reached the required number of pixel points, the method continues to wait for new audience members to join until that number is reached, and then step 203 is executed. Alternatively, some audience members may be selected at random to make up the full number.
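A sketch of this selection logic, under the assumption that applicants are kept in an ordered list and that either first-come-first-served or random selection is used:

import random
from typing import Optional

def pick_participants(applicants: list, needed: int, randomize: bool = False) -> Optional[list]:
    """Select `needed` participating viewers once enough applications have arrived.

    Returns None while the pool is still short, so the caller keeps waiting
    for new applicants (this waiting policy is an assumption for illustration).
    """
    if len(applicants) < needed:
        return None                                  # keep waiting for more viewers
    if randomize:
        return random.sample(applicants, needed)     # random selection
    return applicants[:needed]                       # selection by application order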
And step 203, arranging the head portrait of each participating audience according to the stroke shape of the text to generate bullet screen information.
In this embodiment, each head portrait serves as one pixel point composing the strokes of the text, as shown in fig. 4a. The head portraits can be arranged in random order, or according to a certain rule. For example, arranging them by color, brightness and the like disperses darker pixel points across different positions, so that no stroke becomes unclear because dark head portraits are concentrated in one place.
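One way to realize such a brightness-aware arrangement is to sort head portraits by mean luminance and then interleave them over the stroke positions so that dark images never cluster; a sketch follows, in which the luminance measure and the interleaving rule are assumptions:

from PIL import Image, ImageStat

def arrange_by_brightness(avatars: list, positions: list) -> dict:
    """Assign one head portrait per stroke pixel so darker portraits are spread out.

    Portraits are sorted dark-to-bright, then dealt out alternately from both
    ends of the sorted list, which interleaves dark and bright images along
    the stroke positions (assumed to be listed in raster order).
    """
    order = sorted(range(len(avatars)),
                   key=lambda i: ImageStat.Stat(avatars[i].convert("L")).mean[0])
    interleaved, lo, hi = [], 0, len(order) - 1
    while lo <= hi:                                  # alternate darkest / brightest
        interleaved.append(order[lo]); lo += 1
        if lo <= hi:
            interleaved.append(order[hi]); hi -= 1
    return {pos: avatars[idx] for pos, idx in zip(positions, interleaved)}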
And step 204, responding to the detected display opportunity, and outputting bullet screen information.
In this embodiment, once the number of viewers applying for the lit-up text meets the requirement, there are several strategies for choosing the timing of the sign raising:
Images or audio extracted from the live video stream are recognized, and when the climax of the song arrives, the presentation timing of the [ glow-sign raising ] is triggered. At this moment, every viewer in the live broadcast room can see the Chinese characters light up, with the head portraits of the viewers who applied arranged over the characters.
A target scene can be detected by image recognition from frames extracted from the live video stream and used as the presentation timing, for example a stage crane being raised. Keywords may also be detected from the extracted audio by speech recognition, with the presentation triggered when a keyword is detected. The keywords can be preset from the lyrics; lyrics that the audience usually sings along to in a big chorus at a live concert are good candidates.
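A sketch of the keyword-triggered timing, assuming an external speech-to-text step already yields lyric transcripts (the recognizer itself is outside this snippet, and the keyword values are illustrative):

CHORUS_KEYWORDS = {"山河", "一起唱"}    # preset from the song's lyrics (assumed values)

def should_raise_signs(transcript: str, keywords: set = CHORUS_KEYWORDS) -> bool:
    """Return True when a preset lyric keyword appears in the recognized audio,
    which is treated as the presentation timing for the glow-sign bullet screen."""
    return any(k in transcript for k in keywords)

# e.g. should_raise_signs("大家一起唱这首歌") -> True would trigger the bullet screen output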
An on-site staff member triggers the sign-raising timing: this suits the case where several performers appear on the same stage, and as the camera cuts between them, the staff member triggers the presentation timing of the [ glow-sign raising ] for the corresponding performer.
The lit-up text may then be output in the form of bullet screen information.
The method provided by this embodiment of the disclosure improves the atmosphere of an online concert live broadcast room, strengthens the sense of closeness among fans, and gives fans a chance to be seen.
In some optional implementations of this embodiment, generating the barrage information by arranging the avatar of each participating audience in a stroke shape of the text includes: obtaining the geographic locations of the number of participating audience members; calculating the position weight of each participating audience according to the geographic positions and the live broadcast site positions of the participating audiences; and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the position weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the position weight is larger.
The server can take the star's position at the current live broadcast site as the center point (e.g., Shenzhen XX company, point M in fig. 4a) and, using the geographic position information of each viewer applying for the [ glow-sign raising ] (obtained with the user's location authorization, or derived from the user's network IP, base station, and the like), calculate the straight-line distance between that viewer and the venue. The distance is then quantized to a position weight in [0,1], a parameter reflecting how far the viewer is from the star, which later determines the viewer's actual position within the lit-up text; as shown in fig. 4a, viewer B is closer to the live scene than viewer A. The greater the position weight, the closer the head portrait is placed to the center of the bullet screen information. This arrangement shortens the perceived distance between the audience and the scene, giving viewers a stronger sense of participation and improving user experience.
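A sketch of the position-weight computation, assuming a haversine great-circle distance and a simple linear normalization to [0, 1]; the normalization constant is an assumption:

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def position_weight(viewer: tuple, venue: tuple, max_km: float = 2000.0) -> float:
    """Quantize the viewer-to-venue distance to [0, 1]; nearer viewers get larger weights."""
    d = haversine_km(*viewer, *venue)
    return max(0.0, 1.0 - min(d, max_km) / max_km)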
In some optional implementations of this embodiment, generating the barrage information by arranging the avatar of each participating audience in a stroke shape of the text includes: acquiring interaction records among the number of participating audiences; calculating the relation weight of each participating audience according to the interaction records; and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the relation weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the relation weight is larger.
Viewers applying for the [ glow-sign raising ] may already know or have interacted with one another; for example, Xiao Ming and Xiao Jie may be mutual friends, follow each other or one-way, or have private chat or group chat records, and both apply at the same time. The server then computes, for each applicant, their interactions with all other applicants (for example, by accumulating the total number of interactions) and quantizes the result into a relation weight in [0,1], used later to decide the applicant's actual position within the lit-up text. The more a viewer has interacted with other viewers, the higher the viewer's relation weight; and the greater the relation weight, the closer the head portrait is placed to the center of the bullet screen information. Viewers who interact actively are rewarded with a head portrait near the center, which promotes interaction among viewers and raises their enthusiasm for interacting.
Alternatively, a weighted sum of the position weight and the relation weight may be calculated as a total weight, and the head portraits arranged from the center of the bullet screen information outward in descending order of total weight. This arrangement considers both where viewers are and how much they interact: it draws fans closer emotionally while also lifting the atmosphere of the concert.
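A sketch that combines the two weights and places head portraits from the center of the text outward; the 0.5/0.5 mixing factor, the center-distance ordering of positions, and the per-viewer record keys ("position_weight", "relation_weight", "avatar") are assumptions for illustration:

def place_avatars(viewers: list, positions: list, alpha: float = 0.5) -> dict:
    """Sort viewers by total weight and stroke pixels by distance from the text center,
    then pair them so higher-weight viewers sit closer to the center."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    ordered_pos = sorted(positions, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    ordered_viewers = sorted(
        viewers,
        key=lambda v: alpha * v["position_weight"] + (1 - alpha) * v["relation_weight"],
        reverse=True,                    # larger total weight first -> closer to center
    )
    return {pos: v["avatar"] for pos, v in zip(ordered_pos, ordered_viewers)}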
In some optional implementations of this embodiment, arranging the head portrait of each participating audience member according to the stroke shape of the text to generate the bullet screen information includes: distributing the head portraits of participating audience members that have interaction records in adjacent positions. As shown in fig. 4a, viewers C and D have interacted before, so their head portraits are placed next to each other. This strengthens the closeness among fans, improves the interaction effect, and gives viewers the feeling of being on site.
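A sketch of keeping mutually interacting viewers adjacent: pairs with an interaction record are placed on consecutive stroke positions before the remaining viewers are filled in; the greedy pairing order is an assumption:

def place_with_friendship(viewers: list, friends: set, positions: list) -> dict:
    """Place viewers so that any pair with an interaction record lands on
    neighbouring positions in the (e.g., center-outward) position order.

    `friends` is assumed to be a set of frozensets, each holding two viewer ids.
    """
    placed, layout, idx = set(), {}, 0
    for a, b in (tuple(p) for p in friends):          # interacting pairs first, side by side
        if a in placed or b in placed or idx + 1 >= len(positions):
            continue
        layout[positions[idx]], layout[positions[idx + 1]] = a, b
        placed.update((a, b)); idx += 2
    for v in viewers:                                  # everyone else fills the rest
        if v not in placed and idx < len(positions):
            layout[positions[idx]] = v; placed.add(v); idx += 1
    return layout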
In some optional implementations of this embodiment, selecting the participating audience members from the audience pool according to the number includes: publishing the text of the interactive activity; receiving an interaction request of at least one viewer, wherein the interaction request includes a geographic location and either a picture or a video; selecting the required number of participating audience members in the order in which the interaction requests are received; and taking the picture or video in each participating audience member's interaction request as that member's head portrait. The server publishes in advance the text content from which the bullet screen information can be generated, and viewers can sign up to participate after seeing it. Clicking the participation option in the live broadcast room pops up a window for uploading a picture or video, and the viewer confirms submission after the upload. The viewer can also submit the geographic location manually, or authorize the server to obtain it through GPS positioning, the IP address, and the like. The live broadcast software then generates an interaction request from the picture or video and the geographic location and sends it to the server, which selects the required number of participating audience members in the order the requests arrive. The pictures or videos submitted by participating audience members are used as their head portraits. This prevents missing or overly dark head portraits from making the generated bullet screen information less vivid. In addition, dynamic bullet screen information can be generated from submitted short videos, with each viewer's uploaded video played automatically when the bullet screen appears.
With further reference to fig. 3, a flow 300 of yet another embodiment of a method for outputting information is shown. The process 300 of the method for outputting information includes the steps of:
step 301, calculating the number of pixels occupied by the text to be output.
Step 302, selecting a participating audience from the audience pool based on the amount.
Step 303, arranging the head portrait of each participating audience according to the stroke shape of the text to generate barrage information.
And 304, responding to the detected display opportunity, and outputting bullet screen information.
The steps 301-304 are substantially the same as the steps 201-204, and therefore will not be described again.
And 305, responding to the detected head portrait of the participating audience in the bullet screen information being clicked, and outputting operation options interacted with the clicked participating audience.
In this embodiment, while the [ glow-sign raising ] is being presented, any viewer can click the head portrait of any participant in the [ glow-sign raising ], and operation options for interacting with the clicked participant are output. The clicking viewer then selects an operation from the options to interact with that participant.
This strengthens the connections among fans, improves cohesion among fans, and makes the atmosphere of the live broadcast room livelier.
In some optional implementations of this embodiment, the operation options include at least one of: viewing, following, liking, private chat, favoriting, mic-linking, group photo, and intimacy display. These various forms strengthen the interaction among viewers, improve the viewing experience in all respects, and let viewers experience the online performance immersively, as if they were on site.
In some optional implementations of this embodiment, the method further includes: if, among the participating audience members, there are neighboring viewers whose geographic positions are less than a first preset value apart, outputting adjacency information to those viewers; and if there are interacting viewers whose number of interactions exceeds a second preset value, outputting interaction-relationship information to those viewers. If other participating viewers are in close contact with the current viewer, or are geographically nearby, the viewer is also prompted, for example, that XX is raising glow signs with you. This makes it easy for viewers to find others closely related to them, and the resulting interaction enhances the live broadcast atmosphere.
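A sketch of these two prompts, reusing the haversine helper sketched earlier; the concrete thresholds stand in for the first and second preset values and, like the per-viewer record keys, are assumptions:

def neighbour_prompts(viewers: list, km_threshold: float = 5.0,
                      interaction_threshold: int = 3) -> list:
    """Emit adjacency / interaction prompts for participating viewers.

    Each viewer record is assumed to carry `name`, `latlon`, and a mapping
    `interactions` from counterpart name to interaction count.
    """
    prompts = []
    for i, a in enumerate(viewers):
        for b in viewers[i + 1:]:
            if haversine_km(*a["latlon"], *b["latlon"]) < km_threshold:
                prompts.append(f"{a['name']} and {b['name']} are raising glow signs near each other")
            if a["interactions"].get(b["name"], 0) > interaction_threshold:
                prompts.append(f"{a['name']}, {b['name']} is raising glow signs with you")
    return prompts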
With continued reference to figs. 4a and 4b, which are schematic diagrams of application scenarios of the method for outputting information according to the present embodiment: in the application scenario of figs. 4a and 4b, the server publishes the [ glow-sign raising ] content to be generated as "mountain and river" and calculates that 53 participating viewers are required. A viewer who wants to take part in the sign-raising activity submits an interaction request; if the server judges that 53 people have not yet been reached, it allows the viewer to join this round of the activity, and if 53 people have been reached, it informs the viewer to join the next round. Viewers allowed to participate may upload an image or video as their head portrait. The server may also obtain the geographic locations and interaction records of the participating viewers and then calculate each viewer's position weight and relation weight. Some viewers may not upload an image or video, and others can fill in for them. After the head portraits of all 53 participating viewers are collected, they are arranged to generate bullet screen information in the shape of the words "mountain and river". Once XXX is detected in the lyrics of the live audio, or an on-site staff member triggers the sign-raising presentation timing, the lit-up words are displayed in the live broadcast room as a bullet screen [ glow-sign raising ]. Any viewer (including non-participating viewers) can click any head portrait in the glow-sign display and then interact through viewing, following, liking, private chat, favoriting, mic-linking, group photo, intimacy display, and the like.
With further reference to fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for outputting information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable in various electronic devices.
As shown in fig. 5, the apparatus 500 for outputting information of the present embodiment includes: a calculation unit 501, a selection unit 502, an arrangement unit 503, and an output unit 504. Wherein, the calculating unit 501 is configured to calculate the number of pixels occupied by the text to be output; a selecting unit 502 configured to select a participating audience from the audience set according to the amount; an arranging unit 503 configured to arrange the head portrait of each participating audience in the stroke shape of the text to generate bullet screen information; an output unit 504 configured to output the bullet screen information in response to detecting the presentation timing.
In the present embodiment, for the specific processing of the calculating unit 501, the selecting unit 502, the arranging unit 503 and the output unit 504 of the apparatus 500 for outputting information, reference may be made to steps 201, 202, 203 and 204 in the corresponding embodiment of fig. 2.
In some optional implementations of this embodiment, the apparatus 500 further comprises an interaction unit (not shown in the drawings) configured to: and in response to detecting that the head portrait of the participating audience in the bullet screen information is clicked, outputting operation options interacted with the clicked participating audience.
In some optional implementations of this embodiment, the arranging unit 503 is further configured to: obtaining the geographic locations of the number of participating audience members; calculating the position weight of each participating audience according to the geographic positions and the live broadcast site positions of the participating audiences; and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the position weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the position weight is larger.
In some optional implementations of this embodiment, the arranging unit 503 is further configured to: acquiring interaction records among the number of participating audiences; calculating the relation weight of each participating audience according to the interaction records; and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the relation weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the relation weight is larger.
In some optional implementations of this embodiment, the arranging unit 503 is further configured to: the avatars of the participating audience members having interactive recordings are distributed in adjacent locations.
In some optional implementations of the present embodiment, the selecting unit 502 is further configured to: publishing the text of the interactive activity; receiving an interaction request of at least one viewer, wherein the interaction request comprises a geographic location and any one of: a picture or video; selecting the number of participating audience according to the sequence of the received interaction requests; and taking the picture or the video in the interactive request of the participating audience as the head portrait of the participating audience.
In some optional implementations of this embodiment, the operation options include at least one of: viewing, following, liking, private chat, favoriting, mic-linking, group photo, and intimacy display.
In some optional implementations of this embodiment, the output unit 504 is further configured to: if the adjacent audiences with the geographical position distances smaller than a first preset value exist in the participating audiences, outputting adjacent relation information to the adjacent audiences; and if the number of the interactive audiences with the interaction times larger than a second preset value exists in the number of the participating audiences, outputting interactive relationship information to the interactive audiences.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of flows 200 or 300.
A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of flows 200 or 300.
A computer program product comprising a computer program which, when executed by a processor, implements the method of flow 200 or 300.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or loaded from a storage unit 608 into a random access memory (RAM) 603. The RAM 603 can also store various programs and data required for the operation of the device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 601 performs the respective methods and processes described above, such as a method for outputting information. For example, in some embodiments, the method for outputting information may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM603 and executed by the computing unit 601, one or more steps of the method for outputting information described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured by any other suitable means (e.g., by means of firmware) to perform the method for outputting information.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server with a combined blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (19)

1. A method for outputting information, comprising:
calculating the number of pixels occupied by the text to be output;
selecting a participating audience from the audience pool based on the amount;
arranging the head portrait of each participating audience according to the stroke shape of the text to generate bullet screen information;
and responding to the detected display opportunity, and outputting the bullet screen information.
2. The method of claim 1, wherein the method further comprises:
and in response to detecting that the head portrait of the participating audience in the bullet screen information is clicked, outputting operation options interacted with the clicked participating audience.
3. The method of claim 1, wherein said arranging the avatar of each participating audience in the stroke shape of the text to generate barrage information comprises:
obtaining the geographic locations of the number of participating audience members;
calculating the position weight of each participating audience according to the geographic positions and the live broadcast site positions of the participating audiences;
and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the position weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the position weight is larger.
4. The method of claim 1, wherein said arranging the avatar of each participating audience in the stroke shape of the text to generate barrage information comprises:
acquiring interaction records among the number of participating audiences;
calculating the relation weight of each participating audience according to the interaction records;
and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the relation weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the relation weight is larger.
5. The method of claim 4, wherein said arranging the avatar of each participating audience in the stroke shape of the text generates bullet screen information, comprising:
the avatars of the participating audience members having interactive recordings are distributed in adjacent locations.
6. The method of claim 1, wherein said selecting a participating audience from a set of audiences according to said amount comprises:
publishing the text of the interactive activity;
receiving an interaction request of at least one viewer, wherein the interaction request comprises a geographic location and any one of: a picture or video;
selecting the number of participating audience according to the sequence of the received interaction requests;
and taking the picture or the video in the interactive request of the participating audience as the head portrait of the participating audience.
7. The method of claim 2, wherein the operation options include at least one of: viewing, following, liking, private chat, favoriting, mic-linking, group photo, and intimacy display.
8. The method of claim 1, wherein the method further comprises:
if the adjacent audiences with the geographical position distances smaller than a first preset value exist in the participating audiences, outputting adjacent relation information to the adjacent audiences;
and if the number of the interactive audiences with the interaction times larger than a second preset value exists in the number of the participating audiences, outputting interactive relationship information to the interactive audiences.
9. An apparatus for outputting information, comprising:
a calculation unit configured to calculate the number of pixels occupied by text to be output;
a selection unit configured to select a participating audience from the audience set according to the number;
the arranging unit is configured to arrange the head portrait of each participating audience according to the stroke shape of the text to generate bullet screen information;
an output unit configured to output the bullet screen information in response to detecting a presentation timing.
10. The apparatus of claim 9, wherein the apparatus further comprises an interaction unit configured to:
and in response to detecting that the head portrait of the participating audience in the bullet screen information is clicked, outputting operation options interacted with the clicked participating audience.
11. The apparatus of claim 9, wherein the ranking unit is further configured to:
obtaining the geographic locations of the number of participating audience members;
calculating the position weight of each participating audience according to the geographic positions and the live broadcast site positions of the participating audiences;
and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the position weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the position weight is larger.
12. The apparatus of claim 9, wherein the ranking unit is further configured to:
acquiring interaction records among the number of participating audiences;
calculating the relation weight of each participating audience according to the interaction records;
and for each participating audience, distributing the position of the head portrait of the participating audience in the bullet screen information according to the relation weight of the participating audience, so that the position of the head portrait is closer to the center of the bullet screen information when the relation weight is larger.
13. The apparatus of claim 12, wherein the ranking unit is further configured to:
the avatars of the participating audience members having interactive recordings are distributed in adjacent locations.
14. The apparatus of claim 9, wherein the selection unit is further configured to:
publishing the text of the interactive activity;
receiving an interaction request of at least one viewer, wherein the interaction request comprises a geographic location and any one of: a picture or video;
selecting the number of participating audience according to the sequence of the received interaction requests;
and taking the picture or the video in the interactive request of the participating audience as the head portrait of the participating audience.
15. The apparatus of claim 10, wherein the operation options include at least one of: viewing, following, liking, private chat, favoriting, mic-linking, group photo, and intimacy display.
16. The apparatus of claim 9, wherein the output unit is further configured to:
if the adjacent audiences with the geographical position distances smaller than a first preset value exist in the participating audiences, outputting adjacent relation information to the adjacent audiences;
and if the number of the interactive audiences with the interaction times larger than a second preset value exists in the number of the participating audiences, outputting interactive relationship information to the interactive audiences.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN202210079001.4A 2022-01-24 2022-01-24 Method and device for outputting information Active CN114501050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210079001.4A CN114501050B (en) 2022-01-24 2022-01-24 Method and device for outputting information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210079001.4A CN114501050B (en) 2022-01-24 2022-01-24 Method and device for outputting information

Publications (2)

Publication Number Publication Date
CN114501050A true CN114501050A (en) 2022-05-13
CN114501050B CN114501050B (en) 2024-04-19

Family

ID=81474201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210079001.4A Active CN114501050B (en) 2022-01-24 2022-01-24 Method and device for outputting information

Country Status (1)

Country Link
CN (1) CN114501050B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105117232A (en) * 2015-09-08 2015-12-02 北京乐动卓越科技有限公司 Method and system for generating text image on user interface
CN105307044A (en) * 2015-10-29 2016-02-03 天脉聚源(北京)科技有限公司 Method and apparatus for displaying interaction information on video program
WO2016165566A1 (en) * 2015-04-13 2016-10-20 腾讯科技(深圳)有限公司 Barrage posting method and mobile terminal
CN106056653A (en) * 2016-06-14 2016-10-26 无锡天脉聚源传媒科技有限公司 Method and device for generating animation effect of interaction activity
CN107315555A (en) * 2016-04-27 2017-11-03 腾讯科技(北京)有限公司 Register method for information display and device
CN110913264A (en) * 2019-11-29 2020-03-24 北京达佳互联信息技术有限公司 Live data processing method and device, electronic equipment and storage medium
CN112188275A (en) * 2020-09-21 2021-01-05 北京字节跳动网络技术有限公司 Bullet screen generation method, bullet screen generation device, bullet screen generation equipment and storage medium
WO2021068652A1 (en) * 2019-10-10 2021-04-15 北京字节跳动网络技术有限公司 Method and apparatus for displaying animated object, electronic device, and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165566A1 (en) * 2015-04-13 2016-10-20 腾讯科技(深圳)有限公司 Barrage posting method and mobile terminal
CN105117232A (en) * 2015-09-08 2015-12-02 北京乐动卓越科技有限公司 Method and system for generating text image on user interface
CN105307044A (en) * 2015-10-29 2016-02-03 天脉聚源(北京)科技有限公司 Method and apparatus for displaying interaction information on video program
CN107315555A (en) * 2016-04-27 2017-11-03 腾讯科技(北京)有限公司 Register method for information display and device
CN106056653A (en) * 2016-06-14 2016-10-26 无锡天脉聚源传媒科技有限公司 Method and device for generating animation effect of interaction activity
WO2021068652A1 (en) * 2019-10-10 2021-04-15 北京字节跳动网络技术有限公司 Method and apparatus for displaying animated object, electronic device, and storage medium
CN110913264A (en) * 2019-11-29 2020-03-24 北京达佳互联信息技术有限公司 Live data processing method and device, electronic equipment and storage medium
CN112188275A (en) * 2020-09-21 2021-01-05 北京字节跳动网络技术有限公司 Bullet screen generation method, bullet screen generation device, bullet screen generation equipment and storage medium

Also Published As

Publication number Publication date
CN114501050B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
CN110570698B (en) Online teaching control method and device, storage medium and terminal
JP3172870U (en) System for providing and managing interactive services
US11247134B2 (en) Message push method and apparatus, device, and storage medium
US10313296B2 (en) Plug-in for extending functionality of messenger application across supplemented and unsupplemented application instances
US20120257112A1 (en) System for Combining Video Data Streams into a Composite Video Data Stream
US10389766B2 (en) Method and system for information sharing
US20110244954A1 (en) Online social media game
US12022136B2 (en) Techniques for providing interactive interfaces for live streaming events
US10861061B2 (en) Messenger application plug-in for providing tailored advertisements within a conversation thread
US20130326373A1 (en) System and Method for Displaying Social Network Interactivity with a Media Event
CA3155236C (en) Interactive and personalized ticket recommendation
CN112511849A (en) Game display method, device, equipment, system and storage medium
WO2022195352A1 (en) Systems and methods for generating and using place-based social networks
US9853924B2 (en) Providing access to location-specific services within a messenger application conversation thread
US8028233B1 (en) Interactive graphical interface including a streaming media component and method and system of producing the same
CN113282770A (en) Multimedia recommendation system and method
CN114501050B (en) Method and device for outputting information
WO2023024803A1 (en) Dynamic cover generating method and apparatus, electronic device, medium, and program product
CN114449301B (en) Item sending method, item sending device, electronic equipment and computer-readable storage medium
CN111885139A (en) Content sharing method, device and system, mobile terminal and server
US12075101B2 (en) Bullet-screen comment processing method and apparatus
KR102471180B1 (en) A system for providing augmented reality content service and a method of providing the service
CN117939179A (en) Live broadcast interaction method, device, equipment and storage medium
CN115904159A (en) Display method and device in virtual scene, client device and storage medium
CN115150672A (en) Live broadcast interaction method and device for simulated workplace and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant