Detailed Description
To help those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without inventive work shall fall within the scope of protection of the present application.
The present application provides a bullet screen display method that can be applied to a system architecture comprising a client and a server. The server may be a background server of a video playing website or of a video live broadcast platform. The video playing website may be, for example, iQiyi, Sohu Video, AcFun, and the like; the video live broadcast platform may be, for example, Douyu, Huya, Zhanqi TV, and the like. The number of servers is not particularly limited in the present embodiment: the server may be a single server, several servers, or a server cluster formed by several servers.
The client may be an electronic device having a network communication function, a data processing function, and an image display function. The electronic device may be, for example, a desktop computer, a tablet computer, a notebook computer, a smart phone, a digital assistant, a smart wearable device, a shopping guide terminal, a smart television, and the like. Of course, the client may also be software running in the above-mentioned electronic device. The software may be software having a video production function or a video playback function. For example, the software may be an Application (APP) installed on a smartphone.
Referring to fig. 1, the bullet screen display method provided by the present application may include the following steps. The execution subject of the following steps may be the client described above.
S1: when a target bullet screen currently displayed in a video is selected, detecting whether a specified action is applied to the target bullet screen.
In the present embodiment, the bullet screen displayed in the video is different from the bullet screen in the related art. In the prior art, the bullet screen data and the video data are usually integrated into the same video stream data after the time axis calibration. When the video stream data is played, the bullet screen can be displayed on the current interface of the video as a part of the video. Since the integration of the bullet screen data with the video data forms a new video stream, the bullet screen displayed in the current interface of the video is generally not selectable.
However, in the present embodiment, the bullet screen data and the video data may be separated and not integrated. In particular, when loading a video, it is common to build a corresponding container (container) in the page, which is usually not visible, according to the size that the video needs to occupy. After the container is established, the container may be filled with visual content of the video. Thus, the user can view the corresponding video in the page.
In this embodiment, after the container is built, a floating window may be created within the container, and the barrage content may be displayed in that floating window. In particular, the floating window may be regarded as another container nested within the first, except that its properties may be set so that it sits one layer above the container. As a result, the video content displayed in the container does not occlude the bullet screen content displayed in the floating window, and the bullet screen in the floating window can be selected by the user without affecting the video content being played.
In this embodiment, the manner in which the user selects the bullet screen may differ according to the form of the client. For example, when the client is an electronic device with a touch screen, the user may select the bullet screen by pressing it with a finger. As another example, when the client is an electronic device with an external input device such as a mouse, a keyboard, or a stylus, the user may select the bullet screen through that input device: a barrage can be clicked with a mouse or stylus, or selected with the direction keys of a keyboard.
In this embodiment, any target barrage in the current interface of the video is taken as an example; the target barrage may be a barrage whose content the user wants to copy. When the target bullet screen is selected by the user, it can be fixed at its position at the moment of selection, while the video and the remaining unselected bullet screens continue to play normally. At this time, the client may detect whether a specified action is subsequently applied to the target bullet screen.
In this embodiment, the specified action may take various forms. Specifically, referring to fig. 2, in an embodiment of the present application, when the target barrage is selected, a barrage sending area may be displayed at a specified position in the current interface of the video. The bullet screen sending area may have a predefined shape; for example, in fig. 2 it is rectangular. In this embodiment, the bullet screen sending area may further display text prompting the user to perform an operation. For example, in fig. 2, "drag the bullet screen to be copied here" may be displayed in the bullet screen sending area. In this case, the specified action applied to the target bullet screen may be dragging the target bullet screen from its selected position to the bullet screen sending area, so that the target bullet screen is located in the sending area, and then releasing the target bullet screen once it is there. Taking a smartphone with a touch screen as an example, a user can touch one bullet screen in the current interface of the video with a finger, at which point the touched bullet screen stops moving. Meanwhile, a rectangular area bearing the words "drag the bullet screen to be copied here" may appear below the current interface of the video. The user can then drag the bullet screen with a finger and, once the bullet screen is within the rectangular area, lift the finger from the touch screen.
In this embodiment, the target bullet screen may be located in the bullet screen sending area in various ways. Specifically, when the area occupied by the target bullet screen is smaller than the bullet screen sending area, the target bullet screen being located in the sending area may mean that all of its content lies within the sending area, as shown in fig. 3(a). When the target bullet screen contains many characters and occupies an area larger than the bullet screen sending area, the target bullet screen being located in the sending area may mean that the intersection between the target bullet screen and the sending area occupies a proportion of the sending area greater than or equal to a specified threshold. Specifically, as shown in fig. 3(b), the intersection between the target bullet screen and the bullet screen sending area may be the shaded area; when the proportion of the shaded area in the sending area is greater than or equal to, say, 70%, the target bullet screen can be considered to be in the sending area. The specified threshold may be a value that changes flexibly according to actual conditions, and serves as a reference for determining whether the target bullet screen is in the sending area. In addition, when the area occupied by the target bullet screen is the same as that of the bullet screen sending area, the target bullet screen being located in the sending area may mean that a portion of the target bullet screen's content greater than or equal to a specified ratio lies within the sending area. Specifically, as shown in fig. 3(c), the shaded portion is the portion of the target bullet screen located in the bullet screen sending area.
When the proportion of the shaded portion in the target bullet screen is greater than or equal to a specified ratio, the target bullet screen is considered to be located in the bullet screen sending area. The specified ratio may also be a value that changes flexibly according to actual conditions, and likewise serves as a reference for determining whether the target bullet screen is in the sending area.
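The three containment rules above (full containment for a small barrage, a threshold on the sending area's coverage for a large one, and a threshold on the barrage's own coverage when the sizes match) can be sketched as a small geometric check. This is an illustrative sketch only, not code from the application: the `Rect` type, the 70% thresholds, and the function names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle: top-left corner (x, y), plus width and height."""
    x: float
    y: float
    w: float
    h: float

    @property
    def area(self) -> float:
        return self.w * self.h

def intersection_area(a: Rect, b: Rect) -> float:
    """Area of the overlap between two rectangles (0 if they do not overlap)."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return dx * dy if dx > 0 and dy > 0 else 0.0

def barrage_in_send_area(barrage: Rect, send_area: Rect,
                         area_threshold: float = 0.7,
                         ratio_threshold: float = 0.7) -> bool:
    """Decide whether the dragged barrage counts as 'inside' the send area.

    Smaller barrage: it must lie entirely within the send area.
    Larger barrage: the overlap must cover >= area_threshold of the send area.
    Same size: >= ratio_threshold of the barrage itself must overlap.
    """
    overlap = intersection_area(barrage, send_area)
    if barrage.area < send_area.area:
        return overlap == barrage.area          # fully contained
    if barrage.area > send_area.area:
        return overlap / send_area.area >= area_threshold
    return overlap / barrage.area >= ratio_threshold
```

The thresholds are deliberately parameters, matching the text's note that they may change flexibly according to actual conditions.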
In an embodiment of the application, after the bullet screen sending area is displayed, a user can also adjust the size of the bullet screen sending area according to own will. Specifically, the user can drag the frame of the bullet screen sending area through a finger, a mouse, a keyboard or the like, so as to change the size of the frame of the bullet screen sending area. In this embodiment, the user may input the size adjustment command of the client in real time by applying the drag motion to the bullet screen sending area. Therefore, after the client receives the size adjusting instruction, the size of the bullet screen sending area can be adjusted according to the size adjusting instruction.
In an embodiment of the application, when the length of the target bullet screen selected by the user is greater than the length of the bullet screen sending area, the user can use the sending area to select part of the target bullet screen, and the selected content serves as the bullet screen the user wants to publish. Specifically, after dragging the target bullet screen to a suitable position, the user can release it; the portion of the target bullet screen lying within the sending area then becomes the bullet screen content the user wants to publish. For example, in fig. 3(a), the bullet screen content the user wants to publish may be "populus nice commander"; in fig. 3(b), it may be "rainy overnight, my love overflows". After determining the bullet screen content located in the sending area from the target bullet screen, the client may send a bullet screen sending request including that content to the server.
As can be seen from the above two embodiments, by adjusting the size of the bullet screen sending area and adjusting the overlapping relationship between the target bullet screen and the bullet screen sending area, the content that the user is interested in can be selected from the target bullet screen for publication.
In another embodiment of the present application, the specified action applied to the target bullet screen may be dragging the target bullet screen from its selected position until the distance it has moved in the current interface reaches a specified distance, and releasing it after that distance is reached. In this embodiment, whether the specified action is applied to the target bullet screen may be determined by judging whether the target bullet screen has been dragged the specified distance. Specifically, referring to fig. 4, again taking a smartphone with a touch screen as an example, a user may drag the target barrage with a finger and move it within the current interface of the video. In the direction away from the selected position, the drag resistance of the target bullet screen increases with the moving distance. An upper limit on how far the target bullet screen can move may be set in advance: when the dragged distance reaches this upper limit, the target bullet screen reaches its limit of movement and cannot move farther from the selected position. The upper limit distance will therefore typically be greater than or equal to the specified distance. In this embodiment, the criterion for determining whether the specified action is applied to the target bullet screen may be whether the distance over which it is dragged is greater than or equal to the specified distance; when it is, the specified action is considered to have been applied.
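The distance-based criterion, including the upper movement limit, can be sketched as follows. This is an illustrative sketch under assumed names and 2D point tuples; the actual client logic is not specified at this level of detail.

```python
import math

def clamp_drag(selected_pos: tuple, pointer_pos: tuple,
               upper_limit: float) -> tuple:
    """Clamp the barrage's displayed position so it never moves farther
    than upper_limit from the position where it was selected.
    Returns (clamped_position, drag_distance)."""
    dx = pointer_pos[0] - selected_pos[0]
    dy = pointer_pos[1] - selected_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= upper_limit:
        return pointer_pos, dist
    scale = upper_limit / dist  # project back onto the limit circle
    return (selected_pos[0] + dx * scale,
            selected_pos[1] + dy * scale), upper_limit

def specified_action_applied(drag_distance: float,
                             specified_distance: float) -> bool:
    """The specified action counts as applied once the drag distance
    reaches the specified distance (<= the upper limit by assumption)."""
    return drag_distance >= specified_distance
```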
At this time, once the user releases the target bullet screen, the target bullet screen can return to the selected position and move according to a preset path from the selected position. The preset path may be a trajectory along which the target bullet screen moves before being selected. Then the target bullet screen may continue to move along the previous trajectory when it returns to the selected position. It should be noted that, if the moving distance of the target bullet screen reaches the upper limit distance, at this time, even if the user does not release the target bullet screen, the target bullet screen may automatically return to the selected position, and continue to move according to the preset path. The specified distance and the upper limit distance may be values that can be flexibly changed according to actual situations, as long as the upper limit distance is ensured to be greater than or equal to the specified distance.
In another embodiment of the present application, whether the specified action is applied to the target bullet screen may be determined by judging the degree of pressing force applied to it. Specifically, when the target bullet screen is selected, the degree of pressing force applied to it can be continuously detected, and when that force reaches a specified pressure threshold, the specified action is considered to have been applied. In particular, the degree of pressing may be measured by a pressure-sensitive sensor, which may be integrated in a touch screen or in an external input device. The specified pressure threshold can be a value that changes flexibly according to actual conditions.
In another embodiment of the present application, when the specified action is determined from the degree of pressing force, an additional confirmation step may be added. Specifically, the degree of pressing force applied to the target bullet screen may first be detected, and when it reaches a specified pressure threshold, a selection tag associated with the target bullet screen is presented in the current interface of the video; the selection tag may be used to prompt the user to copy the contents of the target barrage. Referring to fig. 5, when the pressing force exerted on the target bullet screen reaches the specified pressure threshold, a selection label bearing the word "copy" may pop up on the right side of the target bullet screen. When the selection tag is triggered, for example by being clicked, the specified action is considered to have been applied to the target bullet screen.
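The two-stage pressure variant above (threshold shows the tag, tapping the tag confirms) can be sketched as a small state holder. Class and method names are assumptions for illustration; real clients would wire these handlers to touch events.

```python
class PressDetector:
    """Two-stage detection: reaching the pressure threshold shows a 'copy'
    selection tag; triggering the tag counts as the specified action."""

    def __init__(self, pressure_threshold: float):
        self.pressure_threshold = pressure_threshold
        self.tag_visible = False
        self.action_applied = False

    def on_pressure(self, force: float) -> None:
        # Stage 1: enough force pops up the selection tag.
        if force >= self.pressure_threshold:
            self.tag_visible = True

    def on_tag_triggered(self) -> None:
        # Stage 2: the tag only counts once it is actually visible.
        if self.tag_visible:
            self.action_applied = True
```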
S3: when the specified action exists, identifying the content of the target bullet screen.
In this embodiment, when it is determined that the designated action exists, it is considered that the user currently performs the operation of copying the target bullet screen, and at this time, the client may recognize the content of the target bullet screen.
In this embodiment, the client may recognize the text content in the target bullet screen by means of OCR (Optical Character Recognition). Specifically, the client may determine the shapes of the characters contained in the target bullet screen by detecting light and dark regions within it. After a character shape is identified, it is compared with the shapes in a font library to find the matching character. The matched characters, combined in the order of recognition, constitute the recognized content of the target bullet screen. In connection with the embodiments set forth in step S1, the timing of identifying the content of the target bullet screen may be as follows.
In one embodiment, referring to fig. 2, when the target barrage is dragged to the barrage sending area and the target barrage is released, the client may identify the content in the target barrage.
In one embodiment, referring to fig. 4, when the distance by which the target barrage is dragged is greater than or equal to a specified distance and the target barrage is released, the client may identify the content in the target barrage.
In one embodiment, after the target bullet screen is selected, when the degree of pressing force applied to the target bullet screen reaches a specified pressure threshold, the client may identify the content in the target bullet screen.
In one embodiment, referring to fig. 5, after the target bullet screen is selected, when the pressing force applied to the target bullet screen reaches a specified pressure threshold, a selection label may pop up beside the target bullet screen. When the selection tag is triggered by a user, the client can identify the content in the target bullet screen.
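The recognition step described above compares detected character shapes against a font library. The toy sketch below illustrates only that matching-and-concatenation idea, using 3x3 bitmaps and Hamming distance; the glyph encoding and font library are purely illustrative, and a real OCR pipeline is far more involved.

```python
# Toy font library: each glyph is a 3x3 bitmap flattened to a 9-element tuple.
FONT_LIBRARY = {
    "I": (0, 1, 0,  0, 1, 0,  0, 1, 0),
    "L": (1, 0, 0,  1, 0, 0,  1, 1, 1),
    "T": (1, 1, 1,  0, 1, 0,  0, 1, 0),
}

def match_glyph(bitmap: tuple) -> str:
    """Return the library character whose shape differs from the detected
    glyph in the fewest pixels (Hamming distance)."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(FONT_LIBRARY, key=lambda ch: distance(FONT_LIBRARY[ch], bitmap))

def recognise(glyphs: list) -> str:
    """Combine matched glyphs in detection order to form the barrage text."""
    return "".join(match_glyph(g) for g in glyphs)
```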
It should be noted that, in an actual application scenario, when the target bullet screen is selected, the client may detect whether a specified action applied to it occurs within a specified time period. If the specified action is detected within that time, the subsequent process of identifying the content of the target bullet screen continues. If it is not, the target bullet screen automatically leaves the selected state and resumes moving along a preset path from the position at which it was selected. The preset path may be the trajectory of the target bullet screen before it was selected, so that after deselection the target bullet screen continues along its previous trajectory. For example, if the target barrage was translating from the right side to the left side of the video and stopped when selected, and the client does not detect the specified action within the specified time period, the target barrage continues to translate from right to left.
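The timeout behavior above amounts to a small state machine: a selected barrage either receives the specified action in time (and proceeds to recognition) or is deselected and resumes its preset path. The sketch below is an assumed model with hypothetical state names, driven by explicit timestamps for testability.

```python
class BarrageSelection:
    """Tracks a selected barrage: if no specified action arrives within
    the allowed window, the selection is cancelled and motion resumes."""

    def __init__(self, selected_at: float, timeout_s: float):
        self.selected_at = selected_at
        self.timeout_s = timeout_s
        self.state = "selected"      # selected -> copying | moving

    def on_specified_action(self, now: float) -> None:
        # Action within the window: proceed to recognise the content.
        if self.state == "selected" and now - self.selected_at <= self.timeout_s:
            self.state = "copying"

    def tick(self, now: float) -> None:
        # Window expired: cancel selection and resume the preset path.
        if self.state == "selected" and now - self.selected_at > self.timeout_s:
            self.state = "moving"
```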
S5: displaying a copy bullet screen consistent with the content of the target bullet screen in the current interface of the video.
In this embodiment, after the content in the target bullet screen is identified, a copy bullet screen consistent with the content of the target bullet screen can be displayed in the current interface of the video.
Specifically, after the content of the target barrage is identified, a barrage sending request including the identified content may be sent to the server. The bullet screen sending request can be a character string written according to a preset rule, where the preset rule may be a network communication protocol followed between the client and the server. For example, the bullet screen sending request may be a character string written according to the HTTP protocol. The preset rule may define each component in the bullet screen sending request and the arrangement order among the components. For example, the bullet screen sending request may include a request identification field, a source IP address field, and a target IP address field. The request identification field may be populated with the identified content, the source IP address field with the IP address of the client, and the target IP address field with the IP address of the server. In this way, the barrage sending request can be sent from the client to the server.
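The request assembly described above can be sketched as follows. The field names, the JSON encoding, and the inclusion of a timestamp are illustrative assumptions; the application only requires that both sides agree on a preset rule, not this particular wire format.

```python
import json

def build_barrage_request(content: str, client_ip: str,
                          server_ip: str, timestamp: float) -> str:
    """Assemble the barrage sending request as a string of ordered fields.
    Field names and layout here are illustrative, not a real wire format."""
    fields = {
        "request_id": content,    # the recognised barrage text
        "source_ip": client_ip,   # IP address of the client
        "target_ip": server_ip,   # IP address of the server
        "time": timestamp,        # lets the server schedule the display
    }
    return json.dumps(fields)
```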
In this embodiment, after receiving the barrage transmission request, the server may extract the identified content therefrom. Generally, the barrage sending request may further include time information corresponding to the identified content. The time information may be system time when the client sends the bullet screen sending request, system time when the server receives the bullet screen sending request, system time when the user finishes executing the specified action, or video playing time corresponding to the user finishing executing the specified action. The time information may be used to determine the opportunity to present the replicated bullet screen in the client.
In this embodiment, after extracting the identified content, the server may construct bullet screen information based on it. The body field of the bullet screen information can be filled with the identified content, and the information can also carry a header file that can be propagated normally in the network and recognized by the client. After constructing the bullet screen information, the server can feed it back to the client, which thus receives bullet screen information, containing the identified content, fed back in response to the bullet screen sending request. It should be noted that, after feeding back the bullet screen information, the server may also store it locally and associate it with the video. Then, when other clients request to load the video, the server can provide the bullet screen information to them, so that users of those clients can view the previously sent copy barrage.
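The server-side step above (extract the content, wrap it in barrage information with a client-recognizable header, associate it with the video) can be sketched like this. The structure is assumed for illustration and matches the hypothetical request format, not an actual protocol.

```python
import json

def build_barrage_info(request_json: str, video_id: str) -> str:
    """Server side: extract the recognised content from the sending request,
    wrap it in barrage information with a header the client can parse, and
    tag it with the video it belongs to. Illustrative structure only."""
    request = json.loads(request_json)
    info = {
        "header": {"type": "barrage", "video_id": video_id},
        "body": request["request_id"],   # the recognised content
        "time": request["time"],         # when to display the copy barrage
    }
    return json.dumps(info)
```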
In this embodiment, the client may extract the identified content from the bullet screen information, and display the identified content in a current interface of the video, so that a copy bullet screen that is consistent with the content of the target bullet screen can be displayed in the current interface.
In an embodiment of the application, after identifying the content in the target barrage, the client may also directly play the identified content locally as a copy barrage, so that the user immediately sees the copied bullet screen in the current interface of the video. However, other users watching the video at this time cannot see the copy bullet screen, because the client has not yet interacted with the server.
In this embodiment, in order to enable a subsequent user to see the currently published copy barrage when watching the video, the client may send the identified content to a server for backup in the server. In this way, the server can associate the barrage information with the video. Therefore, when other clients request to load the video, the server can provide the bullet screen information to the client, so that other users can view the copy bullet screen sent before.
As can be seen from the above, after identifying the content in the target barrage, the client may automatically interact with the server to display the copy barrage. This process requires no further user participation and greatly simplifies the publishing of a barrage, thereby improving the speed at which users publish barrages.
Referring to fig. 6, in an embodiment of the present application, after the content of the target barrage is identified, in order to ensure the correctness of the identified content, the identified content may be filled into a barrage input box. The user can thus check whether the content of the bullet screen about to be published is correct, and correct it if the identified content contains errors. When the sending key associated with the bullet screen input box is triggered, the identified content is displayed in the current interface of the video. That is, after confirming that the content in the bullet screen input box is correct, the user can click the "send" button next to the input box to publish the copied bullet screen.
Referring to fig. 7, the present application further provides a client, which includes a processor 100 and a memory 200, where the memory 200 stores a computer program, and when the computer program is executed by the processor 100, the following steps may be implemented.
S1: when a target bullet screen currently displayed in a video is selected, detecting whether a specified action applied to the target bullet screen exists or not;
S3: when the specified action exists, identifying the content of the target bullet screen;
S5: displaying the copy bullet screen consistent with the content of the target bullet screen in the current interface of the video.
The processor 100 may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth. The present application is not limited.
In this embodiment, the memory 200 may be a device for storing information. In a digital system, a device capable of storing binary data may be a memory; in an integrated circuit, a circuit that has a storage function but no physical form of its own may also be a memory, such as a RAM or a FIFO; in a system, a storage device in physical form may also be called a memory, such as a memory bank or a TF card.
The specific functions of the client, the processor 100 and the memory 200 of the client disclosed in the above embodiments may be explained by comparing with the bullet screen display method embodiment in the present application, so as to implement the bullet screen display method embodiment in the present application and achieve the technical effects of the method embodiment.
As can be seen from the above, in the present application, the target barrage displayed in the current interface of the video may be selected by the user. After the target bullet screen is selected, it can be detected whether the user applies a specified action to it, where the specified action may be an action for copying the content of the target bullet screen. When the specified action is detected, the client can automatically identify the content of the target bullet screen, and once the text content is identified, a copy barrage consistent with that content can be displayed in the current interface of the video, completing the publication of a barrage with the same content. Therefore, with the technical solution provided by the application, the user does not need to type text into a bullet screen input box: by simply completing the specified action on the target bullet screen, its content can be copied and published, improving the speed at which users publish barrages.
In the 1990s, an improvement in a technology could clearly be distinguished as either an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology advances, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this kind of programming is now mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, and the source code to be compiled must be written in a particular programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can readily be obtained by lightly programming the method flow into an integrated circuit using one of the hardware description languages described above.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially embodied in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in the embodiments, or in some parts of the embodiments, of the present application.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Those skilled in the art will also appreciate that, in addition to implementing the client as pure computer readable program code, the same functionality can be implemented entirely by logically programming method steps such that the client is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a client may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered as an arrangement within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the client, reference may be made to the introduction of the embodiments of the method described above for a comparative explanation.
Although the present application has been described in terms of embodiments, those of ordinary skill in the art will recognize that numerous variations and modifications of the present application are possible without departing from its spirit, and it is intended that the appended claims encompass such variations and modifications.