WO2021073478A1 - Barrage information identification method, display method, server, and electronic device - Google Patents

Barrage information identification method, display method, server, and electronic device

Info

Publication number
WO2021073478A1
WO2021073478A1 (PCT/CN2020/120415)
Authority
WO
WIPO (PCT)
Prior art keywords
information
target
barrage information
barrage
object information
Prior art date
Application number
PCT/CN2020/120415
Other languages
English (en)
French (fr)
Inventor
易园林
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2021073478A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval of video data
    • G06F16/73: Querying
    • G06F16/735: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval of video data
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783: Retrieval using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435: Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/475: End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788: Supplemental services communicating with other users, e.g. chatting

Definitions

  • the embodiments of the present application relate to the field of data processing technology, and in particular to a method for identifying barrage information, a display method, a server, and an electronic device.
  • the embodiments of the application provide a method for identifying barrage information, a display method, a server, and an electronic device, so as to solve the problem that the related art cannot push barrage information in which users are genuinely interested, and to enrich and satisfy users' personalized demands for barrage information.
  • the embodiments of the present application provide a method for identifying barrage information.
  • the method is applied to a server, and the method includes:
  • the embodiments of the present application also provide a method for displaying barrage information.
  • the method is applied to an electronic device, and the method includes:
  • recognize the target image area to obtain a recognition result, and send the recognition result to the server, so that the server determines second object information from the first object information included in the target video according to the recognition result, and identifies target barrage information from the stored barrage information, where the target barrage information includes at least one piece of the second object information;
  • the embodiments of the present application also provide a server, including:
  • the first acquisition module is configured to acquire the recognition result determined after the electronic device recognizes the target image area, where the target image area is determined by the electronic device according to a user's first input on the target video, and the target video is the video currently playing on the electronic device;
  • a determining module configured to determine second object information from the first object information included in the target video according to the recognition result
  • An identification module configured to identify target barrage information from stored barrage information, where the target barrage information includes at least one of the second object information;
  • the sending module is used to send the target barrage information to the electronic device.
  • the embodiments of the present application also provide an electronic device, including:
  • the first receiving module is configured to receive a user's first input to the target video played on the electronic device
  • a determining module configured to determine a target frame image of the target video in response to the first input, and determine a target image area on the target frame image
  • the recognition module is configured to recognize the target image area to obtain a recognition result and send the recognition result to the server, so that the server determines second object information from the first object information included in the target video according to the recognition result, and identifies target barrage information from the stored barrage information, where the target barrage information includes at least one piece of the second object information;
  • the second receiving module is configured to receive the target barrage information sent by the server
  • the display module is configured to display the target barrage information.
  • the embodiments of the present application also provide a server, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor; when the computer program is executed by the processor, the steps of the above barrage information identification method are implemented.
  • the embodiments of the present application also provide an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor; when the computer program is executed by the processor, the steps of the above barrage information display method are implemented.
  • in the embodiments of the present application, the recognition result determined after the electronic device recognizes the target image area is obtained, and second object information is determined from the first object information included in the target video according to the recognition result, so that the determined second object information belongs both to the first object information of the target video and to the object information in the target image area.
  • therefore, the target barrage information identified from the stored barrage information, which includes at least one piece of second object information, is barrage information related to the object information in both the target image area and the target video; that is, the target barrage information is barrage information that the user is interested in.
  • the target barrage information is sent to the electronic device so that the electronic device can display it, thereby pushing barrage information in which the user is genuinely interested, and enriching and satisfying the user's personalized demands for barrage information.
  • FIG. 1 shows a flowchart of a method for identifying barrage information provided in an embodiment of the present application
  • Figure 2 shows a schematic diagram of a target video provided in an embodiment of the present application
  • FIG. 3 shows a flowchart of another method for identifying barrage information provided in an embodiment of the present application
  • FIG. 4 shows a schematic diagram of supplementing a non-closed area as a closed area provided in an embodiment of the present application
  • FIG. 5 shows a schematic diagram of a display of target barrage information of each category provided in an embodiment of the present application.
  • FIG. 6 shows a schematic diagram of an object icon display provided in an embodiment of the present application.
  • FIG. 7 shows a schematic diagram of sorting and highlighting of target barrage information provided in an embodiment of the present application.
  • FIG. 8 shows a block diagram of a server 800 provided in an embodiment of the present application.
  • FIG. 9 shows a block diagram of another server 900 provided in an embodiment of the present application.
  • FIG. 10 shows a block diagram of an electronic device 1000 provided in an embodiment of the present application.
  • FIG. 11 shows a block diagram of another electronic device 1100 provided in an embodiment of the present application.
  • FIG. 12 shows a block diagram of still another electronic device 1200 provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the hardware structure of an electronic device that implements each embodiment of the present application.
  • Step 101 Obtain a recognition result determined after the electronic device recognizes a target image area.
  • the target image area is determined by the electronic device according to a user's first input of a target video, and the target video is a video currently played on the electronic device.
  • the electronic device may include a smart phone, a tablet computer, a notebook computer, etc.
  • the above examples are only examples, and this application does not limit this.
  • the first input may be an operation in which the user draws a line with a mouse on the target video to encircle the target image area, or an operation in which the user draws such a line with a finger on the target video.
  • the embodiment of the present application does not limit the form of the first input.
  • Referring to FIG. 2, a schematic diagram of a target video provided in an embodiment of the present application is shown.
  • the area enclosed by the circle 201 in FIG. 2 is the target image area determined by the electronic device 202 according to the user's first input on the target video, and the target image area is a closed area.
  • the first input can be one stroke operation by the user, or multiple stroke operations.
  • the number of target image areas encircled by one stroke operation can be one or more; for example, a stroke shaped like the digit "8" encircles two areas.
  • the first input may also delineate multiple closed areas through multiple stroke operations; this application does not limit the number of target image areas. Users can delineate the image areas they are interested in by tracing. It should be noted that each group of symbols in Figure 2 represents one piece of barrage information, namely barrage information published by users while watching the target video.
  • the shape of the target image area may be a circular area as shown in FIG. 2, or may be a closed area of other shapes such as an elliptical area, a square area, etc., which is not limited in this application.
  • the electronic device recognizes the target image area to obtain the recognition result, and the electronic device can send the recognition result to the server, so that the server obtains the recognition result determined after the electronic device recognizes the target image area.
  • the recognition result includes the object information included in the target image area.
  • for example, the recognition result determined after the electronic device recognizes the target image area in FIG. 2 is object information A, and object information A may be the name of object A.
  • Step 102 According to the recognition result, determine second object information from the first object information included in the target video.
  • the first object information included in the target video may be the name of the object.
  • the object is, for example, a certain person or a certain object.
  • the target video is a video with a duration of 40 minutes
  • the first object information included in the target video is information of objects in all frames of video images within the 40 minutes.
  • the recognition result determined in step 101 is object information A
  • the determined second object information is the object information A in the first object information.
  • the recognition result determined in step 101 is the object information A and the object information F
  • the determined second object information is object information A in the first object information; that is, the determined second object information is information belonging to the first object information.
  • Step 103 Identify target barrage information from the stored barrage information, where the target barrage information includes at least one second object information.
  • the stored barrage information is barrage information sent to the server through the electronic device by the user watching the target video.
  • the server may store the barrage information, and identify barrage information including at least one second object information from the stored barrage information. For example, if the second object information determined in step 102 is object information A, the barrage information including object information A is identified from the stored barrage information. If the second object information determined in step 102 is object information A, object information B, and object information C, identify target barrage information including at least one second object information from the stored barrage information.
  • for example, the server stores 10 pieces of barrage information, and the determined second object information is object information A, object information B, and object information C. Among the 10 pieces of barrage information, barrage information 1 includes object information A, barrage information 2 includes object information B, barrage information 3 includes object information B, barrage information 4 includes object information A, barrage information 5 includes object information C, barrage information 6 includes object information B, barrage information 7 includes object information A, barrage information 8 includes object information A, barrage information 9 includes object information E, and barrage information 10 includes object information F.
  • in the corresponding table, the first column is the stored barrage information, the second column is the object information included in each piece of barrage information, the third column is the determined second object information, and the fourth column indicates whether the barrage information is target barrage information. For example, because barrage information 1 includes object information A, barrage information 1 is identified as target barrage information; likewise, barrage information 2, barrage information 3, barrage information 4, barrage information 5, barrage information 6, barrage information 7, and barrage information 8 are target barrage information.
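The identification rule described above can be sketched as a set-membership test; the barrage-to-object mappings below are the illustrative values from this example, not data from a real system:

```python
# Sketch of Step 103: a piece of barrage information is target barrage
# information if it includes at least one piece of second object information.
second_object_info = {"A", "B", "C"}  # determined in Step 102 of this example

# stored barrage information id -> object information it includes
stored_barrage = {
    1: {"A"}, 2: {"B"}, 3: {"B"}, 4: {"A"}, 5: {"C"},
    6: {"B"}, 7: {"A"}, 8: {"A"}, 9: {"E"}, 10: {"F"},
}

# keep a barrage message when its object set intersects the second object info
target_barrage = [bid for bid, objs in stored_barrage.items()
                  if objs & second_object_info]
print(target_barrage)  # [1, 2, 3, 4, 5, 6, 7, 8]; 9 (E) and 10 (F) are excluded
```

Barrage information 9 and 10 mention only object information E and F, which are outside the second object information, so they are filtered out exactly as in the table.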
  • Step 104 Send target barrage information to the electronic device.
  • the barrage information identification method obtains the recognition result determined after the electronic device recognizes the target image area and, according to the recognition result, determines second object information from the first object information included in the target video, so that the determined second object information belongs both to the first object information of the target video and to the object information in the target image area.
  • therefore, the target barrage information identified from the stored barrage information, which includes at least one piece of second object information, is barrage information related to the object information in both the target image area and the target video; that is, the target barrage information is barrage information that the user is interested in.
  • the target barrage information is sent to the electronic device so that the electronic device can display it, thereby pushing barrage information in which the user is genuinely interested, and enriching and satisfying the user's personalized demands for barrage information.
  • Referring to FIG. 3, a flowchart of another method for identifying barrage information provided in an embodiment of the present application is shown.
  • Step 301 The electronic device receives a user's first input to the target video played on the electronic device.
  • Step 302 In response to the first input, the electronic device determines the target frame image of the target video, and determines the target image area on the target frame image.
  • determining the target frame image of the target video and determining the target image area on the target frame image can be achieved through the following steps:
  • if the scribed track forms an unclosed area, supplement the unclosed area into a closed area, and use the closed area on the target frame image as the target image area;
  • if the scribed track forms a closed area, use the closed area composed of the scribed track as the target image area.
  • using the frame image of the target video corresponding to the end time of the first input as the target frame image ensures the completeness of the object information included in the target image area determined on the target frame image according to the scribed track. For example, suppose the user performs a circle-drawing operation, the frame image at the moment half the circle is drawn includes only object A, the frame image at the end of the full circle includes object A and object B, and the user's scribed track is a closed area containing object A and object B. If the frame image at the half-circle moment were taken as the target frame image, the object information included in the target image area might include only object A, so the object information in the target image area would be incomplete.
  • FIG. 4 shows a schematic diagram of supplementing an unclosed area as a closed area provided in an embodiment of the present application, that is, supplementing the unclosed area corresponding to the scribe track as a closed area through the dashed line in FIG. 4.
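The supplementing of an unclosed track in FIG. 4 can be sketched as follows; the function name `close_track` and the distance threshold `tol` are hypothetical choices for illustration, since the application does not specify how the dashed-line supplement is computed:

```python
import math

def close_track(points, tol=5.0):
    """Supplement an unclosed scribed track into a closed area (the dashed
    line in FIG. 4): if the stroke's end point is farther than `tol` pixels
    from its start point, append the start point to close the polygon."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    if math.hypot(x0 - xn, y0 - yn) > tol:
        return points + [points[0]]  # unclosed: connect the end back to the start
    return points                    # already (nearly) closed

# A stroke that stops short of its starting point gets closed automatically.
open_track = [(0, 0), (10, 0), (10, 10), (0, 9)]
closed = close_track(open_track)
print(closed[-1])  # (0, 0): the appended start point closes the area
```

A track whose end already lies within `tol` of its start is returned unchanged, which models the closed-area branch of step 302.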
  • Step 303 The electronic device recognizes the target image area, obtains the recognition result, and sends the recognition result to the server.
  • Step 304 The server obtains the barrage information of the target video.
  • Step 305 The server performs semantic analysis on the barrage information to obtain at least one keyword of the barrage information.
  • performing semantic analysis on the barrage information to obtain at least one keyword of the barrage information may further include the following steps:
  • the symbol information in the barrage information is removed.
  • the symbol information can be removed first. If a piece of barrage information consists entirely of symbol information, there is no need to perform semantic analysis on it after the symbol information is removed. For barrage information containing many symbols, only valid text information remains after the symbols are removed, which helps increase the speed of semantic analysis of the valid text information.
  • the symbol information includes, for example, punctuation marks, emoticons, and other symbols.
  • step 305, performing semantic analysis on the barrage information to obtain at least one keyword of the barrage information, can be implemented through the following steps:
  • the keyword is, for example, the name of the object.
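The two sub-steps above (removing symbol information, then extracting object-name keywords) can be sketched as follows; the regular expression and the `KNOWN_OBJECT_NAMES` list are illustrative assumptions, since the application does not specify a concrete semantic-analysis algorithm:

```python
import re

# Assumed vocabulary: a keyword is an object name known to the server.
KNOWN_OBJECT_NAMES = ["object A", "object B", "object C"]

def extract_keywords(barrage):
    """Sketch of Step 305: strip symbol information, then keep keywords."""
    # First remove symbol information (punctuation, emoticon characters, etc.).
    text = re.sub(r"[^\w\s]", " ", barrage)
    if not text.strip():
        return []  # all-symbol barrage: no semantic analysis needed
    # Keep only the known object names that appear in the remaining text.
    return [name for name in KNOWN_OBJECT_NAMES if name in text]

print(extract_keywords("object A is great!!! :-)"))  # ['object A']
print(extract_keywords("!!!???"))                    # []
```

The early return for all-symbol barrage mirrors the observation that such messages need no further analysis once the symbols are removed.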
  • Step 306 Match each keyword included in the barrage information with all the first object information.
  • if a keyword matches a piece of third object information among all the first object information, step 307 is executed; if no keyword matches any piece of third object information, step 308 is executed.
  • for example, the keywords of a piece of barrage information include the name of object A (keyword 1), the name of object B (keyword 2), the name of object C (keyword 3), and so on; these are not repeated here one by one.
  • the first object information includes object information A, object information B, object information C, object information D, and object information E
  • keyword 1 in the barrage information matches object information A
  • keyword 2 matches object information B
  • the object information A is a piece of third object information in all the first object information
  • the object information B is also a piece of third object information in all the first object information.
  • the keyword of barrage information 8 is keyword 1; after semantic analysis of barrage information 9, the keyword obtained for barrage information 9 is keyword 5 (the name of object E); after semantic analysis of barrage information 10, the keyword obtained for barrage information 10 is keyword 6 (the name of object F).
  • the first column is the barrage information
  • the second column is the keyword of the barrage information
  • the third column is all the first object information of the target video.
  • the first object information may be the name of the object, for example, the object information A is the name of the object A, and the object information B is the name of the object B. Since the keyword is the name of the object, and the first object information is also the name of the object, the keyword can be matched with the first object information.
  • the keyword 1 of barrage information 1 matches object information A among all the first object information; keyword 2 of barrage information 2 matches object information B among all the first object information; keyword 2 of barrage information 3 matches object information B among all the first object information; keyword 1 of barrage information 4 matches object information A among all the first object information; keyword 3 of barrage information 5 matches object information C among all the first object information; whether the other barrage information matches a piece of third object information among all the first object information is not illustrated one by one. It should be noted that since the keyword of barrage information 10 is keyword 6 (the name of object F), there is no third object information among all the first object information that matches keyword 6.
  • Step 307 The server stores the third object information and the barrage information associated with the third object information.
  • for example, the keyword 1 of barrage information 1 matches object information A (a piece of third object information) among all the first object information, so object information A and barrage information 1 associated with object information A are stored;
  • the keyword 2 of barrage information 2 matches object information B (a piece of third object information) among all the first object information. Whether each keyword of the other barrage information matches a piece of third object information among all the first object information is not introduced by example one by one; for details, refer to the storage of the third object information and the barrage information associated with it shown in Table 3 below:
  • Table 3: object information A is associated with barrage information 1, barrage information 4, barrage information 7, and barrage information 8; object information B is associated with barrage information 2, barrage information 3, and barrage information 6; object information C is associated with barrage information 5.
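The associations in Table 3 amount to an inverted index from third object information to barrage information; a minimal sketch, using the pairs from this example:

```python
from collections import defaultdict

# Sketch of Step 307: store each piece of third object information together
# with the barrage information associated with it, as in Table 3.
index = defaultdict(list)  # third object information -> associated barrage ids

pairs = [("A", 1), ("B", 2), ("B", 3), ("A", 4),
         ("C", 5), ("B", 6), ("A", 7), ("A", 8)]
for third_object, barrage_id in pairs:
    index[third_object].append(barrage_id)

print(dict(index))  # {'A': [1, 4, 7, 8], 'B': [2, 3, 6], 'C': [5]}
```

Storing the associations this way is what later lets the server find target barrage information by key lookup instead of re-analyzing each message.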
  • Step 308 The server stores irrelevant information and barrage information associated with the irrelevant information.
  • the barrage information 10 is irrelevant information, that is, information that is irrelevant to the object information in the target image area, so the irrelevant information and the barrage information 10 associated with the irrelevant information are stored.
  • Step 309 The server obtains the recognition result determined after the electronic device recognizes the target image area.
  • Step 310 In a case where the recognition result matches at least one piece of first object information in all the first object information, the server uses the first object information matching the recognition result as the second object information.
  • all first object information including object information A, object information B, object information C, object information D, and object information E is still taken as an example.
  • if the recognition result obtained by the server in step 309 is object information A, the at least one piece of first object information matching the recognition result in this step is object information A among all the first object information, and object information A is then taken as the second object information. If the recognition result obtained by the server in step 309 is object information A, object information B, and object information C, the first object information matching the recognition result in this step includes object information A, object information B, and object information C among all the first object information, and these three are taken as the second object information. If the recognition result obtained by the server in step 309 is object information F, there is no object information matching object information F among all the first object information.
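Step 310 reduces to intersecting the recognition result with all the first object information; a minimal sketch using the object names from this example:

```python
# Sketch of Step 310: the second object information is the part of the
# recognition result that matches the first object information of the
# target video (a set intersection). The object names are illustrative.
first_object_info = {"A", "B", "C", "D", "E"}

def determine_second_objects(recognition_result):
    return recognition_result & first_object_info

print(determine_second_objects({"A"}))                    # {'A'}
print(sorted(determine_second_objects({"A", "B", "C"})))  # ['A', 'B', 'C']
print(determine_second_objects({"F"}))                    # set(): no match
```

An empty result corresponds to the "no matching object information" case, where no second object information can be determined.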
  • Step 311 In a case where there is barrage information including at least one second object information in the stored barrage information, the server uses the barrage information including at least one second object information as target barrage information.
  • the stored barrage information includes barrage information associated with the third object information and/or barrage information associated with irrelevant information.
  • the server can directly determine whether the third object information associated with a piece of barrage information includes at least one piece of second object information. If it does, the barrage information associated with that third object information is used as target barrage information; the server does not need to re-analyze whether the barrage information itself includes at least one piece of second object information, which speeds up the identification of target barrage information to a certain extent, so the target barrage information can be quickly sent to the electronic device for display.
  • for example, if the second object information is object information A and barrage information 1 is associated with the third object information (object information A), it can be directly determined that the third object information (object information A) matches the second object information (object information A), so barrage information 1 is target barrage information. There is no need to perform semantic analysis on the barrage information again after the second object information is determined in order to decide which barrage information is target barrage information, so the efficiency of identifying target barrage information is improved to a certain extent.
  • it is determined whether, among the stored barrage information (barrage information associated with third object information and/or barrage information associated with irrelevant information), there is barrage information including at least one piece of second object information. If such barrage information exists, it is used as the target barrage information; if it does not exist, it is considered that no target barrage information is identified.
  • if the stored barrage information includes only barrage information associated with irrelevant information, the server will not identify any target barrage information, and the server can send prompt information to the electronic device to prompt the user to delineate an area on the target video again. If the stored barrage information includes barrage information associated with third object information, the barrage information including at least one piece of second object information can be identified among the stored barrage information and used as the target barrage information. For example, if the second object information includes object information A, object information B, and object information C, then according to Table 3, the target barrage information includes barrage information 1 through barrage information 8.
  • Step 312: If there are multiple pieces of second object information, the server classifies the target barrage information according to each piece of second object information and obtains target barrage information of each category.
  • the server can classify the target barrage information according to each piece of second object information and obtain target barrage information of each category, as shown in Table 4 below:
  • A piece of barrage information can include multiple pieces of object information, such as object information A, object information B, and object information C. Barrage information that includes multiple pieces of object information can be grouped into the category of object information A, object information B, or object information C, or it can form a category of its own independent of object information A, object information B, and object information C; this application does not limit this. In this embodiment, each piece of barrage information is assumed to include only one piece of second object information as an example.
  • The target barrage information of category 1 includes barrage information 1, barrage information 4, barrage information 7, and barrage information 8;
  • the target barrage information of category 2 includes barrage information 2, barrage information 3, and barrage information 6;
  • the target barrage information of category 3 includes barrage information 5.
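As a minimal sketch of the classification in step 312, assuming (as the embodiment does) that each piece of barrage information contains exactly one piece of second object information; all names are illustrative:

```python
from collections import defaultdict

def classify_by_object(target_barrage, second_object_info):
    """Group target barrage ids into one category per second-object name.

    target_barrage: mapping of barrage id -> set of object names it
    contains (the output of the identification step).
    second_object_info: ordered list of second-object names; each name
    defines one category.
    """
    categories = defaultdict(list)
    for barrage_id, objects in sorted(target_barrage.items()):
        for name in second_object_info:
            if name in objects:
                categories[name].append(barrage_id)
    return dict(categories)
```

Applied to the example, category A contains barrage information 1, 4, 7, and 8, category B contains 2, 3, and 6, and category C contains 5, matching Table 4.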
  • Step 313: The server sends the target barrage information of each category to the electronic device.
  • the electronic device receives the target barrage information of each category sent by the server; and displays the target barrage information of each category according to each category.
  • Referring to FIG. 5, there is shown a schematic diagram of the display of target barrage information of each category provided in an embodiment of the present application.
  • "A" shown in FIG. 5 is the object corresponding to object information A (second object information), and the target barrage information of category 1 is displayed on the right side of A.
  • A group of "★" represents a piece of barrage information.
  • The right side of category 1 shows four pieces of target barrage information (barrage information 1, barrage information 4, barrage information 7, and barrage information 8); the other categories of target barrage information are not introduced one by one.
  • the server generates a thumbnail according to each of the second object information
  • the server sends the thumbnail to the electronic device for the electronic device to display the thumbnail.
  • The server can send the thumbnail to the electronic device after sending the target barrage information of each category to the electronic device, before sending the target barrage information of each category, or together with the target barrage information of each category.
  • This application does not limit the timing of the server sending the thumbnail to the electronic device.
  • the electronic device receives the thumbnails sent by the server, the thumbnails are generated by the server according to each second object information; the thumbnails are displayed.
  • the electronic device may receive a second input from the user to the thumbnail; in response to the second input, control the moving direction and moving distance of the target barrage information of each category, or hide the thumbnail.
  • the electronic device may display a thumbnail 501 in the upper left corner of FIG. 5.
  • The second input may be an input in which the user drags the thumbnail 501 to slide up, down, or to the left. If the thumbnail 501 is dragged to slide up, the target barrage information of each category slides up; if the thumbnail 501 is dragged to slide down, the target barrage information of each category slides down; if the thumbnail 501 is dragged to slide to the left, the thumbnail 501 is hidden. It should be noted that the sliding distance of the target barrage information of each category may be the same as the sliding distance of the thumbnail 501. Through this step, the user can control the display position of the barrage in the target video or hide the thumbnail.
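A rough sketch of how the second input could be interpreted; the coordinate convention (negative `dy` meaning an upward drag), the dominance test, and all names are assumptions for illustration only:

```python
def handle_thumbnail_drag(dx, dy):
    """Map a drag on the thumbnail to an action on the category rows.

    Up/down drags scroll all category rows by the same distance as the
    thumbnail, as the text notes; a predominantly leftward drag hides
    the thumbnail. Returns (action, distance).
    """
    if dx < 0 and abs(dx) >= abs(dy):
        return ("hide_thumbnail", 0)
    if dy != 0:
        direction = "scroll_up" if dy < 0 else "scroll_down"
        return (direction, abs(dy))
    return ("none", 0)
```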
  • Referring to FIG. 6, there is shown a schematic diagram of object icon display provided in an embodiment of the present application.
  • The third input may be the user clicking the thumbnail 501 shown in FIG. 5. In response to the third input, the electronic device expands the object icons corresponding to each category included in the thumbnail 501, as shown in FIG. 6: the object icon corresponding to category 1 is "A", the object icon corresponding to category 2 is "B", and the object icon corresponding to category 3 is "C".
  • Referring to FIG. 7, there is shown a schematic diagram of sorting and highlighting of target barrage information provided in an embodiment of the present application.
  • The fourth input may be the user dragging an object icon to change its display position and thereby change the display order of the target barrage information of the category corresponding to the dragged object icon. As shown in FIG. 7, after the user drags object icon A to the position of object icon B in FIG. 6, the positions of object icon A and object icon B are exchanged, and the positions of the target barrage information of category 1 and category 2 are exchanged accordingly: the target barrage information of category 1 moves from the first row of FIG. 6 to the second row of FIG. 7, and the target barrage information of category 2 moves from the second row of FIG. 6 to the first row.
  • the target category corresponding to the second target object icon is determined, the target barrage information of the target category is highlighted, and the target barrage information of the target category after the highlighting process is displayed.
  • the fifth input can be the user's operation to click on the object icon expanded from the thumbnail, as shown in Figure 7.
  • the electronic device can determine that the target category corresponding to the object icon A is category 1.
  • The target barrage information of the target category is highlighted, and the highlighted target barrage information of the target category is displayed.
  • Highlight processing is performed on the target barrage information of the target category, for example by bolding, enlarging, or changing the color of its font.
  • FIG. 7 shows the highlighting process of bolding the font of the target barrage information of category 1.
  • the server 800 includes:
  • the first acquiring module 810 is configured to acquire the recognition result determined after the electronic device recognizes the target image area, the target image area being determined by the electronic device according to the user's first input of the target video, and the target video is The video currently playing on the electronic device;
  • the determining module 820 is configured to determine second object information from the first object information included in the target video according to the recognition result;
  • the identification module 830 is configured to identify target barrage information from the stored barrage information, where the target barrage information includes at least one of the second object information;
  • the sending module 840 is configured to send the target barrage information to the electronic device.
  • FIG. 9 a block diagram of another server 900 provided in an embodiment of the present application is shown, and the server 900 may further include:
  • the second obtaining module 910 is configured to obtain barrage information of the target video
  • the analysis module 920 is configured to perform semantic analysis on the barrage information to obtain at least one keyword of the barrage information;
  • a matching module 930 configured to match each of the keywords included in the barrage information with all the first object information
  • the storage module 940 is configured to, when one of the keywords matches one piece of third object information among all the first object information, store the third object information and the barrage information associated with the third object information;
  • the server 900 may further include:
  • the removing module 950 is configured to remove the symbol information in the barrage information when the barrage information includes the symbol information;
  • the analysis module 920 is specifically configured to perform semantic analysis on the barrage information from which the symbol information has been removed, and obtain at least one keyword of the barrage information.
  • the storage module 940 is further configured to store irrelevant information and the barrage information associated with the irrelevant information when none of the keywords matches any piece of the third object information.
  • the identification module 830 is specifically configured to, in the case that the stored barrage information contains barrage information including at least one piece of the second object information, use the barrage information including at least one piece of the second object information as the target barrage information, where the stored barrage information includes barrage information associated with the third object information and/or barrage information associated with the irrelevant information.
  • the determining module 820 is specifically configured to: in the case that the recognition result matches at least one piece of first object information in all the first object information, select the one that matches the recognition result The first object information serves as the second object information.
  • the server 900 may further include:
  • the category division module 960 is configured to, if there are multiple second object information, classify the target barrage information according to each of the second object information to obtain target barrage information of each category ;
  • the sending module 840 is specifically configured to send target barrage information of each category to the electronic device.
  • the sending module 840 is further configured to generate a thumbnail according to each of the second object information; and send the thumbnail to the electronic device for the electronic device to display the thumbnail.
  • the electronic device 1000 includes:
  • the first receiving module 1010 is configured to receive a user's first input to the target video played on the electronic device;
  • the determining module 1020 is configured to determine a target frame image of the target video in response to the first input, and determine a target image area on the target frame image;
  • the recognition module 1030 is configured to recognize the target image area, obtain the recognition result, and send the recognition result to the server, so that the server can obtain the first object information included in the target video according to the recognition result Determining second object information in the, and identifying target barrage information from the stored barrage information, where the target barrage information includes at least one of the second object information;
  • the second receiving module 1040 is configured to receive the target barrage information sent by the server;
  • the target barrage information display module 1050 is configured to display the target barrage information.
  • the determining module 1020 is specifically configured to: in response to the first input, determine a scribe trajectory corresponding to the first input; in the case that the scribe trajectory forms a non-closed area, supplement the non-closed area into a closed area and use the closed area on the target frame image as the target image area; and in the case that the scribe trajectory forms a closed area, use the closed area formed by the scribe trajectory as the target image area.
  • the second receiving module 1040 is specifically configured to receive target barrage information of each category sent by the server, where the target barrage information of each category is determined by the server according to each second object The information is obtained by classifying the target barrage information; and displaying the target barrage information of each category according to each of the categories.
  • FIG. 11 a block diagram of another electronic device 1100 provided in an embodiment of the present application is shown, and the electronic device 1100 may further include:
  • the third receiving module 1110 is configured to receive thumbnails sent by the server, where the thumbnails are generated by the server according to each of the second object information;
  • the thumbnail display module 1120 is configured to display the thumbnail
  • the fourth receiving module 1130 is configured to receive a second input from the user to the thumbnail
  • the control module 1140 is configured to control the moving direction and the moving distance of the target barrage information of each category in response to the second input, or hide the thumbnail.
  • FIG. 12 a block diagram of another electronic device 1200 provided in an embodiment of the present application is shown, and the electronic device 1200 may further include:
  • the fifth receiving module 1210 is configured to receive the third input of the user to the thumbnail
  • the object icon expansion module 1220 is configured to expand the object icon corresponding to each category included in the thumbnail in response to the third input;
  • the sixth receiving module 1230 is configured to receive the fourth input of the user on the first target object icon among all the object icons;
  • the sorting module 1240 is configured to, in response to the fourth input, adjust the position of the first target object icon in the thumbnail, sort the target barrage information of each category according to the position of the object icon corresponding to each category in the thumbnail, and obtain a sorting result;
  • the sorting result display module 1250 is used to display the sorting result.
  • the electronic device 1200 may further include:
  • the seventh receiving module 1260 is configured to receive the fifth input of the user on the second target object icon among all the object icons;
  • the processing module 1270 is configured to determine the target category corresponding to the second target object icon in response to the fifth input, perform highlight processing on the target barrage information of the target category, and display the highlighted target Target barrage information of the target category.
  • FIG. 13 is a schematic diagram of the hardware structure of an electronic device that implements each embodiment of the present application.
  • the electronic device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, and Power supply 1311 and other components.
  • Those skilled in the art can understand that the structure of the electronic device shown in FIG. 13 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown in the figure, combine certain components, or have a different component layout. In the embodiments of the present application, electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, and the like.
  • The processor 1310 is configured to: receive a user's first input to the target video played on the electronic device; in response to the first input, determine a target frame image of the target video and determine a target image area on the target frame image; recognize the target image area, obtain a recognition result, and send the recognition result to the server, so that the server determines second object information from the first object information included in the target video according to the recognition result and identifies target barrage information from the stored barrage information, where the target barrage information includes at least one piece of the second object information; and receive and display the target barrage information sent by the server. In this way, the target barrage information related to the object information of the target image area that the user is interested in is displayed, and the personalized needs of the user are met.
  • The radio frequency unit 1301 can be used for receiving and sending signals in the process of sending and receiving information or during a call. Specifically, after downlink data from the base station is received, it is processed by the processor 1310; in addition, uplink data is sent to the base station.
  • the radio frequency unit 1301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 1301 can also communicate with the network and other devices through a wireless communication system.
  • Electronic devices provide users with wireless broadband Internet access through the network module 1302, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 1303 may convert the audio data received by the radio frequency unit 1301 or the network module 1302 or stored in the memory 1309 into audio signals and output them as sounds. Moreover, the audio output unit 1303 may also provide audio output related to a specific function performed by the electronic device 1300 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 1303 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 1304 is used to receive audio or video signals.
  • The input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042. The graphics processor 13041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode, and the processed image frames can be displayed on the display unit 1306.
  • the image frame processed by the graphics processor 13041 may be stored in the memory 1309 (or other storage medium) or sent via the radio frequency unit 1301 or the network module 1302.
  • the microphone 13042 can receive sound, and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be sent to the mobile communication base station via the radio frequency unit 1301 for output in the case of a telephone call mode.
  • the electronic device 1300 further includes at least one sensor 1305, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 13061 according to the brightness of the ambient light.
  • The proximity sensor can turn off the display panel 13061 and/or the backlight when the electronic device 1300 is moved to the ear.
  • The accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes) and the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the electronic device (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tapping). The sensors 1305 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not repeated here.
  • the display unit 1306 is used to display information input by the user or information provided to the user.
  • the display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the user input unit 1307 can be used to receive input numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 1307 includes a touch panel 13071 and other input devices 13072.
  • The touch panel 13071, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 13071 with a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 13071 may include two parts, a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 1310, and receives and executes commands sent by the processor 1310.
  • the touch panel 13071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 1307 may also include other input devices 13072.
  • other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • Further, the touch panel 13071 can cover the display panel 13061. When the touch panel 13071 detects a touch operation on or near it, it transmits the operation to the processor 1310 to determine the type of the touch event, and the processor 1310 then provides corresponding visual output on the display panel 13061 according to the type of the touch event. In FIG. 13 the touch panel 13071 and the display panel 13061 are two independent components realizing the input and output functions of the electronic device, but in some embodiments the touch panel 13071 and the display panel 13061 can be integrated to realize the input and output functions of the electronic device; this is not specifically limited here.
  • the interface unit 1308 is an interface for connecting an external device and the electronic device 1300.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • the interface unit 1308 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the electronic device 1300 or can be used to connect to the electronic device 1300 and the external device. Transfer data between devices.
  • the memory 1309 can be used to store software programs and various data.
  • The memory 1309 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book).
  • In addition, the memory 1309 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The processor 1310 is the control center of the electronic device. It connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 1309 and calling data stored in the memory 1309, thereby monitoring the electronic device as a whole.
  • The processor 1310 may include one or more processing units; optionally, the processor 1310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1310.
  • The electronic device 1300 may also include a power source 1311 (such as a battery) for supplying power to the various components. Optionally, the power source 1311 may be logically connected to the processor 1310 through a power management system, so as to realize functions such as managing charging, discharging, and power consumption through the power management system.
  • the electronic device 1300 includes some functional modules not shown, which will not be repeated here.
  • Optionally, an embodiment of the present application further provides an electronic device, including a processor 1310, a memory 1309, and a computer program stored in the memory 1309 and executable on the processor 1310; when the computer program is executed by the processor 1310, each process of the above barrage information display method embodiment is realized, with the same technical effect, which is not repeated here to avoid repetition.
  • The embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, each process of the above method for identifying barrage information is realized, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • The computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.


Abstract

An embodiment of the present application provides a barrage information identification method, a display method, a server, and an electronic device. The method includes: obtaining a recognition result determined after an electronic device recognizes a target image area, the target image area being determined by the electronic device according to a user's first input to a target video, the target video being the video currently playing on the electronic device; determining second object information from first object information included in the target video according to the recognition result; identifying target barrage information from stored barrage information, the target barrage information including at least one piece of the second object information; and sending the target barrage information to the electronic device.

Description

Barrage Information Identification Method, Display Method, Server, and Electronic Device
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 201910990386.8, filed with the China National Intellectual Property Administration on October 17, 2019 and entitled "Barrage Information Identification Method, Display Method, Server, and Electronic Device", the entire contents of which are incorporated herein by reference.
Technical field
The embodiments of the present application relate to the field of data processing technology, and in particular to a barrage information identification method, a display method, a server, and an electronic device.
Background
With the rapid growth in the number of electronic devices and the improvement of the speed and stability of mobile networks, electronic device applications that combine the characteristics of mobile communication networks have inevitably become an important need of users.
At present, when a user watching a video on an electronic device wants to see barrage information, turning on the barrage switch brings up a large amount of barrage information. This information consists of comments or quips posted by other users watching the same video, and each user's barrage appears at a specific time point of the video. Barrage information can increase interaction between users and make watching videos more interesting.
However, when users need to block barrages they are not interested in, they generally have to turn off the barrage switch entirely, which impairs the interactive experience between users and removes some of the fun of watching videos. Therefore, how to push barrage information that users are really interested in has gradually become a technical problem to be solved urgently in the field of personalized services.
Summary
The embodiments of the present application provide a barrage information identification method, a display method, a server, and an electronic device, to solve the problem in the related art that barrage information that users are really interested in cannot be pushed to them, and that users' personalized needs for barrage information cannot be enriched and satisfied.
To solve the above technical problem, the present application is implemented as follows:
According to a first aspect of the embodiments of the present application, an embodiment of the present application provides a barrage information identification method, applied to a server, the method including:
obtaining a recognition result determined after an electronic device recognizes a target image area, the target image area being determined by the electronic device according to a user's first input to a target video, the target video being the video currently playing on the electronic device;
determining second object information from first object information included in the target video according to the recognition result;
identifying target barrage information from stored barrage information, the target barrage information including at least one piece of the second object information; and
sending the target barrage information to the electronic device.
According to a second aspect of the embodiments of the present application, an embodiment of the present application further provides a barrage information display method, applied to an electronic device, the method including:
receiving a user's first input to a target video played on the electronic device;
in response to the first input, determining a target frame image of the target video, and determining a target image area on the target frame image;
recognizing the target image area, obtaining a recognition result, and sending the recognition result to a server, so that the server determines second object information from first object information included in the target video according to the recognition result and identifies target barrage information from stored barrage information, the target barrage information including at least one piece of the second object information;
receiving the target barrage information sent by the server; and
displaying the target barrage information.
According to a third aspect of the embodiments of the present application, an embodiment of the present application further provides a server, including:
a first acquiring module, configured to obtain a recognition result determined after an electronic device recognizes a target image area, the target image area being determined by the electronic device according to a user's first input to a target video, the target video being the video currently playing on the electronic device;
a determining module, configured to determine second object information from first object information included in the target video according to the recognition result;
an identification module, configured to identify target barrage information from stored barrage information, the target barrage information including at least one piece of the second object information; and
a sending module, configured to send the target barrage information to the electronic device.
According to a fourth aspect of the embodiments of the present application, an embodiment of the present application further provides an electronic device, including:
a first receiving module, configured to receive a user's first input to a target video played on the electronic device;
a determining module, configured to determine a target frame image of the target video in response to the first input, and determine a target image area on the target frame image;
a recognition module, configured to recognize the target image area, obtain a recognition result, and send the recognition result to a server, so that the server determines second object information from first object information included in the target video according to the recognition result and identifies target barrage information from stored barrage information, the target barrage information including at least one piece of the second object information;
a second receiving module, configured to receive the target barrage information sent by the server; and
a target barrage information display module, configured to display the target barrage information.
According to a fifth aspect of the embodiments of the present application, an embodiment of the present application further provides a server, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the barrage information identification method described above.
According to a sixth aspect of the embodiments of the present application, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the barrage information display method described above.
In the embodiments of the present application, the recognition result determined after the electronic device recognizes the target image area is obtained, and second object information is determined from the first object information included in the target video according to the recognition result, so that the determined second object information belongs both to the first object information of the target video and to the object information of the target image area. Therefore, the target barrage information identified from the stored barrage information, which includes at least one piece of second object information, is barrage information related both to the object information in the target image area and to the target video; that is, the target barrage information is barrage information the user is interested in. The target barrage information is sent to the electronic device, so that the electronic device can display it; in this way, barrage information that the user is really interested in can be pushed to the user, enriching and satisfying the user's personalized needs for barrage information.
Brief description of the drawings
FIG. 1 shows a flowchart of a barrage information identification method provided in an embodiment of the present application;
FIG. 2 shows a schematic diagram of a target video provided in an embodiment of the present application;
FIG. 3 shows a flowchart of another barrage information identification method provided in an embodiment of the present application;
FIG. 4 shows a schematic diagram of supplementing a non-closed area into a closed area provided in an embodiment of the present application;
FIG. 5 shows a schematic diagram of the display of target barrage information of each category provided in an embodiment of the present application;
FIG. 6 shows a schematic diagram of object icon display provided in an embodiment of the present application;
FIG. 7 shows a schematic diagram of sorting and highlighting of target barrage information provided in an embodiment of the present application;
FIG. 8 shows a block diagram of a server 800 provided in an embodiment of the present application;
FIG. 9 shows a block diagram of another server 900 provided in an embodiment of the present application;
FIG. 10 shows a block diagram of an electronic device 1000 provided in an embodiment of the present application;
FIG. 11 shows a block diagram of another electronic device 1100 provided in an embodiment of the present application;
FIG. 12 shows a block diagram of yet another electronic device 1200 provided in an embodiment of the present application;
FIG. 13 is a schematic diagram of the hardware structure of an electronic device implementing the embodiments of the present application.
Detailed description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative work fall within the protection scope of the present application.
The present application is described in detail below with reference to specific embodiments.
Referring to FIG. 1, a flowchart of a barrage information identification method provided in an embodiment of the present application is shown. The method is applied to a server and may specifically include the following steps:
Step 101: Obtain a recognition result determined after an electronic device recognizes a target image area, where the target image area is determined by the electronic device according to a user's first input to a target video, and the target video is the video currently playing on the electronic device.
The electronic device may include a smartphone, a tablet computer, a notebook computer, and the like; the above examples are only illustrative, and this application is not limited thereto. The first input may be an operation in which the user draws a line on the target video with a mouse to delineate the target image area in the target video, or an operation in which a finger draws a line on the target video to delineate the target image area. The embodiment of the present application does not limit the form of the first input.
For example, referring to FIG. 2, a schematic diagram of a target video provided in an embodiment of the present application is shown. The inner area of the circle 201 in FIG. 2 is the target image area determined by the electronic device 202 according to the user's first input to the target video; the target image area is a closed area. The first input may be a single line-drawing operation by the user or multiple line-drawing operations, and the number of target image areas delineated by a single line-drawing operation may be one or more; for example, a stroke shaped like the digit "8" delineates two target image areas, that is, two closed areas. The first input may also delineate multiple closed areas through multiple line-drawing operations; this application does not limit the number of target image areas. Through line-drawing operations, the user can delineate the image areas he or she is interested in. It should be noted that a group of "★" in FIG. 2 represents a piece of barrage information, and the barrage information is posted by users watching the target video.
It should be noted that the shape of the target image area may be a circular area as in FIG. 2, or a closed area of another shape such as an elliptical area or a square area; this application does not limit this.
The electronic device recognizes the target image area and obtains a recognition result, and may send the recognition result to the server, so that the server obtains the recognition result determined after the electronic device recognizes the target image area. The recognition result includes the object information contained in the target image area. For example, the recognition result determined by the electronic device after recognizing the target image area in FIG. 2 is object information A, and object information A may be the name of object A.
步骤102、根据识别结果,从目标视频包括的第一对象信息中确定第二对象信息。
目标视频包括的第一对象信息可以为对象的名称。对象例如为某个人或者某个物体等。如果目标视频为时长40分钟的视频,则目标视频包括的第一对象信息为该40分钟内的所有帧视频图像中的对象的信息。以第一对象信息包括对象信息A、对象信息B、对象信息C、对象信息D、以及对象信息E为例,如果在步骤101中确定的识别结果为对象信息A,则在本步骤中根据识别结果,确定的第二对象信息为第一对象信息中的对象信息A。如果在步骤101中确定的识别结果为对象信息A和对象信息F,则在本步骤中根据识别结果,确定的第二对象信息为第一对象信息中的对象信息A,即确定的第二对象信息是属于第一对象信息中的信息。
步骤103、从存储的弹幕信息中识别目标弹幕信息,目标弹幕信息包括至少一个第二对象信息。
存储的弹幕信息为观看目标视频的用户通过电子设备向服务器发送的弹幕信息,服务器可以存储弹幕信息,并从存储的弹幕信息中识别包括至少一个第二对象信息的弹幕信息。例如如果在步骤102中确定的第二对象信息为对象信息A,则从存储的弹幕信息中识别包括对象信息A的弹幕信息。如果在步骤102中确定的第二对象信息为对象信息A、对象信息B和对象信息C,则从存储的弹幕信息中识别包括至少一个第二对象信息的目标弹幕信息。
为了更加清楚介绍上述步骤,下面结合表1进行说明。如果目标视频包括的第一对象信息为:对象信息A、对象信息B、对象信息C、对象信息D、以及对象信息E,服务器中共存储10条弹幕信息,确定的第二对象信息为对象信息A、对象信息B和对象信息C。其中,10条弹幕信息中的弹幕信息1包括对象信息A,弹幕信息2包括对象信息B,弹幕信息3包括对象信息B,弹幕信息4包括对象信息A,弹幕信息5包括对象信息C,弹幕信息6包括对象信息B,弹幕信息7包括对象信息A,弹幕信息8包括对象信息A,弹幕信息9包括对象信息E,弹幕信息10包括对象信息F。如下表1所示,第一列为存储的弹幕信息,第二列为弹幕信息包括的对象信息,第三列为确定的第二对象信息,第四列为弹幕信息是否为目标弹幕信息。例如,由于弹幕信息1包括对象信息A,因此识别出弹幕信息1是目标弹幕信息,同样可以识别出弹幕信息2至弹幕信息8均是目标弹幕信息。
存储的弹幕信息 包括的对象信息 确定的第二对象信息 是否为目标弹幕信息
弹幕信息1 对象信息A 对象信息A、B、C 是
弹幕信息2 对象信息B 对象信息A、B、C 是
弹幕信息3 对象信息B 对象信息A、B、C 是
弹幕信息4 对象信息A 对象信息A、B、C 是
弹幕信息5 对象信息C 对象信息A、B、C 是
弹幕信息6 对象信息B 对象信息A、B、C 是
弹幕信息7 对象信息A 对象信息A、B、C 是
弹幕信息8 对象信息A 对象信息A、B、C 是
弹幕信息9 对象信息E 对象信息A、B、C 否
弹幕信息10 对象信息F 对象信息A、B、C 否
表1
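步骤102至步骤103的筛选逻辑可以用如下Python代码草图示意(其中的函数名与示例数据均为本文为说明而假设的,并非本申请的实际实现;这里简单地以对象信息是否出现在弹幕文本中作为"包括"的判定):

```python
# 示意:根据第二对象信息从存储的弹幕信息中筛选目标弹幕信息

def find_target_danmaku(stored_danmaku, second_objects):
    """返回包含至少一个第二对象信息的弹幕列表(即目标弹幕信息)。"""
    return [d for d in stored_danmaku
            if any(obj in d for obj in second_objects)]

danmaku = ["对象A真帅", "对象B的演技不错", "今天天气真好"]
targets = find_target_danmaku(danmaku, ["对象A", "对象B"])
# targets 中只保留了与对象A或对象B相关的两条弹幕
```

与表1对应:若第二对象信息为对象信息A、B和C,则弹幕信息1至弹幕信息8会被筛选为目标弹幕信息,弹幕信息9、10被排除。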
步骤104、向电子设备发送目标弹幕信息。
本实施例提供的弹幕信息识别方法,通过获取电子设备对目标图像区域进行识别后确定的识别结果,根据识别结果,从目标视频包括的第一对象信息中确定第二对象信息,从而使确定的第二对象信息不但属于目标视频中的第一对象信息,而且属于目标图像区域中的对象信息。因此,从存储的弹幕信息中识别出的包括至少一个第二对象信息的目标弹幕信息,是与目标图像区域中的对象信息相关、且与目标视频相关的弹幕信息,也即用户感兴趣的弹幕信息。向电子设备发送目标弹幕信息,使电子设备可以展示目标弹幕信息,从而能够为用户推送用户真正感兴趣的弹幕信息,丰富并满足用户对弹幕信息的个性化需求。
参照图3,示出了本申请实施例中提供的另一种弹幕信息识别方法的流程图。
步骤301、电子设备接收用户对电子设备上播放的目标视频的第一输入。
步骤302、电子设备响应于第一输入,确定目标视频的目标帧图像,并确定目标帧图像上的目标图像区域。
其中,响应于第一输入,确定目标视频的目标帧图像,并确定目标帧图像上的目标图像区域,可以通过如下步骤实现:
响应于第一输入,确定与第一输入对应的划线轨迹;
确定与第一输入的结束时刻对应的目标视频的帧图像,将与第一输入的结束时刻对应的目标视频的帧图像作为目标帧图像;
在划线轨迹为非闭合区域的情况下,将非闭合区域补充为闭合区域,并将目标帧图像上的闭合区域作为目标图像区域;
在划线轨迹不为非闭合区域的情况下,将划线轨迹组成的闭合区域作为目标图像区域。
需要说明的是,将与第一输入的结束时刻对应的目标视频的帧图像作为目标帧图像,可以保证根据划线轨迹在目标帧图像上确定的目标图像区域所包括的对象信息的完整性。例如,用户在进行画圈的划线操作时,画了半个圆圈时的一帧图像可能只包括A对象,而整个画圈结束时的一帧图像包括A对象和B对象,用户的划线轨迹为包括A对象和B对象的闭合区域。如果以画了半个圆圈时的一帧图像作为目标帧图像,则目标图像区域包括的对象信息可能只包括A对象,从而使目标图像区域包括的对象信息不够完整。
其中,在划线轨迹为非闭合区域的情况下,需要将非闭合区域补充为闭合区域。例如参照图4,图4示出了本申请实施例中提供的一种非闭合区域补充为闭合区域的示意图,即通过图4的虚线将划线轨迹对应的非闭合区域补充为闭合区域。
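将非闭合的划线轨迹补充为闭合区域,可以用如下Python草图示意(轨迹以坐标点列表表示;函数名与距离阈值tol均为本文假设,并非本申请限定的实现方式):

```python
# 示意:若划线轨迹首尾未相接,则补上一条首尾连线使其闭合(对应图4中的虚线)

def close_trajectory(points, tol=10.0):
    """points 为划线轨迹上的坐标点列表。
    若首尾两点的距离超过阈值 tol,认为轨迹非闭合,
    补充首尾连线(追加起点)得到闭合区域;否则原样返回。"""
    (x0, y0), (x1, y1) = points[0], points[-1]
    if (x1 - x0) ** 2 + (y1 - y0) ** 2 > tol ** 2:
        points = points + [points[0]]  # 虚线补全:终点连回起点
    return points
```

实际实现也可以用更平滑的曲线补全首尾,这里仅以直线连接说明"非闭合区域补充为闭合区域"的思路。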
步骤303、电子设备对目标图像区域进行识别,获得识别结果,并向服务器发送识别结果。
步骤304、服务器获取目标视频的弹幕信息。
步骤305、服务器对弹幕信息进行语义分析,获得弹幕信息的至少一个关键字。
需要说明的是,在步骤305、对弹幕信息进行语义分析,获得弹幕信息的至少一个关键字之前,还可以包括如下步骤:
在弹幕信息中包括符号信息的情况下,去除弹幕信息中的符号信息。
如果弹幕信息中包括符号信息,例如标点符号、表情符号等符号信息,则可以先去除符号信息。如果一条弹幕信息中全是符号信息,则去除符号信息后无需再对该弹幕信息进行语义分析。针对某条弹幕信息中包括较多的符号信息,去除符号信息后只剩余的有效文字信息,有利于提高对有效文字信息进行语义分析的速率。
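去除符号信息的步骤可以用如下Python草图示意(仅保留中文、英文字母与数字的规则是本文为说明而假设的一种做法,实际可按需调整保留的字符集):

```python
import re

def strip_symbols(text):
    """去除弹幕中的标点、表情等符号信息,仅保留中文、英文与数字。"""
    return re.sub(r"[^\u4e00-\u9fffA-Za-z0-9]", "", text)

strip_symbols("对象A太棒了!!!")  # -> "对象A太棒了"
strip_symbols("!!!…")            # 全是符号信息,去除后为空串,无需再做语义分析
```

去除符号后若剩余为空串,即对应正文中"一条弹幕信息中全是符号信息"的情况,可直接跳过语义分析。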
相应的,步骤305对弹幕信息进行语义分析,获得弹幕信息的至少一个关键字可以通过如下步骤实现:
对去除符号信息的弹幕信息进行语义分析,获得弹幕信息的至少一个关键字。
其中,关键字例如为对象的名称。
步骤306、将弹幕信息包括的每个关键字与所有第一对象信息进行匹配。
在每个关键字中的一个关键字与所有第一对象信息中的一个第三对象信息匹配的情况下,执行步骤307;在每个关键字均未与任意一个第三对象信息匹配的情况下,执行步骤308。
例如,如果一个弹幕信息的关键字包括:对象A的名称(关键字1)、对象B的名称(关键字2),以此类推,对象C的名称(关键字3),在此不一一赘述。第一对象信息中包括对象信息A、对象信息B、对象信息C、对象信息D、以及对象信息E,则该弹幕信息中的关键字1与对象信息A匹配,关键字2与对象信息B匹配,对象信息A即为所有第一对象信息中的一个第三对象信息,对象信息B也为所有第一对象信息中的一个第三对象信息。
例如,结合下表2介绍将弹幕信息包括的每个关键字与所有第一对象信息进行匹配的情况。以上述实施例中举例介绍的10条弹幕信息为例。对弹幕信息1进行语义分析后,获得弹幕信息1的关键字为关键字1;对弹幕信息2进行语义分析后,获得弹幕信息2的关键字为关键字2;对弹幕信息3进行语义分析后,获得弹幕信息3的关键字为关键字2;对弹幕信息4进行语义分析后,获得弹幕信息4的关键字为关键字1;对弹幕信息5进行语义分析后,获得弹幕信息5的关键字为关键字3(C对象的名称);对弹幕信息6进行语义分析后,获得弹幕信息6的关键字为关键字2;对弹幕信息7进行语义分析后,获得弹幕信息7的关键字为关键字1;对弹幕信息8进行语义分析后,获得弹幕信息8的关键字为关键字1;对弹幕信息9进行语义分析后,获得弹幕信息9的关键字为关键字5(E对象的名称);对弹幕信息10进行语义分析后,获得弹幕信息10的关键字为关键字6(F对象的名称)。如下表2所示,第一列为弹幕信息,第二列为弹幕信息的关键字,第三列为目标视频的所有第一对象信息。
弹幕信息 关键字 所有第一对象信息
弹幕信息1 关键字1 对象信息A、B、C、D、E
弹幕信息2 关键字2 对象信息A、B、C、D、E
弹幕信息3 关键字2 对象信息A、B、C、D、E
弹幕信息4 关键字1 对象信息A、B、C、D、E
弹幕信息5 关键字3 对象信息A、B、C、D、E
弹幕信息6 关键字2 对象信息A、B、C、D、E
弹幕信息7 关键字1 对象信息A、B、C、D、E
弹幕信息8 关键字1 对象信息A、B、C、D、E
弹幕信息9 关键字5 对象信息A、B、C、D、E
弹幕信息10 关键字6 对象信息A、B、C、D、E
表2
第一对象信息可以为对象的名称,例如对象信息A即为对象A的名称,对象信息B即为对象B的名称。由于关键字为对象的名称,第一对象信息也为对象的名称,因此可以将关键字与第一对象信息进行匹配。由上述表2以及介绍可知,弹幕信息1的关键字1与所有第一对象信息中的对象信息A匹配;弹幕信息2的关键字2与所有第一对象信息中的对象信息B匹配;弹幕信息3的关键字2与所有第一对象信息中的对象信息B匹配;弹幕信息4的关键字1与所有第一对象信息中的对象信息A匹配;弹幕信息5的关键字3与所有第一对象信息中的对象信息C匹配;其他弹幕信息与所有第一对象信息中的一个第三对象信息是否匹配不再一一举例说明。需要说明的是,由于弹幕信息10的关键字为关键字6(对象F的名称),所有第一对象信息中不存在任何一个第三对象信息与关键字6匹配。
步骤307、服务器存储第三对象信息、以及与第三对象信息关联的弹幕信息。
由上述表2以及介绍可知,弹幕信息1的关键字1与所有第一对象信息中的对象信息A(第三对象信息)匹配,则存储对象信息A、以及与对象信息A关联的弹幕信息1;同样,弹幕信息2的关键字2与所有第一对象信息中的对象信息B(第三对象信息)匹配,则存储对象信息B、以及与对象信息B关联的弹幕信息2。其余弹幕信息的关键字与第三对象信息的匹配及存储情况不再一一举例介绍,具体可以参照如下表3中示出的第三对象信息、以及与第三对象信息关联的弹幕信息。参照表3所示:
第三对象信息 与第三对象信息关联的弹幕信息
对象信息A 弹幕信息1
对象信息B 弹幕信息2
对象信息B 弹幕信息3
对象信息A 弹幕信息4
对象信息C 弹幕信息5
对象信息B 弹幕信息6
对象信息A 弹幕信息7
对象信息A 弹幕信息8
对象信息E 弹幕信息9
表3
步骤308、服务器存储无关信息、以及与无关信息关联的弹幕信息。
由于弹幕信息10包括的关键字为关键字6,而所有第一对象信息中不存在任意一个第三对象信息与关键字6匹配,因此,可以确定该弹幕信息为无关信息,即该弹幕信息是与目标图像区域中的对象信息无关的信息,则存储无关信息、以及与无关信息关联的弹幕信息10。
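步骤306至步骤308的匹配与存储过程可以用如下Python草图示意(函数名与数据结构均为本文假设;实际服务器端可能以数据库表存储关联关系,这里仅用内存列表说明逻辑):

```python
# 示意:将弹幕关键字与第一对象信息匹配,存储(第三对象信息, 弹幕)关联,
# 未匹配的弹幕归入"无关信息"

def index_danmaku(danmaku_keywords, first_objects):
    """danmaku_keywords: [(弹幕文本, [关键字, ...]), ...]
    first_objects: 目标视频的所有第一对象信息集合。
    返回 (对象信息或"无关信息", 弹幕) 的关联列表,对应表3的存储结构。"""
    index = []
    for danmaku, keywords in danmaku_keywords:
        matched = [k for k in keywords if k in first_objects]
        if matched:
            for obj in matched:           # 一条弹幕可关联多个第三对象信息
                index.append((obj, danmaku))
        else:                             # 步骤308:存储无关信息及其关联弹幕
            index.append(("无关信息", danmaku))
    return index
```

以表2中的弹幕信息1和弹幕信息10为例,前者会以(对象信息A, 弹幕信息1)存储,后者因关键字6不与任何第一对象信息匹配而以(无关信息, 弹幕信息10)存储。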
步骤309、服务器获取电子设备对目标图像区域进行识别后确定的识别结果。
步骤310、在识别结果与所有第一对象信息中的至少一个第一对象信息匹配的情况下,服务器将与识别结果匹配的第一对象信息作为第二对象信息。
结合上述实施例中的举例介绍,在此仍以所有第一对象信息包括对象信息A、对象信息B、对象信息C、对象信息D、以及对象信息E为例,如果在步骤309中服务器获取的识别结果中为对象信息A,则在本步骤中与识别结果匹配的至少一个第一对象信息为所有第一对象信息中对象信息A,则将对象信息A作为第二对象信息。如果在步骤309中服务器获取的识别结果中为对象信息A、对象信息B和对象信息C,则在本步骤中与识别结果匹配的至少一个第一对象信息包括所有第一对象信息中对象信息A、对象信息B和对象信息C,则将对象信息A、对象信息B和对象信息C都作为第二对象信息。如果在步骤309中服务器获取的识别结果为对象信息F,则所有第一对象信息中不存在任何一个与对象信息F匹配的对象信息。
步骤311、在存储的弹幕信息中存在包括至少一个第二对象信息的弹幕信息的情况下,服务器将包括至少一个第二对象信息的弹幕信息作为目标弹幕信息。
其中,存储的弹幕信息包括与第三对象信息关联的弹幕信息和/或与无关信息关联的弹幕信息。
需要说明的是,由于存储了第三对象信息、以及与第三对象信息关联的弹幕信息,也即第三对象信息与弹幕信息二者关联,服务器可以直接从与弹幕信息关联的第三对象信息中判断是否包括至少一个第二对象信息。如果与弹幕信息关联的第三对象信息中包括至少一个第二对象信息,则将与该第三对象信息关联的弹幕信息作为目标弹幕信息,无需服务器再识别弹幕信息中是否包括至少一个第二对象信息即可确定出目标弹幕信息,从而在一定程度上加快识别目标弹幕信息的效率,进而可以较快地将目标弹幕信息发送给电子设备进行展示。
例如,如果第二对象信息为对象信息A,以上述表3中的弹幕信息1为例,弹幕信息1与第三对象信息(对象信息A)关联,因此可以直接从二者的关联关系中确定第三对象信息(对象信息A)中包括与第二对象信息(对象信息A)匹配的对象信息,因此,弹幕信息1即为目标弹幕信息。无需在确定出第二对象信息后,再对弹幕信息进行语义分析,再判断出哪些弹幕信息属于目标弹幕信息,因此在一定程度上提高了识别出目标弹幕信息的效率。
具体的,可以判断存储的弹幕信息包括与第三对象信息关联的弹幕信息和/或与无关信息关联的弹幕信息是否存在包括至少一个第二对象信息的弹幕信息,如果存在包括至少一个第二对象信息的弹幕信息,则将包括至少一个第二对象信息的弹幕信息作为目标弹幕信息;如果不存在包括至少一个第二对象信息的弹幕信息,则认为未识别出目标弹幕信息。
需要说明的是,如果存储的弹幕信息全部都是与无关信息关联的弹幕信息,则服务器不会识别出目标弹幕信息,服务器可以向电子设备发送提示信息,以提示用户重新在目标视频上圈定一个区域。如果存储的弹幕信息包括与第三对象信息关联的弹幕信息,则可以从存储的弹幕信息中识别包括至少一个第二对象信息的弹幕信息,并将识别出的包括至少一个第二对象信息的弹幕信息作为目标弹幕信息。例如,第二对象信息包括对象信息A、对象信息B和对象信息C,则根据上述表3,目标弹幕信息包括弹幕信息1至弹幕信息8。
步骤312、若第二对象信息为多个,则服务器根据每个第二对象信息,对目标弹幕信息划分类别,获得每个类别的目标弹幕信息。
例如,若第二对象信息包括对象信息A、对象信息B和对象信息C,服务器可以根据每个第二对象信息,对目标弹幕信息划分类别,获得每个类别的目标弹幕信息,如下表4所示:
类别 第二对象信息 目标弹幕信息
类别1 对象信息A 弹幕信息1、弹幕信息4、弹幕信息7、弹幕信息8
类别2 对象信息B 弹幕信息2、弹幕信息3、弹幕信息6
类别3 对象信息C 弹幕信息5
表4
需要说明的是,由于有些弹幕信息中可以包括多个对象信息,例如包括对象信息A、对象信息B、对象信息C,则可以将包括多个对象信息的弹幕信息与对象信息A或对象信息B或对象信息C归为一类,也可以独立于对象信息A、对象信息B和对象信息C单独归为一类,本申请对此不进行限制,本实施例中仅以弹幕信息中包括一个第二对象信息进行示例说明。
如上表4,类别1的目标弹幕信息包括弹幕信息1、弹幕信息4、弹幕信息7和弹幕信息8;类别2的目标弹幕信息包括弹幕信息2、弹幕信息3和弹幕信息6;类别3的目标弹幕信息包括弹幕信息5。
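步骤312的分类过程可以用如下Python草图示意(输入沿用前文假设的(对象信息, 弹幕)关联列表;本示例仅处理每条弹幕关联一个第二对象信息的情况):

```python
from collections import defaultdict

# 示意:若第二对象信息为多个,按每个第二对象信息对目标弹幕信息划分类别

def classify(index, second_objects):
    """index: [(对象信息, 弹幕), ...] 的关联列表。
    second_objects: 第二对象信息集合。
    返回 {第二对象信息: [该类别的目标弹幕信息, ...]}。"""
    categories = defaultdict(list)
    for obj, danmaku in index:
        if obj in second_objects:   # 与无关信息或其他对象关联的弹幕被排除
            categories[obj].append(danmaku)
    return dict(categories)
```

对应表4:以对象信息A、B、C为第二对象信息时,弹幕信息1、4、7、8归入类别1,弹幕信息2、3、6归入类别2,弹幕信息5归入类别3。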
步骤313、服务器向电子设备发送每个类别的目标弹幕信息。
相应的,电子设备接收服务器发送的每个类别的目标弹幕信息,并按照类别显示每个类别的目标弹幕信息。
参照图5,示出了本申请实施例中提供的一种按照类别显示每个类别的目标弹幕信息的示意图。图5中示出的A即为与对象信息A(第二对象信息)对应的对象,A右边显示的即为类别1的目标弹幕信息。一组“★”表示一条弹幕信息,类别1的右边显示了4条目标弹幕信息(弹幕信息1、弹幕信息4、弹幕信息7和弹幕信息8),其他类别的目标弹幕信息不再一一介绍。
还可以包括如下步骤:
服务器根据每个所述第二对象信息,生成缩略图;
服务器向所述电子设备发送所述缩略图,以供所述电子设备显示所述缩略图。
需要说明的是,服务器可以在向电子设备发送每个类别的目标弹幕信息之后,向电子设备发送缩略图;也可以向电子设备发送每个类别的目标弹幕信息之前,向电子设备发送缩略图;或者向电子设备发送每个类别的目标弹幕信息时,同时向电子设备发送缩略图。本申请对服务器向电子设备发送缩略图的时机不进行限制。
相应的,电子设备接收服务器发送的缩略图,缩略图为服务器根据每个第二对象信息生成的;显示缩略图。
在显示缩略图之后,电子设备可以接收用户对缩略图的第二输入;响应于第二输入,控制每个类别的目标弹幕信息的移动方向和移动距离,或者隐藏缩略图。
例如,如图5,电子设备可以在图5的左上角显示缩略图501,第二输入为用户拖动缩略图501上下滑动或者向左滑动的输入。若拖动缩略图501向上滑动,则每个类别的目标弹幕信息可以向上滑动;若拖动缩略图501向下滑动,则每个类别的目标弹幕信息可以向下滑动;若拖动缩略图501向左滑动,则可以隐藏缩略图501。需要说明的是,每个类别的目标弹幕信息的滑动距离可以与缩略图501滑动的距离相同。通过本步骤,用户可以控制弹幕在目标视频中显示的位置或者隐藏缩略图。
在显示缩略图之后,还可以包括如下步骤:
接收用户对缩略图的第三输入;响应于第三输入,展开缩略图中包括的与每个类别对应的对象图标;接收用户对所有对象图标中的第一目标对象图标的第四输入;响应于第四输入,调整目标对象图标在缩略图中的位置,并根据每个类别对应的对象图标在缩略图中的位置,对每个类别的目标弹幕信息进行排序,获得排序结果;显示排序结果。
参照图6,示出了本申请实施例中提供的一种对象图标展示示意图。第三输入可以为用户点击如图5的缩略图501的操作,电子设备响应于第三输入,展开缩略图501中包括的与每个类别对应的对象图标,如图6:与类别1对应的对象图标为“A”,与类别2对应的对象图标为“B”,与类别3对应的对象图标为“C”。
参照图7,示出了本申请实施例中提供的一种目标弹幕信息排序以及突出处理的示意图。第四输入可以为用户拖动对象图标,以改变拖动的对象图标的显示位置、以及改变与拖动的对象图标对应的类别的目标弹幕信息的显示顺序。如图7,当用户将对象图标A拖动到如图6的对象图标B的位置后,对象图标A和对象图标B的位置互换,并且类别1的目标弹幕信息和类别2的目标弹幕信息的位置互换,即类别1的目标弹幕信息由如图6的第一行改变为如图7的第二行,类别2的目标弹幕信息由如图6的第二行改变为排在第一行。
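按对象图标位置对各类别目标弹幕信息重新排序的逻辑,可以用如下Python草图示意(分类结果沿用前文假设的字典结构;函数名为本文自拟):

```python
# 示意:根据对象图标在缩略图中的排列顺序,对各类别的目标弹幕信息排序

def reorder(categories, icon_order):
    """categories: {对象信息: [该类别的目标弹幕信息, ...]}。
    icon_order: 对象图标在缩略图中自上而下(或自左而右)的对象信息顺序。
    返回按图标顺序排列的 (对象信息, 该类别弹幕列表) 序列,即排序结果。"""
    return [(obj, categories[obj]) for obj in icon_order if obj in categories]
```

例如用户将图标A拖到图标B之后,icon_order由["A", "B", "C"]变为["B", "A", "C"],类别2的弹幕即排到第一行,与图7的效果一致。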
其中,在响应于第三输入,展开缩略图中包括的与每个类别对应的对象图标之后,还可以包括如下步骤:
接收用户对所有对象图标中的第二目标对象图标的第五输入;
响应于第五输入,确定与第二目标对象图标对应的目标类别,对目标类别的目标弹幕信息进行突出处理,并显示突出处理后的目标类别的目标弹幕信息。
第五输入可以为用户点击由缩略图中展开的对象图标的操作,如图7,当用户点击对象图标A后,电子设备可以确定与对象图标A对应的目标类别为类别1,可以对类别1的目标弹幕信息进行突出处理,并显示突出处理后的目标类别的目标弹幕信息。对目标类别的目标弹幕信息进行突出处理例如将目标弹幕信息的字体加粗、加大或改变颜色等。图7示出了将类别1的目标弹幕信息的字体进行加粗的突出处理。
参照图8,示出了本申请实施例中提供的一种服务器800的框图。服务器800包括:
第一获取模块810,用于获取电子设备对目标图像区域进行识别后确定的识别结果,所述目标图像区域为所述电子设备根据用户对目标视频的第一输入确定的,所述目标视频为所述电子设备上当前播放的视频;
确定模块820,用于根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息;
识别模块830,用于从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息;
发送模块840,用于向所述电子设备发送所述目标弹幕信息。
参照图9,示出了本申请实施例中提供的另一种服务器900的框图,服务器900还可以包括:
第二获取模块910,用于获取所述目标视频的弹幕信息;
分析模块920,用于对所述弹幕信息进行语义分析,获得所述弹幕信息的至少一个关键字;
匹配模块930,用于将所述弹幕信息包括的每个所述关键字与所有所述第一对象信息进行匹配;
存储模块940,用于在每个所述关键字中的一个关键字与所有所述第一对象信息中的一个第三对象信息匹配的情况下,存储所述第三对象信息、以及与所述第三对象信息关联的所述弹幕信息。
可选的,服务器900还可以包括:
去除模块950,用于在所述弹幕信息中包括符号信息的情况下,去除所述弹幕信息中的符号信息;
相应的,所述分析模块920,具体用于对去除所述符号信息的弹幕信息进行语义分析,获得所述弹幕信息的至少一个所述关键字。
可选的,所述存储模块940,还用于在每个所述关键字未与任意一个所述第三对象信息匹配的情况下,存储无关信息、以及与所述无关信息关联的所述弹幕信息。
可选的,所述识别模块830,具体用于在存储的所述弹幕信息中存在包括至少一个所述第二对象信息的弹幕信息的情况下,将包括至少一个所述第二对象信息的弹幕信息作为所述目标弹幕信息;所述存储的所述弹幕信息包括与所述第三对象信息关联的弹幕信息和/或与所述无关信息关联的所述弹幕信息。
可选的,所述确定模块820,具体用于在所述识别结果与所有所述第一对象信息中的至少一个第一对象信息匹配的情况下,则将与所述识别结果匹配的所述第一对象信息作为所述第二对象信息。
可选的,服务器900还可以包括:
类别划分模块960,用于若所述第二对象信息为多个,则根据每个所述第二对象信息,对所述目标弹幕信息划分类别,获得每个所述类别的目标弹幕信息;
相应的,所述发送模块840,具体用于向所述电子设备发送每个所述类别的目标弹幕信息。
可选的,发送模块840,还用于根据每个所述第二对象信息,生成缩略图;向所述电子设备发送所述缩略图,以供所述电子设备显示所述缩略图。
参照图10,示出了本申请实施例中提供的一种电子设备1000的框图。电子设备1000包括:
第一接收模块1010,用于接收用户对所述电子设备上播放的目标视频的第一输入;
确定模块1020,用于响应于所述第一输入,确定所述目标视频的目标帧图像,并确定所述目标帧图像上的目标图像区域;
识别模块1030,用于对所述目标图像区域进行识别,获得识别结果,并向服务器发送所述识别结果,以供所述服务器根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息,并从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息;
第二接收模块1040,用于接收所述服务器发送的所述目标弹幕信息;
目标弹幕信息显示模块1050,用于显示所述目标弹幕信息。
可选的,所述确定模块1020,具体用于响应于所述第一输入,确定与所述第一输入对应的划线轨迹;
确定与所述第一输入的结束时刻对应的所述目标视频的帧图像,将与所述第一输入的结束时刻对应的所述目标视频的帧图像作为所述目标帧图像;
在所述划线轨迹为所述非闭合区域的情况下,将所述非闭合区域补充为闭合区域,并将所述目标帧图像上的所述闭合区域作为所述目标图像区域;
在所述划线轨迹不为所述非闭合区域的情况下,将所述划线轨迹组成的闭合区域作为所述目标图像区域。
可选的,所述第二接收模块1040,具体用于接收所述服务器发送的每个类别的目标弹幕信息,所述类别的目标弹幕信息为所述服务器根据每个所述第二对象信息,对所述目标弹幕信息划分类别获得的;按照每个所述类别显示每个所述类别的目标弹幕信息。
可选的,参照图11,示出了本申请实施例中提供的另一种电子设备1100的框图,电子设备1100还可以包括:
第三接收模块1110,用于接收所述服务器发送的缩略图,所述缩略图为所述服务器根据每个所述第二对象信息生成的;
缩略图显示模块1120,用于显示所述缩略图;
第四接收模块1130,用于接收用户对所述缩略图的第二输入;
控制模块1140,用于响应于所述第二输入,控制每个所述类别的目标弹幕信息的移动方向和移动距离,或者隐藏所述缩略图。
可选的,参照图12,示出了本申请实施例中提供的又一种电子设备1200的框图,电子设备1200还可以包括:
第五接收模块1210,用于接收所述用户对所述缩略图的第三输入;
对象图标展开模块1220,用于响应于所述第三输入,展开所述缩略图中包括的与每个所述类别对应的对象图标;
第六接收模块1230,用于接收所述用户对所有所述对象图标中的第一目标对象图标的第四输入;
排序模块1240,用于响应于所述第四输入,调整所述目标对象图标在所述缩略图中的位置,并根据每个所述类别对应的对象图标在所述缩略图中的位置,对每个所述类别的目标弹幕信息进行排序,获得排序结果;
排序结果显示模块1250,用于显示所述排序结果。
可选的,电子设备1200还可以包括:
第七接收模块1260,用于接收所述用户对所有所述对象图标中的第二目标对象图标的第五输入;
处理模块1270,用于响应于所述第五输入,确定与所述第二目标对象图标对应的目标类别,对所述目标类别的目标弹幕信息进行突出处理,并显示突出处理后的所述目标类别的目标弹幕信息。
图13为实现本申请各个实施例的一种电子设备的硬件结构示意图,
该电子设备1300包括但不限于:射频单元1301、网络模块1302、音频输出单元1303、输入单元1304、传感器1305、显示单元1306、用户输入单元1307、接口单元1308、存储器1309、处理器1310、以及电源1311等部件。本领域技术人员可以理解,图13中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本申请实施例中,电子设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。
其中,处理器1310,用于接收用户对所述电子设备上播放的目标视频的第一输入;
响应于所述第一输入,确定所述目标视频的目标帧图像,并确定所述目标帧图像上的目标图像区域;
对所述目标图像区域进行识别,获得识别结果,并向服务器发送所述识别结果,以供所述服务器根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息,并从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息;
接收所述服务器发送的所述目标弹幕信息;
显示所述目标弹幕信息。
在本申请实施例中,通过确定目标帧图像上的目标图像区域,对目标图像区域进行识别,获得识别结果,并向服务器发送所述识别结果,以供所述服务器根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息,并从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息,接收并显示服务器发送的目标弹幕信息。从而实现显示与用户感兴趣的目标图像区域的对象信息相关的目标弹幕信息,满足用户的个性化需求。
应理解的是,本申请实施例中,射频单元1301可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器1310处理;另外,将上行的数据发送给基站。通常,射频单元1301包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元1301还可以通过无线通信系统与网络和其他设备通信。
电子设备通过网络模块1302为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元1303可以将射频单元1301或网络模块1302接收的或者在存储器1309中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元1303还可以提供与电子设备1300执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元1303包括扬声器、蜂鸣器以及受话器等。
输入单元1304用于接收音频或视频信号。输入单元1304可以包括图形处理器(Graphics Processing Unit,GPU)13041和麦克风13042,图形处理器13041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元1306上。经图形处理器13041处理后的图像帧可以存储在存储器1309(或其它存储介质)中或者经由射频单元1301或网络模块1302进行发送。麦克风13042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元1301发送到移动通信基站的格式输出。
电子设备1300还包括至少一种传感器1305,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板13061的亮度,接近传感器可在电子设备1300移动到耳边时,关闭显示面板13061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别电子设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器1305还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元1306用于显示由用户输入的信息或提供给用户的信息。显示单元1306 可包括显示面板13061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板13061。
用户输入单元1307可用于接收输入的数字或字符信息,以及产生与电子设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元1307包括触控面板13071以及其他输入设备13072。触控面板13071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板13071上或在触控面板13071附近的操作)。触控面板13071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1310,接收处理器1310发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板13071。除了触控面板13071,用户输入单元1307还可以包括其他输入设备13072。具体地,其他输入设备13072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板13071可覆盖在显示面板13061上,当触控面板13071检测到在其上或附近的触摸操作后,传送给处理器1310以确定触摸事件的类型,随后处理器1310根据触摸事件的类型在显示面板13061上提供相应的视觉输出。虽然在图13中,触控面板13071与显示面板13061是作为两个独立的部件来实现电子设备的输入和输出功能,但是在某些实施例中,可以将触控面板13071与显示面板13061集成而实现电子设备的输入和输出功能,具体此处不做限定。
接口单元1308为外部装置与电子设备1300连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元1308可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到电子设备1300内的一个或多个元件或者可以用于在电子设备1300和外部装置之间传输数据。
存储器1309可用于存储软件程序以及各种数据。存储器1309可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1309可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器1310是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器1309内的软件程序和/或模块,以及调用存储在存储器1309内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。处理器1310可包括一个或多个处理单元;可选的,处理器1310可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器1310中。
电子设备1300还可以包括给各个部件供电的电源1311(比如电池),可选的,电源1311可以通过电源管理系统与处理器1310逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,电子设备1300包括一些未示出的功能模块,在此不再赘述。
可选的,本申请实施例还提供一种电子设备,包括处理器1310,存储器1309,存储在存储器1309上并可在所述处理器1310上运行的计算机程序,该计算机程序被处理器1310执行时实现上述弹幕信息识别方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本申请实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述弹幕信息识别方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (15)

  1. 一种弹幕信息识别方法,所述方法包括:
    获取电子设备对目标图像区域进行识别后确定的识别结果,所述目标图像区域为所述电子设备根据用户对目标视频的第一输入确定的,所述目标视频为所述电子设备上当前播放的视频;
    根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息;
    从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息;
    向所述电子设备发送所述目标弹幕信息。
  2. 根据权利要求1所述的方法,其中,在所述获取电子设备对目标图像区域进行识别后确定的识别结果之前,还包括:
    获取所述目标视频的弹幕信息;
    对所述弹幕信息进行语义分析,获得所述弹幕信息的至少一个关键字;
    将所述弹幕信息包括的每个所述关键字与所有所述第一对象信息进行匹配;
    在每个所述关键字中的一个关键字与所有所述第一对象信息中的一个第三对象信息匹配的情况下,存储所述第三对象信息、以及与所述第三对象信息关联的所述弹幕信息。
  3. 根据权利要求2所述的方法,其中,在所述对所述弹幕信息进行语义分析,获得所述弹幕信息的至少一个关键字之前,还包括:
    在所述弹幕信息中包括符号信息的情况下,去除所述弹幕信息中的符号信息;
    所述对所述弹幕信息进行语义分析,获得所述弹幕信息的至少一个关键字,包括:
    对去除所述符号信息的弹幕信息进行语义分析,获得所述弹幕信息的至少一个所述关键字。
  4. 根据权利要求3所述的方法,其中,还包括:
    在每个所述关键字未与任意一个所述第三对象信息匹配的情况下,存储无关信息、以及与所述无关信息关联的所述弹幕信息。
  5. 根据权利要求4所述的方法,其中,所述从存储的弹幕信息中识别目标弹幕信息,包括:
    在存储的所述弹幕信息中存在包括至少一个所述第二对象信息的弹幕信息的情况下,将包括至少一个所述第二对象信息的弹幕信息作为所述目标弹幕信息;所述存储的所述弹幕信息包括与所述第三对象信息关联的弹幕信息和/或与所述无关信息关联的所述弹幕信息。
  6. 根据权利要求1所述的方法,其中,所述根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息,包括:
    在所述识别结果与所有所述第一对象信息中的至少一个第一对象信息匹配的情况下,则将与所述识别结果匹配的所述第一对象信息作为所述第二对象信息。
  7. 根据权利要求1所述的方法,其中,在所述向所述电子设备发送所述目标弹幕信息之前,还包括:
    若所述第二对象信息为多个,则根据每个所述第二对象信息,对所述目标弹幕信息划分类别,获得每个所述类别的目标弹幕信息;
    向所述电子设备发送所述目标弹幕信息,包括:
    向所述电子设备发送每个所述类别的目标弹幕信息。
  8. 一种弹幕信息显示方法,所述方法包括:
    接收用户对电子设备上播放的目标视频的第一输入;
    响应于所述第一输入,确定所述目标视频的目标帧图像,并确定所述目标帧图像上的目标图像区域;
    对所述目标图像区域进行识别,获得识别结果,并向服务器发送所述识别结果,以供所述服务器根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息,并从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息;
    接收所述服务器发送的所述目标弹幕信息;
    显示所述目标弹幕信息。
  9. 根据权利要求8所述的方法,其中,所述响应于所述第一输入,确定所述目标视频的目标帧图像,并确定所述目标帧图像上的目标图像区域,包括:
    响应于所述第一输入,确定与所述第一输入对应的划线轨迹;
    确定与所述第一输入的结束时刻对应的所述目标视频的帧图像,将与所述第一输入的结束时刻对应的所述目标视频的帧图像作为所述目标帧图像;
    在所述划线轨迹为非闭合区域的情况下,将所述非闭合区域补充为闭合区域,并将所述目标帧图像上的所述闭合区域作为所述目标图像区域;
    在所述划线轨迹不为所述非闭合区域的情况下,将所述划线轨迹组成的闭合区域作为所述目标图像区域。
  10. 根据权利要求9所述的方法,其中,所述接收所述服务器发送的所述目标弹幕信息,包括:
    接收所述服务器发送的每个类别的目标弹幕信息,所述类别的目标弹幕信息为所述服务器根据每个所述第二对象信息,对所述目标弹幕信息划分类别获得的;
    按照每个所述类别显示与每个所述类别的目标弹幕信息。
  11. 一种服务器,包括:
    第一获取模块,用于获取电子设备对目标图像区域进行识别后确定的识别结果,所述目标图像区域为所述电子设备根据用户对目标视频的第一输入确定的,所述目标视频为所述电子设备上当前播放的视频;
    确定模块,用于根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息;
    识别模块,用于从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息;
    发送模块,用于向所述电子设备发送所述目标弹幕信息。
  12. 一种电子设备,包括:
    第一接收模块,用于接收用户对所述电子设备上播放的目标视频的第一输入;
    确定模块,用于响应于所述第一输入,确定所述目标视频的目标帧图像,并确定所述目标帧图像上的目标图像区域;
    识别模块,用于对所述目标图像区域进行识别,获得识别结果,并向服务器发送所述识别结果,以供所述服务器根据所述识别结果,从所述目标视频包括的第一对象信息中确定第二对象信息,并从存储的弹幕信息中识别目标弹幕信息,所述目标弹幕信息包括至少一个所述第二对象信息;
    第二接收模块,用于接收所述服务器发送的所述目标弹幕信息;
    目标弹幕信息显示模块,用于显示所述目标弹幕信息。
  13. 一种服务器,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至7中任一项所述的弹幕信息识别方法的步骤。
  14. 一种电子设备,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求8至10中任一项所述的弹幕信息显示方法的步骤。
  15. 一种计算机可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如权利要求1至7中任一项所述的弹幕信息识别方法的步骤或权利要求8至10中任一项所述的弹幕信息显示方法的步骤。
PCT/CN2020/120415 2019-10-17 2020-10-12 弹幕信息识别方法、显示方法、服务器及电子设备 WO2021073478A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910990386.8A CN112689201B (zh) 2019-10-17 2019-10-17 弹幕信息识别方法、显示方法、服务器及电子设备
CN201910990386.8 2019-10-17

Publications (1)

Publication Number Publication Date
WO2021073478A1 true WO2021073478A1 (zh) 2021-04-22

Family

ID=75444635

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/120415 WO2021073478A1 (zh) 2019-10-17 2020-10-12 弹幕信息识别方法、显示方法、服务器及电子设备

Country Status (2)

Country Link
CN (1) CN112689201B (zh)
WO (1) WO2021073478A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115065874B (zh) * 2022-06-20 2024-08-20 维沃移动通信有限公司 视频播放方法、装置、电子设备和可读存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090204585A1 (en) * 2008-02-07 2009-08-13 Canon Kabushiki Kaisha Document management system, document management apparatus, document management method and program
CN104811816A (zh) * 2015-04-29 2015-07-29 北京奇艺世纪科技有限公司 一种为视频画面中的对象打弹幕标签的方法、装置及系统
CN105357586A (zh) * 2015-09-28 2016-02-24 北京奇艺世纪科技有限公司 视频弹幕过滤方法及装置
CN105516821A (zh) * 2015-12-14 2016-04-20 广州弹幕网络科技有限公司 弹幕筛选的方法及装置
CN105516820A (zh) * 2015-12-10 2016-04-20 腾讯科技(深圳)有限公司 一种弹幕交互方法和装置
CN105872781A (zh) * 2016-05-31 2016-08-17 武汉斗鱼网络科技有限公司 一种弹幕控制发言过滤控制方法及装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618813B (zh) * 2015-01-20 2018-02-13 腾讯科技(北京)有限公司 弹幕信息处理方法、客户端及服务平台
CN108347640A (zh) * 2017-01-22 2018-07-31 北京康得新创科技股份有限公司 基于视频的信息处理方法和装置
CN107222790A (zh) * 2017-05-22 2017-09-29 深圳市金立通信设备有限公司 一种发送弹幕的方法、终端及计算机可读存储介质
CN107645686A (zh) * 2017-09-22 2018-01-30 广东欧珀移动通信有限公司 信息处理方法、装置、终端设备及存储介质
CN107613392B (zh) * 2017-09-22 2019-09-27 Oppo广东移动通信有限公司 信息处理方法、装置、终端设备及存储介质
CN109819280A (zh) * 2017-11-22 2019-05-28 上海全土豆文化传播有限公司 弹幕展示方法及装置
CN108259968A (zh) * 2017-12-13 2018-07-06 华为技术有限公司 视频弹幕的处理方法、系统以及相关设备
CN108495168B (zh) * 2018-03-06 2021-12-03 阿里巴巴(中国)有限公司 弹幕信息的显示方法及装置
CN108632658B (zh) * 2018-03-14 2021-03-16 维沃移动通信有限公司 一种弹幕显示方法、终端
CN109739990B (zh) * 2019-01-04 2021-07-23 北京七鑫易维信息技术有限公司 信息处理方法和终端
CN110139134B (zh) * 2019-05-10 2021-12-10 青岛民航凯亚系统集成有限公司 一种个性化弹幕智能推送方法与系统

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113395567A (zh) * 2021-06-11 2021-09-14 腾讯科技(深圳)有限公司 一种字幕展示方法和相关装置
CN113395567B (zh) * 2021-06-11 2022-07-05 腾讯科技(深圳)有限公司 一种字幕展示方法和相关装置
CN114245222A (zh) * 2021-12-16 2022-03-25 网易(杭州)网络有限公司 一种弹幕展示方法、装置、电子设备和介质
CN114915832A (zh) * 2022-05-13 2022-08-16 咪咕文化科技有限公司 一种弹幕显示方法、装置及计算机可读存储介质
CN114915832B (zh) * 2022-05-13 2024-02-23 咪咕文化科技有限公司 一种弹幕显示方法、装置及计算机可读存储介质
CN115103212A (zh) * 2022-06-10 2022-09-23 咪咕文化科技有限公司 弹幕展示方法、弹幕处理方法、装置及电子设备
CN115103212B (zh) * 2022-06-10 2023-09-05 咪咕文化科技有限公司 弹幕展示方法、弹幕处理方法、装置及电子设备
CN115297355A (zh) * 2022-08-02 2022-11-04 北京奇艺世纪科技有限公司 弹幕显示方法、生成方法、装置、电子设备及存储介质
CN115297355B (zh) * 2022-08-02 2024-01-23 北京奇艺世纪科技有限公司 弹幕显示方法、生成方法、装置、电子设备及存储介质
CN116193186A (zh) * 2023-01-29 2023-05-30 北京达佳互联信息技术有限公司 一种弹幕展示方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN112689201B (zh) 2022-08-26
CN112689201A (zh) 2021-04-20

Similar Documents

Publication Publication Date Title
WO2021073478A1 (zh) 弹幕信息识别方法、显示方法、服务器及电子设备
WO2021213496A1 (zh) 消息显示方法及电子设备
US20220365641A1 (en) Method for displaying background application and mobile terminal
WO2021077897A1 (zh) 文件发送方法、装置和电子设备
WO2019174629A1 (zh) 图像处理方法及柔性屏终端
CN107943390B (zh) 一种文字复制方法及移动终端
WO2021233293A1 (zh) 笔记记录方法及电子设备
US20220353225A1 (en) Method for searching for chat information and electronic device
WO2021136159A1 (zh) 截屏方法及电子设备
WO2020011077A1 (zh) 通知消息显示方法及终端设备
WO2020258934A1 (zh) 界面显示方法及终端设备
CN109561211B (zh) 一种信息显示方法及移动终端
WO2021036553A1 (zh) 图标显示方法及电子设备
WO2020182035A1 (zh) 图像处理方法及终端设备
WO2020238938A1 (zh) 信息输入方法及移动终端
WO2020233323A1 (zh) 显示控制方法、终端设备及计算机可读存储介质
WO2020220873A1 (zh) 图像显示方法及终端设备
WO2020181945A1 (zh) 标识显示方法及终端设备
WO2020220893A1 (zh) 截图方法及移动终端
CN108646960B (zh) 一种文件处理方法及柔性屏终端
WO2021104175A1 (zh) 信息的处理方法及装置
CN110544287B (zh) 一种配图处理方法及电子设备
CN109753202B (zh) 一种截屏方法和移动终端
WO2021017730A1 (zh) 截图方法及终端设备
WO2020024788A1 (zh) 文字输入方法和终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20876624

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20876624

Country of ref document: EP

Kind code of ref document: A1