CN110944224A - Video playing method and electronic equipment

Info

Publication number
CN110944224A
Authority
CN
China
Prior art keywords
video
target
input
user
playing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911203666.6A
Other languages
Chinese (zh)
Other versions
CN110944224B (en)
Inventor
陈禹
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911203666.6A
Publication of CN110944224A
Application granted; publication of CN110944224B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47: End-user applications
    • H04N21/488: Data services, e.g. news ticker
    • H04N21/4884: Data services, e.g. news ticker for displaying subtitles
    • H04N21/41: Structure of client; Structure of client peripherals
    • H04N21/422: Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204: User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/443: OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the present invention provide a video playing method and an electronic device, applied to the field of communication technology, to solve the problem in the related art that it is difficult for a user to view specific video content. The method includes the following steps: a first electronic device receives, while playing a target video, a first input from a user on a first bullet screen, the first bullet screen comprising: a first playing time node and first picture content information of a first video frame of the target video; and, in response to the first input, the first electronic device jumps to the first video frame according to the first playing time node to play the video content of the target video.

Description

Video playing method and electronic equipment
Technical Field
The embodiments of the present invention relate to the field of communication technology, and in particular to a video playing method and an electronic device.
Background
At present, most video playing applications provide a function for commenting on videos (i.e., a bullet screen function). Generally, after the bullet screen function is enabled in a video playing application, the electronic device overlays comments posted by users about the video content (i.e., bullet screens) on the video picture, displaying them drifting horizontally or hovering over the picture, so that the comments are shared with other users watching the video.
In the related art, when a user is interested in a particular bullet screen displayed in the video playing interface, the user usually clicks it so that it hovers in the video playing interface, making it convenient to view the related video information it contains.
However, because the video continues to play in the video playing interface while the user views the bullet screen, it is difficult for the user to find the specific video content mentioned in the bullet screen after dismissing it.
Disclosure of Invention
The video playing method and the electronic device provided by the embodiments of the present invention solve the problem in the related art that it is difficult for a user to view specific video content.
To solve the above technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present invention provides a video playing method applied to a first electronic device. The method includes: receiving, while playing a target video, a first input from a user on a first bullet screen, the first bullet screen comprising a first playing time node and first picture content information of a first video frame of the target video; and, in response to the first input, jumping to the first video frame according to the first playing time node to play the video content of the target video.
In a second aspect, an embodiment of the present invention further provides a first electronic device, including: a receiving module configured to receive, while a target video is playing, a first input from a user on a first bullet screen, the first bullet screen comprising a first playing time node and first picture content information of a first video frame of the target video; and an execution module configured to respond to the first input received by the receiving module by jumping to the first video frame according to the first playing time node to play the video content of the target video.
In a third aspect, an embodiment of the present invention provides an electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the video playing method according to the first aspect are implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the video playing method according to the first aspect are implemented.
In the embodiments of the present invention, because the first bullet screen includes the first playing time node and the first picture content information of the first video frame of the target video, the first electronic device, after receiving the user's first input on the first bullet screen while playing the target video, can jump directly to the first video frame according to the first playing time node to play the video content of the target video. The user can thus view the specific video content of interest directly through the bullet screen.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a video playing method according to an embodiment of the present invention;
fig. 3 is a first schematic diagram of an interface to which the video playing method according to the embodiment of the present invention is applied;
fig. 4 is a second schematic diagram of an interface to which the video playing method according to the embodiment of the present invention is applied;
fig. 5 is a third schematic diagram of an interface to which the video playing method according to the embodiment of the present invention is applied;
fig. 6 is a fourth schematic diagram of an interface to which the video playing method according to the embodiment of the present invention is applied;
fig. 7 is a fifth schematic diagram of an interface to which the video playing method according to the embodiment of the present invention is applied;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone.
It should be noted that "a plurality" herein means two or more than two.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or advantageous over other embodiments or designs. Rather, use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
It should be noted that, for the convenience of clearly describing the technical solutions of the embodiments of the present invention, in the embodiments of the present invention, words such as "first" and "second" are used to distinguish the same items or similar items with substantially the same functions or actions, and those skilled in the art can understand that the words such as "first" and "second" do not limit the quantity and execution order. For example, the first input and the second input are for distinguishing different inputs, rather than for describing a particular order of inputs.
The execution subject of the video playing method provided in the embodiments of the present invention may be the first electronic device, or a functional module and/or functional entity in the first electronic device capable of implementing the method; this may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
The first electronic device and the second electronic device in the embodiment of the present invention may be terminal devices, and the terminal devices may be mobile terminal devices or non-mobile terminal devices. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
The electronic device in the embodiments of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment applied to the video playing method provided by the embodiment of the present invention, taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the video playing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the video playing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can implement the video playing method provided by the embodiment of the invention by running the software program in the android operating system.
The following describes the video playing method according to an embodiment of the present invention with reference to the flow chart shown in fig. 2 (a schematic flow chart of the video playing method provided in the embodiment of the present invention); the method includes steps 201 and 202:
Step 201: the first electronic device receives, while playing a target video, a first input from the user on a first bullet screen.
In the embodiment of the present invention, the first bullet screen includes: a first playing time node and first picture content information of a first video frame of the target video.
Optionally, in an embodiment of the present invention, the first screen content information includes: a first image of a target object in a first video frame, wherein the first input is an input to the first image.
Therefore, the first bullet screen directly contains the image of the target object, so that a user can quickly view the bullet screen related to the target object, and then directly jump to the video frame corresponding to the image by touching the image of the target object contained in the first bullet screen.
For example, the first image may be a screenshot of the target object in the first video frame.
Illustratively, the first bullet screen is a bullet screen used to indicate a first video frame in the target video.
Illustratively, the first bullet screen may further contain information about the user of the second electronic device that sent it, such as at least one of: account name, nickname, and avatar.
For example, the first picture content information may be information of a video object contained in the video picture of the first video frame of the target video, where the video object includes at least one of: a person, an animal, a plant, a building, an appliance, etc.
In the embodiment of the present invention, the first input may specifically be a click input by the user on the first bullet screen, a slide input by the user on the first bullet screen, or another feasible input by the user on the first bullet screen; this may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
For example, the click input may be a single-click input, a double-click input, or an input of any number of clicks, and may be a long-press or a short-press input. The slide input may be a slide in any direction, for example upward, downward, leftward, or rightward; its slide trajectory may be a straight line or a curve and may be set according to actual requirements.
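As an illustrative sketch (not part of the patent), the information the first bullet screen is described as carrying could be modeled as a small record; all field names below are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BulletScreen:
    """Hypothetical payload of a 'first bullet screen' per the embodiment."""
    text: str              # the comment body shown on screen
    play_time_node: float  # seconds into the target video (first playing time node)
    picture_content: str   # e.g. a screenshot reference or object label
    sender: str = ""       # account name/nickname of the sending user

b = BulletScreen(text="look at the dog!", play_time_node=125.0,
                 picture_content="child_and_dog.png", sender="userA")
print(b.play_time_node)
```

A real implementation would serialize such a record alongside the bullet screen text when it is posted, so that receiving clients can act on it.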
Step 202: in response to the first input, the first electronic device jumps to the first video frame according to the first playing time node to play the video content of the target video.
For example, while the target video is playing, if a first input from the user on the first bullet screen is received, the first electronic device may extract from the first bullet screen the video frame information of the first video frame that it carries (for example, the first playing time node and the first picture content information of the first video frame of the target video), and then immediately jump the target video to the first playing time node for playing.
For example, take the first electronic device to be a mobile phone playing "video 1". As shown in (a) of fig. 3, the video content currently played in the video playing interface of "video 1" (i.e., 31 in fig. 3) is video content 1, the video playing interface 31 also displays bullet screen 1, bullet screen 2, and bullet screen 3, and the playing time node of video content 1 is time node 1 (e.g., 32 in the time progress bar in (a) of fig. 3). Suppose the user is interested in the content of bullet screen 1 (i.e., the first bullet screen, e.g., 33 in (a) of fig. 3), which carries time node 2 (i.e., the first playing time node, e.g., 35 in the time progress bar in (b) of fig. 3) of video content 2 (i.e., the first video frame, e.g., 34 in (b) of fig. 3). Thus, when the user clicks bullet screen 1 (i.e., the first input), "video 1" jumps from time node 1 to time node 2.
In the video playing method provided by the embodiment of the present invention, because the first bullet screen includes the first playing time node and the first picture content information of the first video frame of the target video, the first electronic device, after receiving the user's first input on the first bullet screen while playing the target video, can jump directly to the first video frame according to the first playing time node to play the video content of the target video. The user can thus view the specific video content of interest directly through the bullet screen.
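The jump of step 202 can be sketched as follows; `VideoPlayer` and its `seek` method are hypothetical stand-ins for a real playback API, not the patent's implementation:

```python
class VideoPlayer:
    """Minimal stand-in for a video playback engine."""
    def __init__(self, duration: float):
        self.duration = duration
        self.position = 0.0

    def seek(self, t: float) -> None:
        # clamp to the valid playback range before jumping
        self.position = max(0.0, min(t, self.duration))

def on_bullet_screen_tapped(player: VideoPlayer, play_time_node: float) -> None:
    # on the first input, jump straight to the first playing time node
    player.seek(play_time_node)

player = VideoPlayer(duration=3600.0)
on_bullet_screen_tapped(player, 125.0)
print(player.position)
```

The clamping in `seek` is a defensive choice: a bullet screen could carry a time node beyond the video's duration, and the player should land on a valid position rather than fail.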
Optionally, in the embodiment of the present invention, while playing the target video, the first electronic device may also, by sending a bullet screen, enable other electronic devices to jump to specific video content for playing.
Illustratively, the video playing method provided by the embodiment of the present invention further includes the following steps A1 and A2:
Step A1: the first electronic device receives a second input.
Step A2: in response to the second input, the first electronic device outputs a second bullet screen.
Illustratively, the second bullet screen includes: a second playing time node of a second video frame of the target video and second picture content information of the second video frame. The second bullet screen is used to instruct a second electronic device to jump to the second video frame to play the video content of the target video.
Illustratively, the second input may specifically be a click input by the user on the playing interface, a slide input by the user on the playing interface, a specific gesture input by the user on the playing interface, or another feasible input by the user on the playing interface; this may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
For the click input and the slide input, refer to the description of the first input above; details are not repeated here.
In this way, when the first electronic device plays the target video and the user is interested in a certain video picture in it, the user can, by sending a bullet screen containing the information of that video picture, enable users of other electronic devices to jump directly to that video picture.
Optionally, in the embodiment of the present invention, before step A1, the method further includes steps A3 and A4:
Step A3: the first electronic device receives a third input from the user.
Step A4: in response to the third input, the first electronic device pauses playing the target video and stores the second playing time node and the second picture content information of the second video frame.
When the playing of the target video is paused, the second video frame is the video frame displayed in the video playing picture of the target video.
Illustratively, the third input may specifically be a click input by the user on the playing interface, a slide input by the user on the playing interface, a specific gesture input by the user on the playing interface, or another feasible input by the user on the playing interface; this may be determined according to actual use requirements and is not limited in the embodiments of the present invention.
For example, while the target video is playing, the first electronic device, upon receiving the third input, stores the video frame information of the second video frame (e.g., its second playing time node and second picture content information) into a target folder. Because the first electronic device thereby stores video frame information for one or more frames of the target video, when the user wants to send a bullet screen, the user can, through the second input, read the video frame information of at least one video frame from the target folder and send it in the form of a bullet screen.
For example, the target folder may be a temporary folder dedicated to storing video frame information of frames in the target video, or a dedicated folder in the first electronic device for pre-storing the picture information and playing time nodes of video frames captured by the user.
Illustratively, the user may periodically or aperiodically (e.g., by actively triggering) add, delete, or edit the video frame information stored in the target folder.
Illustratively, the third input is a slide input by the user on the video playing picture of the target video, and the second picture content information includes the image content in the closed area enclosed by the slide trajectory of the slide input on the second video frame.
For example, as shown in fig. 4, take the electronic device to be a mobile phone playing "video 1", and assume the video content currently played in the video playing interface of "video 1" (e.g., 41 in fig. 4) is video content 3 (i.e., the second video frame). If the user wants to save the video information of video content 3, the user may draw a circle on the video playing interface 41 (i.e., the third input), triggering the phone to save the picture content information of video content 3 (i.e., the picture content inside the circle drawn by the user, e.g., the "child and dog" circled by the user's finger in fig. 4) together with its playing time node. It should be noted that while the user draws the circle, the video playing interface 41 of "video 1" is in a paused state.
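A minimal sketch of the pause-and-save flow of steps A3 and A4; here a dictionary stands in for the player state and an in-memory list for the target folder, and all names are illustrative assumptions:

```python
saved_frames = []  # stands in for the temporary/dedicated "target folder"

def on_third_input(player, picture_content):
    """Pause the target video and store the second video frame's info."""
    player["paused"] = True  # playing of the target video is paused
    # store the second playing time node and second picture content information
    saved_frames.append({
        "play_time_node": player["position"],
        "picture_content": picture_content,
    })

player = {"position": 245.5, "paused": False}
on_third_input(player, "region: child and dog")
print(player["paused"], saved_frames[0]["play_time_node"])
```

In a real device the saved entry would persist across the session (the patent's "target folder"), so a later second input can read it back and send it as a bullet screen.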
For example, taking the third input as a specific gesture: the specific gesture may be a gesture preset by the first electronic device or a gesture customized by the user, for example an L-shaped gesture, a V-shaped gesture, a Z-shaped gesture, or a circle-drawing gesture.
For example, when the user wants to save the video frame information of the second video frame, the user may draw a first specific gesture on the second video frame (the first specific gesture triggers the first electronic device to select a specific paused frame), so that the first electronic device, after detecting the first specific gesture, may highlight the picture content selected by it and store the selected picture content together with the related frame information.
Illustratively, the third input includes a first specific gesture and a second specific gesture. The first specific gesture triggers the first electronic device to select a specific video frame in the video playing interface, and the second specific gesture triggers the first electronic device to store the video frame information of the video frame selected by the user. In one example, after detecting the first specific gesture, the first electronic device may preview the picture content of the video frame it selects. At this time, if the user wants to save the video frame information of that video frame, the user may draw the second specific gesture, and the first electronic device stores the video frame information after detecting the second specific gesture.
For example, as shown in (a) of fig. 5, take the electronic device to be a mobile phone playing "video 1", and assume the video content currently played in the video playing interface of "video 1" (e.g., 51 in fig. 5) is video content 3 (i.e., the second video frame). If the user wants to save the information of video content 3, the user may draw a circle on the video playing interface 51 (i.e., the first specific gesture), so that the phone selects video content 3.
Next, as shown in (b) of fig. 5, the video content 3 inside the circle drawn by the user's finger (e.g., the "child and cat" circled by the user's finger in fig. 5) is highlighted on the video playing interface 51. At this time, if the user is satisfied with video content 3 and wants to save it, the user may draw a V-shaped gesture (i.e., the second specific gesture) on the video playing interface 51, and the screenshot and playing time node of video content 3 are stored in the temporary folder.
Finally, as shown in (c) of fig. 5, during the playing of "video 1" on the mobile phone, when the user wants to send a bullet screen, the user may draw an inverted V-shaped gesture (i.e., the second input) in the video playing interface 51 of "video 1", so that the picture content information and playing time node of video content 3 stored in the temporary folder are sent out in the form of a bullet screen. Other users watching "video 1" can then jump to video content 3 through that bullet screen to play "video 1".
In this way, the user can capture and save video content of interest through specific gestures and send it in the form of a bullet screen, so that users watching the target video can interact and communicate more based on the bullet screens.
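The two-gesture flow of fig. 5 could be sketched as a small dispatch routine; the gesture names ("circle", "v", "inverted_v") and the dispatch table are illustrative assumptions, not the patent's implementation:

```python
temp_folder = []          # saved frame info, as in fig. 5 (b)
sent_bullet_screens = []  # bullet screens sent out, as in fig. 5 (c)
state = {"selected": None}

def on_gesture(gesture, frame_info=None):
    if gesture == "circle":                      # first specific gesture: select a region
        state["selected"] = frame_info
        return "preview"
    if gesture == "v" and state["selected"]:     # second specific gesture: save selection
        temp_folder.append(state["selected"])
        return "saved"
    if gesture == "inverted_v" and temp_folder:  # second input: send as a bullet screen
        sent_bullet_screens.append(temp_folder[-1])
        return "sent"
    return "ignored"

print(on_gesture("circle", {"time": 245.5, "content": "child and cat"}))
print(on_gesture("v"))
print(on_gesture("inverted_v"))
```

The guards on `state["selected"]` and `temp_folder` reflect the ordering the example implies: a save requires a prior selection, and a send requires something saved.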
Optionally, in this embodiment of the present invention, the first electronic device may further establish a group to perform group chat.
Illustratively, the video playing method provided by the embodiment of the present invention further includes the following steps B1 to B3:
step B1: the first electronic equipment acquires N barrages in a video playing picture of the target video.
Step B2: and the first electronic equipment sends a first request message for joining a target group to the sending equipment of the M barrages when M barrages in the N barrages include the same picture content information of the target object.
Step B3: and the first electronic equipment creates the target group under the condition of receiving a second request message which is sent by at least two sending devices and agrees to join the target group.
Wherein N, M is a positive integer and N is greater than or equal to M.
Illustratively, each of the M barrages includes a video screenshot of the same target object. That is, the above-mentioned picture content information may be a video screenshot.
Illustratively, the first electronic device sends a group establishment request to the background server to request to create the target group in case of receiving a second request message for agreeing to join the target group sent by at least two sending devices.
Illustratively, in a state where a target video is played by a first electronic device, the first electronic device automatically collects video screenshots of bullet screens within a predetermined time interval, and if it is detected that most of the bullet screens include video screenshots of the same object, sends a first request message for joining a target group to sending devices of the most of the bullet screens, so as to request the sending devices to join the target group.
For example, taking the first electronic device as the mobile phone 1 and the sending device as the mobile phone 2 as an example, the mobile phone 1 is in a state of playing "video 1". First, as shown in fig. 6 (a), when the mobile phone 1 detects that the barrage 4, the barrage 5, and the barrage 6 displayed in the video playing interface of the "video 1" (e.g. 61 in fig. 6 (a)) in a predetermined time period all include the video screenshot of "child and dog", the group entry invitation (i.e. the first request message) may be sent to the mobile phone 2 of the barrage 4, the barrage 5, and the barrage 6, and at this time, a prompt message is displayed on the video playing interface of the "video 1" of the mobile phone 2 to prompt the user whether to join the drama discussion group. If the mobile phone 2 sending the barrage 4 and the barrage 5 sends an entering response (i.e. the second request message) agreeing to join the storyline discussion group, the mobile phone 1 creates the storyline discussion group (the target group).
Next, suppose the user of the mobile phone 2 that sent the barrage 4 is user A, and the user of the mobile phone 2 that sent the barrage 5 is user B. As shown in fig. 6 (b), the group member list of the storyline discussion group is displayed in the video playing interface 61 (as shown at 62 in fig. 6 (b)), and the group members of the storyline discussion group are user A and user B. From this point on, the video playing interface 61 displays the barrages sent by all group members of the storyline discussion group, including the barrages sent within the group itself.
In this way, users who send barrages about the picture content of the same target object are gathered together by building a group, which makes it convenient for them to communicate.
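The group-formation logic described above (collect N barrages, find the M that share picture content information of one target object, and invite their senders) can be sketched in Python. This is a minimal illustration, not the patented implementation; the function name, the `(sender_id, object_label)` input shape, and the assumption that an upstream image-recognition step has already labeled each screenshot are all hypothetical.

```python
from collections import defaultdict

def find_group_candidates(barrages, min_members=2):
    """Group barrage sender IDs by the object shown in their screenshots.

    `barrages` is a list of (sender_id, object_label) pairs, where the
    object label is assumed to come from an upstream image-recognition
    step. Returns {object_label: [sender_ids]} for every object shared
    by at least `min_members` distinct senders -- the devices that would
    receive the first request message for joining the target group.
    """
    by_object = defaultdict(set)
    for sender_id, object_label in barrages:
        by_object[object_label].add(sender_id)
    return {
        label: sorted(senders)
        for label, senders in by_object.items()
        if len(senders) >= min_members
    }
```

For the "child and dog" example of fig. 6 (a), three senders sharing that label would be returned as one candidate group, while an object mentioned by a single sender is ignored.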
Illustratively, the step B1 further includes the following steps: the first electronic device receives a fourth input of the user on a third barrage and, in response to the fourth input, sends a first request message for joining the target group to the sending devices of M barrages in a case where M of the N barrages include picture content information of the target object. The N barrages include the third barrage, and the target object is an object included in the third barrage.
In an example, while the first electronic device is playing the target video, the user may trigger a bullet screen retrieval function of the first electronic device by clicking or long-pressing any bullet screen (i.e., the third bullet screen) in the video playing interface of the target video; using that bullet screen as an index, the first electronic device searches, within a predetermined time period, for all bullet screens (i.e., the M bullet screens) that contain the same picture content information of the target object. If several such barrages are found, the first electronic device sends a first request message to the devices that sent them, so as to request those devices to join the target group. It should be noted that the predetermined time period is all or part of the playing duration of the target video and may be set flexibly according to the actual application, which is not limited in the embodiment of the present invention.
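A minimal sketch of the barrage-indexed retrieval described above, assuming each bullet screen is a dict with hypothetical `object`, `time`, and `sender` fields, and modeling the predetermined time period as a symmetric window around the index barrage:

```python
def search_barrages_by_index(all_barrages, index_barrage, window_s=60.0):
    """Use one barrage (the 'third barrage') as an index and return every
    other barrage within `window_s` seconds of it that carries picture
    content information of the same target object.
    """
    target = index_barrage["object"]
    t0 = index_barrage["time"]
    return [
        b for b in all_barrages
        if b is not index_barrage          # exclude the index itself
        and b["object"] == target          # same target object
        and abs(b["time"] - t0) <= window_s  # within the time window
    ]
```

The senders of the returned barrages would then each receive the first request message.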
For example, after the target group is successfully established, the first electronic device may display the group member list of the target group overlaid at a target position (e.g., top, bottom, left, or right) of the video playing interface, or may pop up or hide the group member list at that position. In addition, the size, position, and the like of the group member list and/or of the discussion interface of the target group may be set according to actual requirements, which is not limited in the embodiment of the present invention.
Further optionally, the video playing method provided in the embodiment of the present invention further includes the following step C1:
Step C1: in the group discussion content display area of the target group, the first electronic device displays the discussion content sent by all group members in the target group in a bullet screen mode.
Illustratively, group members of the target group exchange discussion content in the group discussion content display area, and the first electronic device synchronizes that content, in barrage form, into the barrages of the target video and displays it in the playing interface of the target video.
For example, the group discussion content display area of the target group may be all or part of the display area of the video playing interface of the target video. Or, a discussion content display window is displayed on the video playing interface of the target video, and a display area of the discussion content display window is the group discussion content display area.
Illustratively, while the first electronic device displays the barrages sent by all group members of the target group on the playing interface of the target video, or while the user publishes discussion content in the group discussion content display area of the target group, the target video neither pauses nor jumps with the operation; it continues playing at its normal progress.
Therefore, all group members in the target group can see the discussion content in the group discussion content display area immediately, which increases video interactivity and achieves the purpose of making friends through videos.
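The synchronization between the group discussion area and the bullet-screen stream described in step C1 and the paragraphs above can be sketched as follows; the class and field names are illustrative, not from the patent:

```python
class GroupDiscussionSync:
    """Minimal sketch: discussion content posted in the target group's
    display area is mirrored into the target video's bullet-screen
    stream, so all group members see it at once."""

    def __init__(self):
        self.discussion_area = []   # group discussion content display area
        self.barrage_stream = []    # bullet screens over the playing video

    def post(self, member, text):
        message = {"member": member, "text": text}
        self.discussion_area.append(message)
        # Mirror the message into the barrage stream; playback itself
        # is untouched (no pause, no jump).
        self.barrage_stream.append(f"{member}: {text}")
        return message
```

Each `post` lands in both views, matching the behavior that group discussion appears in the playing interface without interrupting playback.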
Illustratively, after the step B3, the method further includes the following steps D1 and D2:
Step D1: the first electronic device receives a fifth input of the user for a target user in the target group.
Step D2: in response to the fifth input, if the target user agrees to the private chat, the first electronic device displays a private chat interface with the target user.
The target user is at least one of all group members in the target group.
Illustratively, the fifth input specifically includes: a click input of the user on a second identifier, a slide input of the user on the second identifier, or another feasible input of the user on the second identifier, which may be determined according to actual use requirements and is not limited in the embodiment of the present invention. The second identifier is a user identifier of the target user, for example, an avatar or a nickname of the target user.
For the click input and the slide input, reference can be made to the description of the first input above, and details are not repeated here.
For example, after receiving the fifth input, the first electronic device sends a private chat request to the target electronic device corresponding to the target user. After the target user agrees to the private chat, the first electronic device displays, within the current target group, a private chat interface with the target user. At this time, the user of the first electronic device can communicate one-to-one or one-to-many with the target user, and the communication content is displayed only in the private chat interface and is not sent out as a bullet screen.
Illustratively, the private chat interface and the group chat interface of the target group can be switched back and forth. For example, the user can switch by clicking the group chat identifier of the target group, or by clicking the area where the group chat interface and the private chat interface do not overlap.
Illustratively, the user can chat one-to-one with the target user in the private chat interface; the chat content can be seen only by the user and the target user and is not presented as a bullet screen.
For example, referring to fig. 6 (b), when the user wants a private chat with user B in the storyline discussion group, the user may click the user identifier of user B in the group member list 62 (i.e., the fifth input mentioned above), and the first electronic device then sends a private chat request to user B. When user B accepts the private chat request, as shown in fig. 7, a chat interface for the private chat with user B (e.g. 72 in fig. 7) appears in the storyline discussion group displayed on the current video playing interface 71. The user can then communicate one-to-one with user B in this chat interface, and the chat content can be seen only within the private chat interface.
In this way, like-minded users are gathered together by establishing the target group, so that a user can communicate with all users in the target group and can also establish a one-to-one private chat with a user in the group, which further guarantees the privacy of the private chat content.
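The private-chat handshake of steps D1 and D2 — request, agreement, then a private interface whose content is never shown as a bullet screen — can be sketched as a small state machine. All names are hypothetical:

```python
class PrivateChat:
    """Sketch of the private-chat flow: the first device sends a request
    to the target user; only after the target agrees can messages be
    exchanged, and they stay visible only to the two participants."""

    def __init__(self, initiator, target):
        self.initiator, self.target = initiator, target
        self.accepted = False
        self.messages = []

    def accept(self):
        # Target user agrees to the private chat (the second half of D2).
        self.accepted = True

    def send(self, sender, text):
        if not self.accepted:
            raise RuntimeError("target user has not agreed to the private chat")
        self.messages.append((sender, text))

    def visible_to(self, user):
        # Chat content is shown only inside the private chat interface,
        # never as a bullet screen to other group members.
        return list(self.messages) if user in (self.initiator, self.target) else []
```

A third group member querying `visible_to` gets nothing, mirroring the privacy guarantee stated above.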
Fig. 8 is a schematic diagram of a possible structure of an electronic device according to an embodiment of the present invention, and as shown in fig. 8, a first electronic device 800 includes: a receiving module 801 and an executing module 802, wherein:
a receiving module 801, configured to receive a first input of a user on a first barrage in a state of playing a target video, where the first barrage includes: a first playing time node and first picture content information of a first video frame of the target video.
The executing module 802, in response to the first input received by the receiving module 801, jumps to the first video frame to play the video content of the target video according to the first play time node.
Optionally, the first screen content information includes: a first image of a target object in the first video frame; wherein the first input is an input to a first image.
Optionally, as shown in fig. 8, the first electronic device 800 further includes: an output module 803, wherein: the receiving module 801 is further configured to receive a second input in a state where the target video is played; the output module 803 is configured to output a second bullet screen in response to the second input received by the receiving module 801; the second bullet screen includes: a second playing time node of a second video frame of the target video and second picture content information of the second video frame; and the second bullet screen is used for instructing a second electronic device to jump to the second video frame to play the video content of the target video.
Optionally, as shown in fig. 8, the first electronic device 800 further includes: a storage module 804, wherein: a receiving module 801, configured to receive a third input from the user; an executing module 802, configured to pause playing the target video in response to the third input received by the receiving module 801; the storage module 804 is configured to store a second playing time node and second picture content information of a second video frame; and when the playing of the target video is paused, the second video frame is a video frame displayed in a video playing picture of the target video.
Optionally, the third input is a sliding input of the user on a video playing picture of the target video; the second screen content information includes: and the sliding track of the sliding input is image content in a closed area enclosed on the second video frame.
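The sliding-track selection described above can be illustrated in Python. For simplicity this sketch takes the axis-aligned bounding box of the track points as the enclosed area, whereas a real implementation would rasterize the closed outline; the frame is modeled as a 2-D list of pixel values, and the function name is hypothetical:

```python
def crop_from_track(frame, track):
    """Given a paused frame (2-D list of pixels) and a sliding track --
    a list of (row, col) points whose closed outline selects the second
    picture content information -- return the bounding-box crop of the
    enclosed area.
    """
    rows = [p[0] for p in track]
    cols = [p[1] for p in track]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    # Slice the enclosed rows, then the enclosed columns of each row.
    return [row[c0:c1 + 1] for row in frame[r0:r1 + 1]]
```

The returned sub-image would be stored, together with the second playing time node, as the content of the second bullet screen.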
Optionally, as shown in fig. 8, the first electronic device 800 further includes: an obtaining module 805, a sending module 806, and a creating module 807, wherein: the obtaining module 805 is configured to obtain N barrages in a video playing picture of the target video; the sending module 806 is configured to send a first request message for joining a target group to the sending devices of M barrages when M of the N barrages obtained by the obtaining module 805 include the same picture content information of the target object; and the creating module 807 is configured to create the target group upon receiving second request messages, sent by at least two sending devices, agreeing to join the target group.
Optionally, as shown in fig. 8, the first electronic device 800 further includes: a display module 808, wherein: a display module 808, configured to display the discussion content sent by all group members in the target group in a bullet screen form in the group discussion content display area of the target group.
In the first electronic device provided in the embodiment of the present invention, when the first electronic device is playing the target video and receives a first input of the user on the first barrage, it can directly jump to the first video frame and play the video content of the target video based on the first play time node, because the first barrage includes the first play time node and the first picture content information of the first video frame of the target video. The user can thus directly view the specific video content of interest through the barrage.
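The core jump behavior — a barrage carries a play time node, and an input on it seeks playback to that frame — can be sketched as a toy player. The class name and dict keys are illustrative assumptions, not the patent's API:

```python
class VideoPlayer:
    """Toy player: an input on a bullet screen that carries a play time
    node seeks playback to that position and continues playing."""

    def __init__(self, duration_s):
        self.duration_s = duration_s
        self.position_s = 0.0

    def on_barrage_input(self, barrage):
        # The first barrage includes the first play time node of the
        # first video frame; jump there without interrupting playback.
        node = barrage["play_time_node"]
        if not 0 <= node <= self.duration_s:
            raise ValueError("time node outside the target video")
        self.position_s = float(node)
        return self.position_s
```

Clicking a barrage stamped at 125 s on a one-hour video simply moves the playhead to 125 s, which is the "jump to the first video frame" step of the method.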
The first electronic device provided in the embodiment of the present invention is capable of implementing each process implemented by the first electronic device in the foregoing method embodiments, and is not described here again to avoid repetition.
It should be noted that, as shown in fig. 8, modules that are necessarily included in the first electronic device 800 are illustrated by solid line boxes, such as an execution module 802; modules that may or may not be included in the first electronic device 800 are illustrated with dashed boxes, such as the sending module 806.
As shown in fig. 9, taking the first electronic device as a terminal device as an example, fig. 9 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. The terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the configuration shown in fig. 9 does not constitute a limitation of the terminal device, and that the terminal device 100 may include more or fewer components than those shown, or combine some components, or arrange components differently. In the embodiment of the present invention, the terminal device 100 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The user input unit 107 is configured to receive a first input of a first bullet screen from a user in a state of playing a target video, where the first bullet screen includes: a first playing time node and first picture content information of a first video frame of the target video; the processor 110, in response to the first input received by the user input unit 107, jumps to the first video frame to play the video content of the target video according to the first play time node.
In the terminal device provided by the embodiment of the present invention, when the terminal device is playing a target video and receives a first input of a user on a first barrage, the first barrage includes a first playing time node and first picture content information of a first video frame of the target video, so the terminal device can directly jump to the first video frame and play the video content of the target video based on the first playing time node; the user can thus directly view the specific video content of interest through the barrage.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device 100 provides the user with wireless broadband internet access via the network module 102, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 101.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device 100. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 9, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device 100, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device 100, connects various parts of the entire terminal device 100 by various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device 100. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor; when executed by the processor, the computer program implements each process of the above video playing method embodiment and can achieve the same technical effect, which is not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A video playing method is applied to a first electronic device, and is characterized by comprising the following steps:
receiving a first input of a user to a first barrage in a state of playing a target video, wherein the first barrage comprises: a first playing time node and first picture content information of a first video frame of the target video;
and responding to the first input, and jumping to the first video frame to play the video content of the target video according to the first playing time node.
2. The method of claim 1, wherein the first picture content information comprises: a first image of a target object in the first video frame;
wherein the first input is an input to the first image.
3. The method according to claim 1, wherein in a state where the target video is played, the method further comprises:
receiving a second input;
outputting a second barrage in response to the second input;
wherein the second bullet screen includes: a second playing time node of a second video frame of the target video and second picture content information of the second video frame; and the second bullet screen is used for indicating the second electronic equipment to jump to the second video frame to play the video content of the target video.
4. The method of claim 3, wherein prior to receiving the second input, the method further comprises:
receiving a third input of the user;
in response to the third input, pausing the playing of the target video and storing a second playing time node and second picture content information of a second video frame;
and the second video frame is a video frame displayed in a video playing picture of the target video when the target video is paused to be played.
5. The method of claim 4, wherein the third input is a sliding input by a user on a video playback screen of the target video;
the second screen content information includes: and the sliding track of the sliding input is image content in a closed area enclosed on the second video frame.
6. The method of claim 1, further comprising:
acquiring N barrages in a video playing picture of the target video;
under the condition that M of the N barrages include the same picture content information of the target object, sending a first request message for joining a target group to sending equipment of the M barrages;
and under the condition of receiving a second request message which is sent by at least two sending devices and agrees to join the target group, creating the target group.
7. The method of claim 6, wherein after the creating of the target group, the method further comprises:
and displaying the discussion contents sent by all group members in the target group in a bullet screen mode in the group discussion content display area of the target group.
8. A first electronic device, wherein the first electronic device comprises:
a receiving module, configured to receive a first input of a first barrage from a user in a state of playing a target video, where the first barrage includes: a first playing time node and first picture content information of a first video frame of the target video;
and the execution module responds to the first input received by the receiving module, and jumps to the first video frame to play the video content of the target video according to the first playing time node.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video playback method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the video playback method according to one of claims 1 to 7.
CN201911203666.6A 2019-11-29 2019-11-29 Video playing method and electronic equipment Active CN110944224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911203666.6A CN110944224B (en) 2019-11-29 2019-11-29 Video playing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110944224A true CN110944224A (en) 2020-03-31
CN110944224B CN110944224B (en) 2021-11-30

Family

ID=69909501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911203666.6A Active CN110944224B (en) 2019-11-29 2019-11-29 Video playing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110944224B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104661096A (en) * 2013-11-21 2015-05-27 深圳市快播科技有限公司 Video barrage adding method and device, video playing method and video player
CN105516821A (en) * 2015-12-14 2016-04-20 广州弹幕网络科技有限公司 Method and device for screening bullet screen
CN105847995A (en) * 2016-05-16 2016-08-10 上海幻电信息科技有限公司 Method for video position jumping via bullet screen anchor points
CN106412622A (en) * 2016-11-14 2017-02-15 百度在线网络技术(北京)有限公司 Method and apparatus for displaying barrage information during video content playing process
CN107948760A (en) * 2017-11-30 2018-04-20 上海哔哩哔哩科技有限公司 Barrage control method for playing back, server and barrage broadcasting control system
US20180191987A1 (en) * 2017-01-04 2018-07-05 International Business Machines Corporation Barrage message processing
CN110087117A (en) * 2019-04-26 2019-08-02 维沃移动通信有限公司 Video playing method and terminal
CN110248258A (en) * 2019-07-18 2019-09-17 腾讯科技(深圳)有限公司 Video clip recommendation method and apparatus, storage medium, and computer device
US20190320217A1 (en) * 2018-04-17 2019-10-17 Boe Technology Group Co., Ltd. Method and device for pushing a barrage, and electronic device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
蒲骊衡: "A Study on the Identity of Danmu (Bullet-Screen) Fan Groups", Comparative Study of Cultural Innovation *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022042763A1 (en) * 2020-08-28 2022-03-03 荣耀终端有限公司 Video playback method, and device
CN112367561A (en) * 2020-10-27 2021-02-12 南京维沃软件技术有限公司 Barrage display method and device, electronic equipment and storage medium
CN112770129A (en) * 2020-12-31 2021-05-07 深圳市镜玩科技有限公司 Live broadcast-based group chat establishing method, related device, equipment and medium
CN112770129B (en) * 2020-12-31 2023-08-29 深圳市镜玩科技有限公司 Live broadcast-based group chat establishing method, device, server and medium
CN112822559A (en) * 2021-01-05 2021-05-18 广州视源电子科技股份有限公司 Bullet screen data processing method and processing device
CN113556277A (en) * 2021-07-16 2021-10-26 网易(杭州)网络有限公司 Message processing method and device, nonvolatile storage medium and electronic device
CN113905125A (en) * 2021-09-08 2022-01-07 维沃移动通信有限公司 Video display method and device and electronic equipment
CN113905125B (en) * 2021-09-08 2023-02-21 维沃移动通信有限公司 Video display method and device, electronic equipment and storage medium
CN115103054A (en) * 2021-09-22 2022-09-23 维沃移动通信(杭州)有限公司 Information processing method, information processing apparatus, electronic device, and medium
CN115103054B (en) * 2021-09-22 2023-10-13 维沃移动通信(杭州)有限公司 Information processing method, device, electronic equipment and medium

Also Published As

Publication number Publication date
CN110944224B (en) 2021-11-30

Similar Documents

Publication Publication Date Title
CN110944224B (en) Video playing method and electronic equipment
CN111049663B (en) Method, electronic device and medium for creating topic group
CN110087117B (en) Video playing method and terminal
CN111010332A (en) Group chat method and electronic equipment
CN110933511B (en) Video sharing method, electronic device and medium
CN111596818A (en) Message display method and electronic equipment
CN110099296B (en) Information display method and terminal equipment
CN110045939A Multi-screen control method and terminal
CN110602565A (en) Image processing method and electronic equipment
CN110971970B (en) Video processing method and electronic equipment
CN110944236B (en) Group creation method and electronic device
CN109828731B (en) Searching method and terminal equipment
CN107786827A Video capture method, video playing method, device and mobile terminal
CN111030917B (en) Message display method and electronic equipment
CN110087149A Video image sharing method, device and mobile terminal
CN110989950A (en) Sharing control method and electronic equipment
CN111383175A (en) Picture acquisition method and electronic equipment
CN110597437A (en) Screen capturing method and terminal equipment
CN110752981A (en) Information control method and electronic equipment
CN108052258B (en) Terminal task processing method, task processing device and mobile terminal
CN111399715B (en) Interface display method and electronic equipment
CN111049977B (en) Alarm clock reminding method and electronic equipment
CN110958350B (en) Notification message processing method and electronic equipment
CN109766156B (en) Session creation method and terminal equipment
CN111610909B (en) Screenshot method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant