CN110602565A - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN110602565A
Authority
CN
China
Prior art keywords
image
user
input
video interface
target video
Legal status
Pending
Application number
CN201910814241.2A
Other languages
Chinese (zh)
Inventor
庄文龙
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910814241.2A
Publication of CN110602565A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 - Structure of client; Structure of client peripherals
    • H04N21/422 - Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42204 - User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 - Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/47 - End-user applications
    • H04N21/488 - Data services, e.g. news ticker
    • H04N21/4884 - Data services, e.g. news ticker for displaying subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an image processing method and an electronic device, and relates to the field of communication technologies. The image processing method comprises the following steps: intercepting a first image currently displayed on a target video interface under the condition that first bullet screen information is displayed on the target video interface; receiving a first input of a user to the first image; and, in response to the first input, sending the first image to a target object. According to the embodiments of the invention, when the first bullet screen information is displayed on the target video interface, a screenshot of the target video interface is taken and the intercepted image is shared with other users watching the same video at the same time. A user can therefore quickly intercept a picture of interest and store or share the image content while watching a video, every viewer of the video can see the pictures and bullet screen information shared by others, and operation time is reduced.

Description

Image processing method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and electronic equipment.
Background
With the increasing intelligence of electronic devices, electronic devices have an ever greater influence on users' daily lives. Users can use electronic devices to enrich their spare time and to socialize with others conveniently. Watching videos on an electronic device is a common form of entertainment; many users like to share videos with friends and to share interesting video frames with others while watching. In the prior art, a user can share a video frame only with the assistance of other application programs, and interaction among multiple users watching the same video is unfriendly, which degrades the video-watching experience.
Disclosure of Invention
The embodiments of the invention provide an image processing method and an electronic device, aiming to solve the problem of poor information interaction between users who are watching the same video at the same time.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including:
intercepting a first image currently displayed on a target video interface under the condition that first bullet screen information is displayed on the target video interface;
receiving a first input of the first image by a user;
in response to the first input, sending the first image to a target object.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the image intercepting module is used for intercepting a first image currently displayed on a target video interface under the condition that first bullet screen information is displayed on the target video interface;
the first receiving module is used for receiving a first input of a user to the first image;
a first response module to send the first image to a target object in response to the first input.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the image processing method as described above.
Therefore, in the embodiments of the invention, when the first barrage information is displayed on the target video interface, a screenshot of the target video interface is taken, and the intercepted image is shared with other users watching the same video at the same time. In this way a user can quickly intercept a picture of interest and store or share the image content while watching a video, every viewer of the video can see the pictures and barrage information shared by others, and operation time is reduced.
Drawings
FIG. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating operation of a third input according to an embodiment of the present invention;
FIG. 3 is a second flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a first identifier according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a first preset area according to an embodiment of the present invention;
FIG. 6 is a third flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 7 is a diagram of a second image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating an operation of image sharing according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating an image editing mode according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an operation of barrage editing according to an embodiment of the present invention;
FIG. 11 is a second schematic diagram illustrating an operation of barrage editing according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a second preset area according to an embodiment of the present invention;
FIG. 13 is a second schematic diagram of the second preset area according to an embodiment of the present invention;
FIG. 14 is a diagram illustrating the operation of storing an image according to an embodiment of the present invention;
FIG. 15 is a diagram illustrating operations for deleting an image according to an embodiment of the present invention;
FIG. 16 is a schematic structural diagram of an electronic device according to an embodiment of the invention;
FIG. 17 is a schematic structural diagram of an electronic device according to another embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an image processing method according to an embodiment of the present invention is applied to a first electronic device, and includes:
step 101, intercepting a first image currently displayed on a target video interface under the condition that first bullet screen information is displayed on the target video interface;
the target video interface is a video interface which is currently watched by a user, when the user watches a video, if video content which the user wants to share with others exists, a barrage can be sent in the currently watched target video interface, that is, the first barrage information is sent by the user using the electronic device, it needs to be noted that the first barrage information can be a barrage with any content and any form, and can also be a barrage with specific content and any form.
When the first bullet screen information is displayed on the target video interface and the user has a screenshot requirement, the content currently displayed on the target video interface is intercepted to generate the first image. When the first electronic device acquires the first bullet screen information, it may immediately and actively intercept the currently displayed video content as the first image, or it may intercept the first image after acquiring a preset operation performed by the user on the first bullet screen information.
Since the first bullet screen information may be a bullet screen of any content and any form, or a bullet screen of specific content and form, two cases arise. If the first bullet screen information is set to be a bullet screen of any content and any form, then as long as the electronic device detects first bullet screen information sent by its own user, it performs the image interception operation and stores the intercepted first image. If the first bullet screen information is set to be a bullet screen of specific content and form, for example its content is set to "screenshot", the electronic device performs the image interception operation only when it detects that the first bullet screen information sent by its own user is "screenshot", and stores the intercepted first image.
Step 102, receiving a first input of the user to the first image;
the first input is used for identifying the image sharing requirement of the user, and the first input can be set according to the requirement. Optionally, the first input is an operation of dragging the first image to slide left by a finger of a user, and the electronic device obtains the input of the user to the first image in real time after performing a screenshot operation on the target video interface.
Step 103, responding to the first input, and sending the first image to a target object.
In this embodiment, when the electronic device obtains the first input of the user to the first image, it is considered that the user needs to share the first image, and the first image is sent to a target object. The target object may be an electronic device held by another user who is currently watching the target video interface. After the electronic device sends the first image to the target object, the first image is displayed on the target video interface of the target object, so that other users watching the target video interface on the target object can also see the first image, and video frames can be shared among different users in real time.
Optionally, the first image may also be shared by dragging the first barrage information. For example, the user may drag the bullet screen with two fingers so that it slides out of the screen to the left; when the electronic device detects that the user has dragged the first barrage information out of the screen to the left, it sends the first image to a target object, so that the first image can be shared more quickly.
In this embodiment, when the first barrage information is displayed on the target video interface, the image currently displayed on the target video interface is intercepted and shared with other users watching the video at the same time. A user can therefore quickly intercept a picture of interest and store or share the image content while watching a video, every viewer of the video can see the pictures and barrage information shared by others (including the viewer himself), and operation time is reduced.
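As a hedged illustration only, the control flow of steps 101 to 103 can be sketched in plain Kotlin. Every type and function name below (VideoInterface, Peer, ScreenshotShareController, and so on) is a hypothetical placeholder introduced for this sketch and is not named by the disclosure; the sketch only shows the order of operations: capture the displayed frame when the viewer's own bullet screen appears, then send the captured image to the other viewers when the first input is received.

```kotlin
// Minimal sketch of steps 101-103; all names are illustrative, not from the patent.
data class Frame(val pixels: ByteArray)                     // the currently displayed video frame
data class Danmaku(val text: String, val fromSelf: Boolean)
data class Screenshot(val frame: Frame, val danmaku: Danmaku)

interface Peer { fun receive(image: Screenshot) }           // another viewer of the same video
interface VideoInterface {
    fun currentFrame(): Frame
    fun peersWatching(): List<Peer>
}

class ScreenshotShareController(private val video: VideoInterface) {
    private var pending: Screenshot? = null

    // Step 101: the viewer's own bullet screen is displayed, so intercept the frame.
    fun onDanmakuDisplayed(d: Danmaku) {
        if (d.fromSelf) pending = Screenshot(video.currentFrame(), d)
    }

    // Steps 102-103: the first input (e.g. dragging the image leftward) shares it.
    fun onFirstInput() {
        val shot = pending ?: return
        video.peersWatching().forEach { it.receive(shot) }   // send to the target objects
    }
}
```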
It should be noted that the target video interface includes a first preset area, where the first preset area displays at least one thumbnail, and each thumbnail indicates one screenshot image; the thumbnails are arranged in the time sequence of the screenshot images.
The first preset area may be located in any area of the target video interface, such as the left side or the right side. After a screenshot operation is carried out on the target video interface, the intercepted images are displayed in the first preset area as thumbnails, and multiple thumbnails may be arranged according to the interception time of the images.
Optionally, the method further comprises: receiving a second input of the thumbnail from the user; and responding to the second input, and updating the thumbnail displayed in the first preset area.
The second input may be a sliding input applied by the user to the thumbnails; that is, the user can slide the thumbnails up and down with a finger to page through and preview the other screenshot images.
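The first preset area can be thought of as an ordered strip of thumbnails. The following sketch, again with hypothetical names and under the assumption that thumbnails are simply kept in a list sorted by capture time, illustrates how a second (swipe) input could page the visible thumbnails; it is not an implementation prescribed by the disclosure.

```kotlin
// Illustrative model of the first preset area: thumbnails in screenshot-time order,
// with a swipe ("second input") paging which ones are currently shown.
data class Thumbnail(val imageId: Long, val capturedAtMillis: Long)

class PresetArea(private val pageSize: Int = 3) {
    private val thumbnails = mutableListOf<Thumbnail>()
    private var firstVisible = 0

    fun add(t: Thumbnail) {
        thumbnails.add(t)
        thumbnails.sortBy { it.capturedAtMillis }    // arranged by interception time
    }

    fun visible(): List<Thumbnail> = thumbnails.drop(firstVisible).take(pageSize)

    // Second input: slide up or down to update the thumbnails displayed in the area.
    fun onSwipe(up: Boolean) {
        firstVisible = (firstVisible + (if (up) pageSize else -pageSize))
            .coerceIn(0, maxOf(0, thumbnails.size - pageSize))
    }
}
```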
Specifically, the first image includes the first bullet screen information, and the first bullet screen information and the first image are distributed on different image layers.
When the first barrage information is displayed on the target video interface, the first image currently displayed on the target video interface is intercepted, and the first image contains the first barrage information input by the user. The first barrage information and the first image are displayed and stored on different image layers. In this way, the user can edit the first barrage information alone, or process the first image alone.
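A minimal sketch of this layer separation, assuming the captured frame and the bullet screen text are held as two independent layers that are only combined at display time, is given below; Layer and CapturedImage are illustrative placeholders, not structures defined by the patent.

```kotlin
// The frame and the bullet screen live on separate layers, so the danmaku can be
// shown, hidden (cf. the second identifier described later), or edited on its own.
data class Layer(val name: String, val content: String)

data class CapturedImage(val frameLayer: Layer, var danmakuLayer: Layer?) {
    // Compose the visible image with or without the bullet-screen layer.
    fun compose(showDanmaku: Boolean): List<Layer> {
        val d = danmakuLayer
        return if (showDanmaku && d != null) listOf(frameLayer, d) else listOf(frameLayer)
    }

    // Editing the bullet screen never touches the frame layer.
    fun editDanmaku(newText: String) {
        danmakuLayer = danmakuLayer?.copy(content = newText)
    }
}
```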
Optionally, the method of intercepting the first image currently displayed by the target video interface may include, but is not limited to, the following implementation manners:
the first method is as follows: when first barrage information is displayed on a target video interface, the electronic equipment actively intercepts the currently displayed content of the target video interface as a first image;
the second method comprises the following steps: when the first bullet screen information is displayed on the target video interface, the electronic equipment does not perform the operation of actively intercepting the image, and firstly: receiving a third input of the first barrage information within a preset time period from a user; and in response to the third input, intercepting a first image currently displayed by the target video interface.
In mode two, the content of the first barrage information may include text content and a display time, for example "screenshot+15s", meaning that the first barrage information is displayed on the target video interface for 15 seconds. The preset time period is the display time of the first bullet screen information. Within the preset time period, if a third input of the user to the first barrage information is received, the user is considered to have a screenshot requirement, and the video frame currently displayed on the target video interface is intercepted to generate the first image; if the third input is not received within the preset time period, no screenshot is taken, and the first bullet screen information disappears after the preset time period. Optionally, the third input is an input in which the user drags the first bullet screen information so that it slides out of the target video interface along a preset direction.
Taking the content of the first barrage information as "screenshot+15s" and the third input as the user dragging the first barrage information leftward out of the target video interface with one finger as an example, as shown in fig. 2, when the user drags the "screenshot+15s" barrage displayed in the target video interface with a finger and slides it out of the interface to the left, the electronic device intercepts the image currently displayed on the target video interface.
In this embodiment, by sending bullet screen information that carries a display time and triggering the screenshot manually through an operation on the bullet screen information, the user can capture the target video frame more accurately.
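For the second mode, the bullet screen text itself carries the capture window. A small, hedged sketch of how such a "screenshot+15s" message might be parsed and how the third input could be checked against the display window is shown below; the message format and function names are assumptions made for illustration.

```kotlin
// Parse an assumed "screenshot+15s" bullet screen and decide whether a third input
// that arrives at inputAtMillis still falls inside the preset display window.
private val TIMED_DANMAKU = Regex("""screenshot\+(\d+)s""", RegexOption.IGNORE_CASE)

fun displayWindowMillis(danmakuText: String): Long? =
    TIMED_DANMAKU.find(danmakuText.replace(" ", ""))
        ?.groupValues?.get(1)?.toLong()?.times(1000)

fun shouldCapture(danmakuText: String, sentAtMillis: Long, inputAtMillis: Long): Boolean {
    val window = displayWindowMillis(danmakuText) ?: return false   // not a timed danmaku
    return inputAtMillis - sentAtMillis <= window                   // third input arrived in time
}
```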
Optionally, as shown in fig. 3, after the step 102, the method further includes:
Step 301, displaying a first identifier on the target video interface;
the electronic equipment carries out screenshot operation on the content currently displayed on the target video interface, generates a first image and stores the first image in a background, and then displays the first identification on the target video interface, wherein the first identification can be an operating button which can be displayed in a flashing mode so as to prompt a user. The first identification is used for controlling the thumbnail of the screenshot image to be expanded and displayed or hidden in the target video interface, and a user can check the thumbnail of the screenshot image by operating the first identification. If the user does not operate the first identifier within a preset time period, the first identifier is automatically hidden so as to avoid influencing the experience of the user in watching the video, and the first identifier is automatically popped up when the user clicks the target video interface.
Step 302, receiving a fourth input of the first identifier by the user;
the fourth input may be set according to a requirement, such as a sliding input, a clicking input, a long-time pressing input, and the like of the first identifier by a user, and optionally, the fourth input is an operation of dragging the first identifier to slide right by the user.
Step 303, in response to the fourth input, displaying a thumbnail of the first image in a first preset area of the target video interface.
After receiving the fourth input of the user to the first identifier, the electronic device displays a thumbnail of the first image in a first preset area of the target video interface; optionally, the first preset area is located on the left side of the target video interface.
Taking the first identifier as arrow button 1 as an example, as shown in fig. 4, if the user drags arrow button 1 to the right, the first preset area is displayed on the left side of the target video interface; as shown in fig. 5, the direction of arrow button 1 then changes, and the user can intuitively view the thumbnail of the first image in the first preset area. If the user wants to hide the first preset area, arrow button 1 can be dragged to slide left, the first preset area is hidden, the interface returns to full-screen playback, and the target video continues to play in full screen.
Optionally, as shown in fig. 5, the first preset area includes a second identifier 2. The first barrage information is displayed in the first image, the second identifier 2 may be arranged below the first preset area, and the second identifier is used to control whether the first barrage information is displayed in or removed from the first image.
The second identifier may be a button as shown in fig. 5. After the electronic device intercepts the first image currently displayed on the target video interface, the thumbnail of the first image is displayed in the first preset area, and the first image contains the first barrage information. As shown in fig. 5, when the second identifier is in the on state, the first barrage information is displayed in the first image; if the user slides the second identifier leftward, the first barrage information disappears from the first image.
Optionally, as shown in fig. 6, the method further includes:
Step 601, receiving a fifth input of a user to a first preset area of the target video interface;
the fifth input may be an input by the user sliding left with two fingers together. The fifth input is used for identifying a requirement that a user combines and shares a plurality of images, that is, if the user wants to combine a plurality of images in the first preset area and share the combined images, the fifth input can be performed at any position of the first preset area, and the electronic device obtains the fifth input of the user to the first preset area in real time.
Step 602, responding to the fifth input, performing stitching processing on part or all of the images displayed in the first preset area to generate a second image;
and the electronic equipment acquires thumbnails of all or part of images input by the user to the fifth preset area and splices the thumbnails to form an image, namely the second image.
Step 603, displaying the thumbnail of the second image in the first preset area, and sending the second image to a target object.
After the thumbnails of all the images in the first preset area are spliced to generate the second image, the thumbnail of the second image is displayed in the first preset area. At the same time, the electronic device sends the second image to a target object. The target object may be an electronic device held by another user who is currently watching the target video interface, and the other users currently watching the target video interface can also see the second image, so that video frames can be shared among different users in real time.
Optionally, after the second image is successfully shared, a graphic indicating successful sharing may be displayed in the second image. Because the spliced second image contains the content of multiple images, the contents of multiple images can be shared in a single operation, which reduces operation time.
Taking the case where the first preset area contains the thumbnails of image A and image B and the successfully-shared mark is a heart-shaped graphic as an example, the successfully shared second image is shown in fig. 7. The second image still contains the content of image A and image B, including the bullet screen information a of image A and the bullet screen information b of image B, and the heart-shaped graphic is displayed at the lower right of the second image to indicate that the second image has been shared successfully.
It should be noted that, if the user intends to share only some of the images displayed in the first preset area, the thumbnails in the first preset area can be switched to a selectable state by long-pressing the first preset area. The user may then select several images to be shared and perform the first input on the selected images, and the electronic device splices the selected images and sends the image generated by the splicing to the target object.
Taking the case where the first preset area contains the thumbnails of image A and image B, and the first input is an operation in which the user drags a thumbnail leftward with one finger, as shown in fig. 8, when the user shares image B, one finger drags the thumbnail of image B to slide left, and other users watching the target video interface at the same time can receive and view image B. The user may also select image A and image B at the same time and share both by sliding a finger to the left.
Optionally, the method further comprises:
receiving a sixth input of the user to the N thumbnail images displayed in the first preset area; in response to the sixth input, performing image splicing on the N thumbnail images to generate a third image; displaying a thumbnail of the third image in the first preset area, and sending the third image to a target object; wherein N is an integer greater than 1.
The sixth input may be a selection input of the user to the N thumbnail images, for example long-pressing a thumbnail and, once the thumbnails become selectable, selecting them. When the electronic device receives the sixth input, the user is considered to intend to splice the selected thumbnails, and image splicing is performed on the N thumbnail images to generate the third image. The thumbnail of the third image is displayed in the first preset area, and the third image contains the images of the N thumbnails together with the bullet screen information in those images.
The electronic device sends the third image to a target object. The target object may be an electronic device held by another user who is currently watching the target video interface, and that user can also see the third image, so that video frames can be shared among different users in real time.
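The splicing used for both the second image (fifth input) and the third image (sixth input) can be sketched as a simple concatenation that keeps every source image's bullet screen information. The sketch below is illustrative only; the pixel-level join is hidden behind a placeholder type, and the peer-sending callback is an assumption.

```kotlin
// Splice the selected screenshots into one composite image and share it in one step.
data class SimpleImage(val id: Long, val danmaku: List<String>)

fun stitch(selected: List<SimpleImage>): SimpleImage {
    require(selected.isNotEmpty()) { "nothing selected to stitch" }
    return SimpleImage(
        id = System.nanoTime(),                        // placeholder identifier
        danmaku = selected.flatMap { it.danmaku }      // keep every image's bullet screens
    )
}

fun shareStitched(selected: List<SimpleImage>, peers: List<(SimpleImage) -> Unit>) {
    val composite = stitch(selected)                   // the second/third image
    peers.forEach { send -> send(composite) }          // one operation shares them all
}
```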
Optionally, the user may also edit an image in the first preset area or the bullet screen information in the image. Specifically, the electronic device receives a seventh input of the user to second bullet screen information in a first thumbnail in the first preset area of the target video interface, and, in response to the seventh input, updates the bullet screen content of the second bullet screen information.
The seventh input may be an operation in which the user writes updated bullet screen content. For example, the user can double-tap the thumbnail of image B in the first preset area, and image B is then displayed, not full-screen, in the middle of the upper layer of the target video interface, as shown in fig. 9. At this point image B enters an editing mode, the bullet screen information b in image B also enters the editing mode, and the user can add bullet screen information to image B or modify the content of bullet screen information b.
As shown in fig. 10, assuming that the bullet screen information modified by the user is bullet screen information b1, after editing of the image or bullet screen is completed, dragging the enlarged image B to slide left automatically saves the modification and exits the editing mode; as shown in fig. 11, dragging image B to slide right after editing cancels the modification and automatically exits the editing mode.
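The edit flow described above (double-tap to edit, drag left to save, drag right to cancel) can be summarized in a short sketch; DanmakuEditor is a hypothetical helper, not a component named by the patent.

```kotlin
// Seventh input: edit the bullet screen of a thumbnail; left drag saves, right drag cancels.
class DanmakuEditor(private var saved: String) {
    private var draft: String? = null

    fun enterEditMode() { draft = saved }           // e.g. double-tap on the thumbnail
    fun type(newText: String) { draft = newText }   // user rewrites the bullet screen
    fun dragLeft(): String { saved = draft ?: saved; draft = null; return saved }  // save & exit
    fun dragRight(): String { draft = null; return saved }                          // cancel & exit
}
```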
Specifically, the method may further include:
under the condition that third bullet screen information sent by a target user is displayed on the target video interface, displaying a thumbnail of a fourth image in a second preset area of the target video interface;
wherein the target user may be a user who is simultaneously viewing the target video interface; the third bullet screen information is bullet screen information sent by other users who watch the target video interface simultaneously; the fourth image is an image sent by other users who watch the target video interface simultaneously. In the process of watching the video, the user using the electronic equipment can also receive images shared by other users watching the same video. When the third bullet screen information sent by the target user is displayed on the target video interface, if the images shared by other users are received, displaying the fourth images sent by other users in a second preset area of the target video interface.
The second preset area and the first preset area may be the same area or different areas. Taking the case where they are the same area as an example, as shown in fig. 12, when the third bullet screen information sent by the target user is displayed on the target video interface, arrow button 1 is displayed on the target video interface; when the user drags arrow button 1 to the right, the second preset area is expanded, and the user can view the thumbnail of image C sent by another user in the second preset area, as shown in fig. 13. It should be noted that, because the second preset area is the same as the first preset area, not only image C sent by the other user but also images A and B generated by the screenshots of the electronic device itself are displayed in the second preset area.
Optionally, the method for displaying images sent by other users who view the target video interface simultaneously may further include:
displaying third barrage information sent by a target user on the target video interface, and displaying a thumbnail of a fourth image in a second preset area of the target video interface under the condition that eighth input of the third barrage information by the user is received; and the fourth image is a screenshot image sent by the target user.
In this embodiment, while the video is being watched, if the target video interface displays third barrage information sent by the target user and an eighth input of the user to the third barrage information is received, an image shared by another user is considered to have been received, and the thumbnail of the fourth image sent by the other user is displayed in the second preset area of the target video interface. The eighth input may be a tap input, a drag input, or the like applied by the user to the third bullet screen information.
It should be noted that the second preset area and the first preset area may be the same area, that is: the first preset area (or the second preset area) not only displays thumbnails of the images intercepted by the electronic equipment, but also can display received thumbnails of the fourth images shared by other users; the second preset area and the first preset area may also be different areas, for example: the first preset area is the left side of the target video interface, and the second preset area is the right side of the target video interface.
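On the receiving side, the handling of an image shared by another viewer can be sketched as follows. The message shape and the representation of the two preset areas as simple lists are assumptions for illustration only.

```kotlin
// When another viewer shares a screenshot, show its thumbnail in the second preset
// area, which may or may not be the same strip as the first preset area.
data class SharedImage(val senderId: String, val imageId: Long, val danmaku: String)

class IncomingShareHandler(
    private val ownArea: MutableList<Long>,      // first preset area (own screenshots)
    private val sharedArea: MutableList<Long>,   // second preset area
    private val sameArea: Boolean                // true if both areas are one strip
) {
    fun onSharedImage(msg: SharedImage) {
        val target = if (sameArea) ownArea else sharedArea
        target.add(msg.imageId)                  // display the fourth image's thumbnail
    }
}
```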
Further, after acquiring a fourth image sent by the target user, the electronic device may store the fourth image, optionally:
receiving a ninth input of the fourth image by the user; in response to the ninth input, storing the fourth image.
The ninth input is used to identify the user's image storage requirement; the ninth input may be an input in which the user slides toward the lower left of the screen with one finger. After receiving the fourth image shared by another user, the electronic device acquires the user's operations on the fourth image in real time. When the electronic device acquires the ninth input of the user to the fourth image, the user is considered to need to store the fourth image, and the fourth image is stored under a preset storage path of the electronic device, such as an album folder.
Taking the ninth input as an input in which the user slides toward the lower left of the screen with one finger, and the fourth image shared by another user as image C, as shown in fig. 14, image C contains the first bullet screen information sent by the user of the electronic device itself as well as the third bullet screen information sent by the other user. When the user slides a finger toward the lower left at the thumbnail position of image C, the electronic device automatically stores image C under the preset storage path.
In this embodiment, the electronic device can acquire images and barrage information shared by other users in real time, so that information interaction when multiple users watch the same video together is realized.
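The storing step itself is not specified beyond "a preset storage path such as an album folder"; a minimal sketch under the assumption of a plain file-system save is given below.

```kotlin
// Ninth input: store the shared image under a preset storage path (assumed here to
// be a local "album" directory; the real path and format are not fixed by the patent).
import java.io.File

fun storeSharedImage(bytes: ByteArray, albumDir: File = File("album")): File {
    albumDir.mkdirs()
    val out = File(albumDir, "shared_${System.currentTimeMillis()}.png")
    out.writeBytes(bytes)
    return out
}
```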
Optionally, when the second preset area contains thumbnails of multiple images sent by the target user, the user may also store multiple images at the same time. For example, the user can slide one finger to the edge of the screen at any position of the second preset area, and it is then considered that the user intends to combine and store multiple images. The electronic device splices all the images in the second preset area into one image and stores the spliced image under a preset storage path of the electronic device, such as an album folder.
If the user intends to store only some of the images displayed in the second preset area, the thumbnails in the second preset area can be switched to a selectable state by long-pressing the second preset area, and the user can select several images and store the selected images.
After the first image intercepted by the electronic device, or the fourth image shared by another user, is displayed on the target video interface, the user can also edit the image. For example, the user can drag the thumbnail of an image to slide right with one finger to delete the image; as shown in fig. 15, if the user drags image B to slide right, the electronic device deletes image B. Alternatively, all images can be deleted by sliding two fingers to the right together, or the thumbnails of several images can be selected and dragged to slide right with one finger to delete the selected images. It should be noted that the operations in this embodiment include, but are not limited to, the above forms, and the user can also implement functions such as merging, sharing, storing, deleting, enlarging, and editing through other forms of operation.
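The gestures mentioned throughout the embodiments map onto a small set of operations. The sketch below only summarizes the examples given in the description (left drag to share, right drag to delete, lower-left drag to store, two-finger right drag to delete all); the description itself stresses that other bindings are possible.

```kotlin
// Example gesture-to-operation mapping collected from the embodiments above.
enum class Gesture { DRAG_LEFT, DRAG_RIGHT, DRAG_LOWER_LEFT, TWO_FINGER_DRAG_RIGHT }
enum class Action { SHARE, DELETE, SAVE, DELETE_ALL }

fun actionFor(g: Gesture): Action = when (g) {
    Gesture.DRAG_LEFT             -> Action.SHARE        // share the selected image(s)
    Gesture.DRAG_RIGHT            -> Action.DELETE       // delete the dragged image
    Gesture.DRAG_LOWER_LEFT       -> Action.SAVE         // store to the album folder
    Gesture.TWO_FINGER_DRAG_RIGHT -> Action.DELETE_ALL   // remove every thumbnail
}
```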
In summary, in the embodiments of the present invention, the user takes a real-time screenshot of the video frame by sending bullet screen information and, through an operation, shares the intercepted image with other users watching the video at the same time. The user can therefore quickly intercept a picture of interest and store or share the image content while watching a video, every viewer of the video can see the pictures and bullet screen information shared by others, and operation time is reduced.
FIG. 16 is a block diagram of an electronic device according to one embodiment of the invention. The electronic device 160 shown in fig. 16 includes an image intercepting module 161, a first receiving module 162, and a first response module 163.
The image intercepting module 161 is configured to intercept a first image currently displayed on a target video interface when first barrage information is displayed on the target video interface;
a first receiving module 162, configured to receive a first input of the first image by a user;
a first response module 163 to send the first image to a target object in response to the first input.
On the basis of fig. 16, optionally, the target video interface includes a first preset area displaying at least one thumbnail, each thumbnail indicating one screenshot image;
the thumbnails are arranged in the time sequence of the screenshot images.
Optionally, the electronic device further comprises:
the second receiving module is used for receiving a second input of the thumbnail from the user;
and the second response module is used for responding to the second input and updating the thumbnail displayed in the first preset area.
Optionally, the first image includes the first bullet screen information, and the first bullet screen information and the first image are distributed on different image layers.
Optionally, the image intercepting module 161 includes:
the first receiving unit is used for receiving a third input of the first barrage information within a preset time period;
and the first response unit is used for responding to the third input and intercepting a first image currently displayed by the target video interface.
Optionally, the third input is: and dragging the first bullet screen information to slide out of the target video interface along a preset direction by a user for inputting.
Optionally, the electronic device further comprises:
the first display module is used for displaying a first identifier on the target video interface;
the third receiving module is used for receiving fourth input of the first identifier by the user;
and the third response module is used for responding to the fourth input and displaying the thumbnail of the first image in a first preset area of the target video interface.
Optionally, the electronic device further comprises:
the fourth receiving module is used for acquiring a fifth input of the user to the first preset area of the target video interface;
the fourth response module is used for responding to the five inputs, performing splicing processing on part or all of the thumbnails displayed in the first preset area, and generating a second image;
and the second display module is used for displaying the thumbnail of the second image in the first preset area and sending the second image to a target object.
Optionally, the electronic device further comprises:
a fifth receiving module, configured to receive a sixth input of the N thumbnail images displayed in the first preset area from the user;
a fifth response module, configured to perform image splicing on the N thumbnail images in response to the sixth input, and generate a third image;
the third display module is used for displaying the thumbnail of the third image in the first preset area and sending the third image to a target object;
wherein N is an integer greater than 1.
Optionally, the electronic device further comprises:
the sixth receiving module is used for receiving seventh input of second bullet screen information in a first thumbnail in a first preset area of the target video interface by a user;
and the sixth response module is used for responding to the seventh input and updating the bullet screen content of the second bullet screen information.
Optionally, the electronic device further comprises:
the fourth display module is used for displaying a thumbnail of a fourth image in a second preset area of the target video interface under the condition that third bullet screen information sent by a target user is displayed on the target video interface;
the fifth display module is used for displaying third barrage information sent by a target user on the target video interface, and displaying a thumbnail of a fourth image in a second preset area of the target video interface under the condition that eighth input of the third barrage information by the user is received;
and the fourth image is a screenshot image sent by the target user.
Optionally, the electronic device further comprises:
a seventh receiving module, configured to receive a ninth input of the fourth image by the user;
a seventh response module to store the fourth image in response to the ninth input.
The electronic device 160 can implement each process implemented by the electronic device in the method embodiments of fig. 1 to fig. 15, and details are not repeated here to avoid repetition. According to the embodiments of the invention, the electronic device takes a real-time screenshot of the video frame according to the bullet screen information sent by the user and shares the intercepted image with other users watching the video at the same time, so that the user can quickly intercept a picture of interest and store or share the image content while watching a video, every viewer of the video can see the pictures and bullet screen information shared by others, and operation time is reduced.
Fig. 17 is a schematic diagram of a hardware structure of an electronic device 1700 for implementing various embodiments of the present invention, where the electronic device 1700 includes, but is not limited to: radio frequency unit 1701, network module 1702, audio output unit 1703, input unit 1704, sensor 1705, display unit 1706, user input unit 1707, interface unit 1708, memory 1709, processor 1710, and power supply 1711. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 17 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The processor 1710 is configured to intercept a first image currently displayed by a target video interface under the condition that the first barrage information is displayed on the target video interface; receiving a first input of the first image by a user; in response to the first input, sending the first image to a target object.
In this way, the electronic device takes a real-time screenshot of the video frame according to the bullet screen information sent by the user and shares the intercepted image with other users watching the video at the same time, so that the user can quickly intercept a picture of interest and store or share the image content while watching the video, every viewer of the video can see the pictures and bullet screen information shared by others, and operation time is reduced.
It should be understood that, in the embodiment of the present invention, the rf unit 1701 may be configured to receive and transmit signals during a message transmission or a call, and specifically, receive downlink data from a base station and then process the received downlink data to the processor 1710; in addition, the uplink data is transmitted to the base station. In general, radio frequency unit 1701 includes, but is not limited to, an antenna, at least one amplifier, transceiver, coupler, low noise amplifier, duplexer, and the like. The radio frequency unit 1701 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 1702, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 1703 may convert audio data received by the radio frequency unit 1701 or the network module 1702 or stored in the memory 1709 into an audio signal and output as sound. Also, the audio output unit 1703 may provide audio output related to a specific function performed by the electronic apparatus 1700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1703 includes a speaker, a buzzer, a receiver, and the like.
Input unit 1704 is used to receive audio or video signals. The input Unit 1704 may include a Graphics Processing Unit (GPU) 17041 and a microphone 17042, the Graphics processor 17041 Processing image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1706. The image frames processed by the graphics processor 17041 may be stored in the memory 1709 (or other storage medium) or transmitted via the radio frequency unit 1701 or the network module 1702. The microphone 17042 may receive sound and may be capable of processing such sound into audio data. The processed audio data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 1701 in the case of the phone call mode.
The electronic device 1700 also includes at least one sensor 1705, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 17061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 17061 and/or the backlight when the electronic device 1700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 1706 is used to display information input by the user or information provided to the user. The Display unit 1706 may include a Display panel 17061, and the Display panel 17061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1707 includes a touch panel 17071 and other input devices 17072. Touch panel 17071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on touch panel 17071 or near touch panel 17071 using a finger, stylus, or any other suitable object or attachment). The touch panel 17071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1710, and receives and executes commands sent by the processor 1710. In addition, the touch panel 17071 can be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to touch panel 17071, user input unit 1707 may include other input devices 17072. In particular, the other input devices 17072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 17071 can be overlaid on the display panel 17061, and when the touch panel 17071 detects a touch operation on or near the touch panel, the touch operation is transmitted to the processor 1710 to determine the type of the touch event, and then the processor 1710 provides a corresponding visual output on the display panel 17061 according to the type of the touch event. Although the touch panel 17071 and the display panel 17061 are shown in fig. 17 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 17071 may be integrated with the display panel 17061 to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 1708 is an interface for connecting an external device to the electronic apparatus 1700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 1700 or may be used to transmit data between the electronic apparatus 1700 and the external device.
The memory 1709 may be used to store software programs as well as various data. The memory 1709 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1709 may include high speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1710 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 1709 and calling data stored in the memory 1709, thereby integrally monitoring the electronic device. Processor 1710 may include one or more processing units; preferably, the processor 1710 can integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1710.
The electronic device 1700 may further include a power source 1711 (e.g., a battery) for powering the various components, and preferably, the power source 1711 may be logically coupled to the processor 1710 via a power management system to manage charging, discharging, and power consumption via the power management system.
In addition, the electronic device 1700 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, and when the computer program is executed by the processor, the computer program implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (15)

1. An image processing method, comprising:
intercepting a first image currently displayed on a target video interface under the condition that first bullet screen information is displayed on the target video interface;
receiving a first input of the first image by a user;
in response to the first input, sending the first image to a target object.
2. The image processing method according to claim 1, wherein the target video interface includes a first preset area displaying at least one thumbnail, each thumbnail indicating one screenshot image;
the thumbnails are arranged in the time sequence of the screenshot images.
3. The image processing method according to claim 2, characterized in that the method further comprises:
receiving a second input of the thumbnail from the user;
and responding to the second input, and updating the thumbnail displayed in the first preset area.
4. The image processing method according to claim 1, wherein the first image includes the first bullet screen information, and the first bullet screen information and the first image are distributed in different image layers.
5. The image processing method according to claim 1, wherein the intercepting a first image currently displayed by the target video interface comprises:
receiving a third input of the user to the first bullet screen information within a preset time period;
and in response to the third input, intercepting a first image currently displayed by the target video interface.
6. The image processing method of claim 5, wherein the third input is: an input of the user dragging the first bullet screen information to slide out of the target video interface along a preset direction.
7. The image processing method according to claim 1, wherein after the step of intercepting the first image currently displayed by the target video interface, the method further comprises:
displaying a first identifier on the target video interface;
receiving a fourth input of the first identifier by the user;
and in response to the fourth input, displaying a thumbnail of the first image in a first preset area of the target video interface.
8. The image processing method according to claim 2, characterized in that the method further comprises:
receiving a fifth input of a user to a first preset area of the target video interface;
in response to the fifth input, splicing part or all of the thumbnails displayed in the first preset area to generate a second image;
and displaying a thumbnail of the second image in the first preset area and sending the second image to a target object.
9. The image processing method according to claim 2, characterized in that the method further comprises:
receiving a sixth input of the user to the N thumbnail images displayed in the first preset area;
in response to the sixth input, splicing the N thumbnail images to generate a third image;
displaying a thumbnail of the third image in the first preset area, and sending the third image to a target object;
wherein N is an integer greater than 1.
10. The image processing method according to claim 1, characterized in that the method further comprises:
receiving a seventh input of a user to second bullet screen information in a first thumbnail in a first preset area of the target video interface;
and in response to the seventh input, updating the bullet screen content of the second bullet screen information.
11. The image processing method according to claim 1, characterized in that the method further comprises:
under the condition that third bullet screen information sent by a target user is displayed on the target video interface, displaying a thumbnail of a fourth image in a second preset area of the target video interface;
or,
displaying third bullet screen information sent by a target user on the target video interface, and, in a case that an eighth input of the user to the third bullet screen information is received, displaying a thumbnail of a fourth image in a second preset area of the target video interface;
wherein the fourth image is a screenshot image sent by the target user.
12. The image processing method according to claim 11, further comprising:
receiving a ninth input of the fourth image by the user;
in response to the ninth input, storing the fourth image.
13. An electronic device, comprising:
the image intercepting module is used for intercepting a first image currently displayed on a target video interface under the condition that first bullet screen information is displayed on the target video interface;
the first receiving module is used for receiving a first input of a user to the first image;
and the first response module is used for sending the first image to a target object in response to the first input.
14. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 12.
15. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 12.
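Claims 8 and 9 above describe splicing several captured thumbnails into a single second or third image. The sketch below shows one way such a splice could be expressed in Kotlin; it is a non-normative illustration in which Frame is a hypothetical type holding a row-major pixel grid, and vertical concatenation is assumed because the claims do not fix a stitching direction.

    // Illustrative sketch only. Frame is a hypothetical screenshot type; the
    // claims do not define the pixel representation or stitching direction.
    data class Frame(val width: Int, val height: Int, val pixels: IntArray)

    // Splice N frames (N > 1) into one image by stacking them vertically.
    fun splice(frames: List<Frame>): Frame {
        require(frames.size > 1) { "N must be an integer greater than 1" }
        require(frames.all { it.width == frames[0].width }) { "frames must share a width" }

        val width = frames[0].width
        val totalHeight = frames.sumOf { it.height }
        val out = IntArray(width * totalHeight)

        var rowOffset = 0
        for (frame in frames) {
            // Copy this frame's rows directly below the frames already copied.
            frame.pixels.copyInto(out, destinationOffset = rowOffset * width)
            rowOffset += frame.height
        }
        return Frame(width, totalHeight, out)
    }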
CN201910814241.2A 2019-08-30 2019-08-30 Image processing method and electronic equipment Pending CN110602565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910814241.2A CN110602565A (en) 2019-08-30 2019-08-30 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910814241.2A CN110602565A (en) 2019-08-30 2019-08-30 Image processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN110602565A true CN110602565A (en) 2019-12-20

Family

ID=68856883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910814241.2A Pending CN110602565A (en) 2019-08-30 2019-08-30 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110602565A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180102060A1 (en) * 2013-09-30 2018-04-12 BrainPOP IP LLC System and method for managing pedagogical content
CN104661096A (en) * 2013-11-21 2015-05-27 深圳市快播科技有限公司 Video barrage adding method and device, video playing method and video player
CN107040824A (en) * 2017-04-18 2017-08-11 深圳市金立通信设备有限公司 A kind of method and terminal for sending barrage
CN108200463A (en) * 2018-01-19 2018-06-22 上海哔哩哔哩科技有限公司 The generation system of the generation method of barrage expression packet, server and barrage expression packet

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111190528B (en) * 2019-12-31 2022-09-16 维沃移动通信有限公司 Brush display method, electronic equipment and storage medium
CN111190528A (en) * 2019-12-31 2020-05-22 维沃移动通信有限公司 Brush display method, electronic equipment and storage medium
CN111405344A (en) * 2020-03-18 2020-07-10 腾讯科技(深圳)有限公司 Bullet screen processing method and device
CN111600931A (en) * 2020-04-13 2020-08-28 维沃移动通信有限公司 Information sharing method and electronic equipment
CN111954079A (en) * 2020-05-27 2020-11-17 维沃移动通信有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN111954079B (en) * 2020-05-27 2023-05-26 维沃移动通信有限公司 Image processing method, device, electronic equipment and medium
CN111880675A (en) * 2020-06-19 2020-11-03 维沃移动通信(杭州)有限公司 Interface display method and device and electronic equipment
CN111880675B (en) * 2020-06-19 2024-03-15 维沃移动通信(杭州)有限公司 Interface display method and device and electronic equipment
CN111726676A (en) * 2020-07-03 2020-09-29 腾讯科技(深圳)有限公司 Image generation method, display method, device and equipment based on video
CN112099706A (en) * 2020-09-04 2020-12-18 深圳市欢太科技有限公司 Page display method and device, electronic equipment and computer readable storage medium
CN112269524A (en) * 2020-10-30 2021-01-26 维沃移动通信有限公司 Screen capturing method, screen capturing device and electronic equipment
CN112764632A (en) * 2020-12-28 2021-05-07 维沃移动通信有限公司 Image sharing method and device and electronic equipment
CN115623227A (en) * 2021-07-12 2023-01-17 北京字节跳动网络技术有限公司 Live video photographing method, device and equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
WO2019137429A1 (en) Picture processing method and mobile terminal
CN110602565A (en) Image processing method and electronic equipment
CN110087117B (en) Video playing method and terminal
CN110096326B (en) Screen capturing method, terminal equipment and computer readable storage medium
CN108491129B (en) Application program management method and terminal
CN109343755B (en) File processing method and terminal equipment
CN110933511B (en) Video sharing method, electronic device and medium
CN109525874B (en) Screen capturing method and terminal equipment
CN110196667B (en) Notification message processing method and terminal
CN110933306A (en) Method for sharing shooting parameters and electronic equipment
CN109828706B (en) Information display method and terminal
CN108174103B (en) Shooting prompting method and mobile terminal
CN108920226B (en) Screen recording method and device
CN109862266B (en) Image sharing method and terminal
CN109739407B (en) Information processing method and terminal equipment
CN108228902B (en) File display method and mobile terminal
CN110196668B (en) Information processing method and terminal equipment
CN108108079B (en) Icon display processing method and mobile terminal
CN110865745A (en) Screen capturing method and terminal equipment
CN108121486B (en) Picture display method and mobile terminal
CN109271262B (en) Display method and terminal
CN111770374B (en) Video playing method and device
CN108804628B (en) Picture display method and terminal
CN111383175A (en) Picture acquisition method and electronic equipment
CN108132749B (en) Image editing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191220