CN114546229A - Information processing method, screen capturing method and electronic equipment - Google Patents

Information processing method, screen capturing method and electronic equipment

Info

Publication number
CN114546229A
Authority
CN
China
Prior art keywords
window
user
target
information
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210043272.4A
Other languages
Chinese (zh)
Other versions
CN114546229B (en)
Inventor
史文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202210043272.4A
Publication of CN114546229A
Application granted
Publication of CN114546229B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide an information processing method, a screen capture method, and an electronic device. The information processing method includes the following steps: in response to a first operation triggered by a user while browsing an application page, displaying a plurality of information acquisition mode options on an interface, the page having at least one window; determining a target option selected by the user from the plurality of information acquisition mode options; determining a target area in response to a second operation performed by the user on a target window displaying first multimedia information; acquiring the content of the first multimedia information displayed in the target area according to the information acquisition mode corresponding to the target option; and generating second multimedia information based on the acquired content. The scheme provided by the embodiments of the present application integrates the functions of multiple information acquisition modes, so user operation is convenient, no learning is needed, and information acquisition is efficient.

Description

Information processing method, screen capturing method and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to an information processing method, a screen capture method, and an electronic device.
Background
Dynamic content such as short videos, long videos, animations, and animated emoticons is increasingly abundant, and when users see interesting video clips, animations, or emoticons they often want to save them or share them with friends. For example, when a user sees a picture to be saved, the user can trigger a screen capture through a function control provided by the mobile phone system or through a key combination on the mobile phone (such as pressing the power key and a volume key simultaneously) to capture the picture displayed on the screen. When a user sees a video to be recorded, the user can trigger screen recording through a function control provided by the mobile phone system to record the video picture on the screen. The user then has to use specialized software to process the captured picture or recorded video, for example to crop or clip it. Therefore, in the prior art, the user's operation process is complicated and inconvenient.
Disclosure of Invention
In view of the problems in the prior art, embodiments of the present application provide an information processing method, a screen capture method, and an electronic device that are simple and convenient to operate.
Specifically, in one embodiment of the present application, an information processing method is provided. The method comprises the following steps:
in response to a first operation triggered by a user while browsing an application page, displaying a plurality of information acquisition mode options on an interface, wherein the page has at least one window;
determining a target option selected by the user from the plurality of information acquisition mode options;
determining a target area in response to a second operation performed by the user on a target window displaying first multimedia information;
acquiring the content of the first multimedia information displayed in the target area according to an information acquisition mode corresponding to the target option; and
generating second multimedia information based on the acquired content.
In another embodiment of the present application, a screen capture method is provided. The screen capture method comprises the following steps:
in response to an operation triggered by a user while browsing an application page, displaying a plurality of screen capture mode options on an interface, wherein the page has at least one window;
determining a target option selected by the user from the plurality of screen capture mode options;
highlighting, on the interface, a screen capture area in a target window determined by the user, wherein the target window displays first multimedia information;
capturing the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option; and
generating second multimedia information based on the captured content.
In an embodiment of the present application, an electronic device is provided. The electronic device comprises a processor and a memory, wherein the memory is used for storing one or more computer instructions; the processor, coupled with the memory, is configured to execute the one or more computer instructions to implement the steps in the above-mentioned information processing method embodiment or the steps in the above-mentioned screen capture method embodiment.
In an embodiment of the present application, a computer program product is provided. The computer program product comprises computer programs or instructions which, when executed by a processor, cause the processor to carry out the steps in the above-mentioned information processing method embodiment or the steps in the above-mentioned screen capturing method embodiment.
Embodiments of the present application provide a computer-readable storage medium storing a computer program, where the computer program can implement the method steps or functions provided by the above embodiments when executed by a computer.
According to the technical scheme provided by the embodiments of the present application, multiple information acquisition functions are added to an application program (APP). While browsing an application page, a user who sees images, text, video, or other information of interest on the interface can trigger acquisition within the application, select one of multiple information acquisition modes, and acquire the content in a target area of a target window on the interface; after the content in the target area desired by the user is acquired, the application can also automatically generate second multimedia information based on the acquired content. Throughout the process, the user never needs to leave the application: selecting the acquisition mode, acquiring the information, and saving it are all completed within the application without any third-party software. Therefore, with the scheme provided by the embodiments of the present application, user operation is convenient, no learning is needed, and information acquisition is efficient. In addition, because the scheme integrates the functions corresponding to multiple information acquisition modes, the user does not need to install multiple pieces of functional software on the client or jump back and forth between such software and the application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic diagram illustrating a client implementing an information processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an information processing method according to an embodiment of the present application;
fig. 3 is a schematic view of an operation interface corresponding to an information processing method according to an embodiment of the present application;
fig. 4 is a schematic view of an operation interface corresponding to an information processing method according to another embodiment of the present application;
fig. 5 is a schematic diagram of intercepting an audio segment based on text information acquired from a target area in an information processing method according to an embodiment of the present application;
fig. 6 is a schematic diagram of acquiring part of the video content and part of the audio content of first multimedia information to obtain second multimedia information in an information processing method according to an embodiment of the present application;
fig. 7 is an exemplary diagram illustrating two deformable frames disposed on an interface in an information processing method according to an embodiment of the present application;
fig. 8 is a schematic flowchart of an information processing method according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an information processing apparatus according to another embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, the claims, and the above figures of the present application include operations that occur in a particular order; however, these operations may be performed out of the order in which they appear herein or in parallel. Sequence numbers such as 101 and 102 are used merely to distinguish different operations and do not by themselves represent any order of execution. In addition, a flow may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that terms such as "first" and "second" herein are used to distinguish different messages, devices, modules, and the like; they do not represent a sequence, nor do they require that the items labeled "first" and "second" be of different types. The embodiments described below are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The information processing method and the screen capture method provided in the following embodiments of the present application may be applied to a client, and the client 11 may be any electronic device with a network connection function. As shown in fig. 1, the client 11 may be a mobile device such as a personal computer, a tablet computer, a smart phone, a personal digital assistant (PDA), or a smart wearable device, or a fixed device such as a desktop computer or a digital TV. The client 11 has an application program (APP) installed on it, such as an e-commerce application, a social application, an instant messaging application, a video application, or an audio application. For example, as shown in fig. 1, the client 11 is installed with an e-commerce application; after the user opens the application on the client, the user can see merchants' commodity pictures, commodity recommendation texts, commodity promotion videos, and the like. For another example, if the application is a social application, after the user opens it on the client device, the user can view videos, pictures, texts, music, and the like shared by users on the network side or provided by the social platform. The application installed on the client has the functions corresponding to the methods provided by the embodiments of the present application; accordingly, after seeing a picture, text, audio, or video of interest, the user does not need to leave the application and can, via a control displayed on the client interface, trigger the acquisition of a clip of a video, a picture, a local region of a picture, an audio segment, and the like from the application page.
And the client can download the application program with the function corresponding to the scheme provided by the embodiments of the application from the server side. The application program developer can also continuously add more information acquisition modes in the application program, and after one or more information acquisition modes are newly added, an update data packet of the application program can be generated. The update package of the application is deployed at the server, and the client 11 may actively obtain the update package from the server, or the server sends a notification to the client device when there is an update package, so that the client downloads the update package from the server in time.
The client 11 and the server communicate over a network. In at least one embodiment of the present application, data is transmitted between the client 11 and the server according to a preset protocol. The preset protocol may include, but is not limited to, any one of the following: HTTP (Hypertext Transfer Protocol), HTTPS (HTTP over a secure connection), and the like. In at least one embodiment of the present application, the server may be a single server, a server group formed by several functional servers, a virtual server, or a cloud.
Based on the hardware basis, the present application provides the following method embodiments, apparatuses, and device embodiments to explain the technical solutions provided in the present application in more detail.
Fig. 2 shows a schematic flowchart of an information processing method according to an embodiment of the present application. The execution subject of the method provided by the present embodiment is the client shown in fig. 1. The client may be, but is not limited to, a smart phone, a PC (personal computer), a mobile computer, a tablet computer, a personal digital assistant (PDA), a smart television, or a smart wearable device (e.g., a smart band or a smart device embedded in clothing or accessories), which is not limited herein. As shown in fig. 2, the information processing method includes the following steps:
101. responding to a first operation triggered by a user when browsing an application program page, and displaying a plurality of information acquisition mode options on an interface; wherein the page has at least one window.
102. And determining a target option selected by the user in the multiple information acquisition mode options.
103. And determining the target area in response to a second operation of the user on a target window displaying the first multimedia information.
104. And acquiring the content of the first multimedia information displayed in the target area according to the information acquisition mode corresponding to the target option.
105. And generating second multimedia information based on the acquired content.
In the foregoing 101, the application program may be an e-commerce application program (APP), a social contact application program, a video application program, a music application program, and the like, which is not limited in this embodiment. The multiple information acquisition modes may include, but are not limited to: static acquisition mode, dynamic acquisition mode, etc. The static acquisition mode can be further refined into various modes, such as acquisition according to a specified area, acquisition according to a specified object (such as specified acquisition of a human figure, an animal figure and a commodity image), and the like. Similarly, the dynamic acquisition mode may be further refined into multiple modes, for example, dynamic acquisition according to a specified region, and dynamic acquisition according to a specified object (for example, specifying acquisition of a human figure, an animal figure, and a commodity figure).
Acquisition by a designated area can be simply understood as: acquiring the content displayed in a fixed area designated by the user or in a default area of the application. Acquisition by a designated object can be simply understood as: identifying a user-designated object (a person, an animal, a commodity, or the like) in the first multimedia information displayed in the target window, and acquiring the image information corresponding to the designated object according to the identification result.
In an implementable embodiment, the steps related to the user interaction on the front end side corresponding to step 101 in this embodiment can be seen as follows. As shown in fig. 3, step 101, "in response to a first operation triggered by a user while browsing an application page, displaying a plurality of information acquisition mode options on an interface" includes:
1011. an information acquisition control is displayed on the interface;
1012. responding to the first operation that a user touches the information acquisition control, and displaying a popup window floating on the application program page on the interface;
1013. and the popup window displays the multiple information acquisition mode options.
Referring to the interface a shown in fig. 3, an information obtaining control (not specifically shown) may be displayed on the interface a. The first operation of the user touching the information acquisition control may include, but is not limited to: click operations, slide operations, and the like. Or, there may be no information acquisition control on the interface a that can be seen by the user; for example, two fingers of the user press the screen at the same time, or one finger of the user double-clicks at any position of the interface a, or the two fingers of the user slide towards each other, and the like, to trigger the display of the popup window 2 floating on the application page. The example shown in fig. 3 is a case where the method provided by the embodiment of the present application is applied to a screenshot scenario. That is, the static information obtaining manner in this embodiment may include a picture screen capturing manner; the dynamic information acquisition mode may include a dynamic screen capture mode. In essence, the information obtaining method in this embodiment is not limited to screen capturing, and may further include: text content recognition acquisition, text segment content recognition extraction, audio segment information acquisition, video framing, etc., while fig. 3 only shows the case of screen shots.
As shown in fig. 3, as long as the user clicks one option in the popup window, the clicked option is the selected target option. That is, step 102 in this embodiment may specifically be: and responding to the selection operation of the user for the multiple information acquisition mode options, and taking the option selected by the user as the target option.
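As a rough illustration of steps 101 and 102 only, the client could model the popup options and the user's selection as in the following minimal sketch; all class and function names here are illustrative assumptions, not part of this disclosure.

```kotlin
// Minimal sketch of steps 101-102; all names are illustrative assumptions.
enum class AcquisitionMode { STATIC_BY_AREA, STATIC_BY_OBJECT, DYNAMIC_BY_AREA, DYNAMIC_BY_OBJECT }

data class AcquisitionOption(val label: String, val mode: AcquisitionMode)

// Options listed in the popup window 2 floating over the application page.
val popupOptions = listOf(
    AcquisitionOption("Picture screenshot", AcquisitionMode.STATIC_BY_AREA),
    AcquisitionOption("Dynamic screenshot", AcquisitionMode.DYNAMIC_BY_AREA)
)

// The option the user taps becomes the target option (step 102).
fun onOptionSelected(index: Int): AcquisitionOption = popupOptions[index]
```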
An application page may include only one window, as shown in FIG. 4; multiple windows may also be included as shown in fig. 3.
In the above 103, the target window may be one of the at least one window. The target window can be designated by the user through an interface operation or determined automatically by the system. For example, if the information acquisition mode corresponding to the target option selected by the user is suitable for video information and only one of the windows displays video information, the system can automatically determine that window as the target window. For another example, the user may designate the target window by moving a deformable frame on the interface; as shown in interface b of fig. 3, the user moves the deformable frame 3, and the window corresponding to the position where the deformable frame 3 stays after the movement is the target window designated by the user. For another example, the user triggers a click operation on a certain window, and the clicked window can be regarded as the target window designated by the user; and so on, which are not listed exhaustively in this embodiment. The first multimedia information may include, but is not limited to, at least one of the following: pictures, text, animated images, video, and the like. The second operation on the target window may include a region selection operation, a region confirmation operation, and the like. The target area may be a local area of the target window or the entire area within the target window.
Various information acquisition methods based on the above description may include, but are not limited to: static acquisition mode and dynamic acquisition mode. Correspondingly, in step 104, "obtaining the content of the first multimedia information displayed in the target area according to the information obtaining manner corresponding to the target option" in this embodiment may specifically include:
1041. when the information acquisition mode corresponding to the target option is a static acquisition mode, acquiring the content of the first multimedia information presented in the target area at a moment according to the static acquisition mode;
1042. and when the information acquisition mode corresponding to the target option is a dynamic acquisition mode, continuously acquiring the content of the first multimedia information presented in the target area at different moments according to the parameters set in the dynamic acquisition mode.
In some embodiments, a video file generally consists of header information followed by a list of image frames. The data of each image frame is restored into a bitmap and then copied to the graphics card of the client device for display. The process is reversible; that is, a bitmap can likewise be converted into a frame in a video format (similar to a decoder and an encoder). In this way, the bitmap of a certain area (i.e., the target area in this embodiment) of the whole screen or of the screen interface can be captured at intervals, and the captured bitmaps, together with the time corresponding to each bitmap, can then be stored (for example, cached). That is, the content acquired in this embodiment may be the bitmaps captured at different times and the time corresponding to each bitmap. In step 105 of this embodiment, continuous frames of a video file may be constructed based on the acquired content (the bitmaps captured at different times and their corresponding times), and the corresponding header information may be reconstructed to generate the second multimedia information (or file). The time interval here may be set manually or may be a default value, such as 20 milliseconds or 10 milliseconds. Alternatively, in other realizable embodiments, the content in the target area may be obtained not in a screen capture manner but by directly copying the corresponding content from memory or from the graphics card's memory.
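The interval-capture idea described above could look roughly like the following minimal sketch; `captureRegionBitmap` is a hypothetical platform call standing in for whatever mechanism actually reads the target-area bitmap, and the encoding of the frame list into a real container format is omitted.

```kotlin
// Sketch only: capture the target-area bitmap at a fixed interval and keep each
// bitmap with its timestamp, so continuous video frames can be assembled later.
data class CapturedFrame(val timestampMs: Long, val bitmap: ByteArray)

fun captureFrames(
    durationMs: Long,
    intervalMs: Long = 20,                     // e.g. the 20 ms default mentioned above
    captureRegionBitmap: () -> ByteArray       // hypothetical call that reads the target area
): List<CapturedFrame> {
    val frames = mutableListOf<CapturedFrame>()
    val start = System.currentTimeMillis()
    while (System.currentTimeMillis() - start < durationMs) {
        frames.add(CapturedFrame(System.currentTimeMillis() - start, captureRegionBitmap()))
        Thread.sleep(intervalMs)
    }
    // frames can now be encoded as the frame list of a video file, with header info rebuilt.
    return frames
}
```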
The step 105 of generating the second multimedia information based on the obtained content may include the following steps:
1051. displaying a plurality of storage format options on the interface;
1052. responding to the selection operation triggered by the user aiming at the multiple storage format options, and determining a target storage format;
1053. and storing the second multimedia information generated based on the acquired content by adopting the target storage format.
Continuing with the example shown in fig. 3, as in interface c of fig. 3, a floating window 4 is displayed on the interface above the application page, and a plurality of save format options are displayed in the floating window 4. For content acquired in the dynamic information acquisition manner, the save formats displayed in the floating window 4 may include, but are not limited to, GIF (Graphics Interchange Format), MP4 (MPEG-4, a format defined by the Moving Picture Experts Group), and the like, as shown in fig. 3. For content acquired in the static information acquisition manner, the save formats displayed in the floating window 4 may include, but are not limited to, the JPEG (Joint Photographic Experts Group) format, the PNG (Portable Network Graphics) format, and the like.
The user can complete the selection operation by checking behind the corresponding option and clicking the 'confirm' control.
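A minimal sketch of steps 1051-1053 is given below; the function names are illustrative assumptions, and the actual storage call is abstracted as a parameter.

```kotlin
// Sketch of steps 1051-1053 (illustrative names only): offer save formats that
// match how the content was acquired, then save in the user's chosen format.
fun saveFormatOptions(isDynamic: Boolean): List<String> =
    if (isDynamic) listOf("GIF", "MP4") else listOf("JPEG", "PNG")

fun onSaveConfirmed(
    chosenFormat: String,
    generatedContent: ByteArray,
    save: (format: String, data: ByteArray) -> Unit   // hypothetical storage call
) {
    // The second multimedia information generated from the acquired content is
    // stored using the target save format selected by the user.
    save(chosenFormat, generatedContent)
}
```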
Fig. 4 shows a page with a single window. Like fig. 3, fig. 4 is also an example of a screen capture scenario. An information acquisition control, namely the screen capture control in fig. 4, is displayed on the interface. After the user touches the screen capture control, a popup window 2 is displayed on the interface, and multiple information acquisition modes, such as picture screenshot and dynamic screenshot, are displayed in the popup window 2. In response to a selection operation triggered by the user for the multiple information acquisition modes, the dynamic screenshot option is determined as the target option. The target area is determined according to the user's operation on the window (such as an operation of adjusting the deformable frame 3); specifically, the target area is the area framed by the deformable frame 3. Once the target area is determined, the user can click the "confirm" control below the deformable frame 3 to start acquiring the content in the target area in the dynamic screenshot manner. After the acquisition has started, the user can click the "confirm" control below the deformable frame 3 again to trigger the acquisition to stop. Alternatively, after the user clicks the "confirm" control below the deformable frame 3 to start acquiring the content in the target area in the dynamic screenshot manner, the control below the deformable frame 3 may change to a "stop" control (not shown in fig. 4), and the user may click the "stop" control to stop the acquisition. After stopping, as shown in fig. 4, a floating window 4 is displayed on the interface, and a plurality of save format options are displayed in the floating window 4. Likewise, the user may select the save format through a check operation. After the user finishes checking and clicks the "confirm" control, the second multimedia information generated based on the acquired content is saved in the save format selected by the user.
According to the technical scheme provided by this embodiment, multiple information acquisition functions are added to an application program (APP). While browsing an application page, a user who sees pictures, text, video, or other information of interest on the interface can trigger acquisition within the application, select one of multiple information acquisition modes, and acquire the content in a target area of a target window on the interface; after the content in the desired target area is acquired, the application can also automatically generate second multimedia information based on the acquired content. Throughout the process, the user does not need to leave the application: selecting the acquisition mode, acquiring the information, and saving it are all completed within the application without any third-party software. Therefore, with the scheme provided by this embodiment, user operation is convenient, no learning is needed, and information acquisition is efficient. In addition, because the scheme integrates the functions corresponding to multiple information acquisition modes, the user does not need to install multiple pieces of functional software on the client or jump back and forth between such software and the application.
Further, in the information processing method provided in this embodiment, the first multimedia information includes display information for displaying on the target window, and audio information associated with the display information. The display information for displaying on the target window may include, but is not limited to: pictures, text, icons, motion pictures, videos, etc. Correspondingly, in this embodiment, the method may further include:
106. intercepting an audio segment between a first moment and a second moment in the audio information so as to generate the second multimedia information according to the acquired content and the audio segment; the first moment is the moment when the content presented in the target area is started to be acquired, and the second moment is the moment when the content presented in the target area is stopped to be acquired; or
107. And intercepting an audio segment which is adapted to the acquired content in the audio information according to the acquired content.
The scheme corresponding to the step 106 may specifically be:
1061. In response to a start-acquisition instruction triggered by the user for the content in the target area, determining the first moment, and triggering step 104, i.e., acquiring the content of the first multimedia information displayed in the target area according to the information acquisition mode corresponding to the target option.
1062. In response to a stop-acquisition instruction triggered by the user for the content in the target area, determining the second moment, and stopping the execution of step 104, i.e., acquiring the content of the first multimedia information displayed in the target area according to the information acquisition mode corresponding to the target option.
Specifically, in an implementable embodiment, the step 107 "intercepting, according to the obtained content, an audio segment in the audio information adapted to the obtained content" may include:
1071. and if the acquired content is a single image, identifying text information in the image, and intercepting an audio segment from audio information based on the identified text information.
This step 1071 is suitable for the example shown in fig. 5. For instance, the application in this embodiment is a music application: the user clicks the play control corresponding to a song, the application plays the music, and the lyric text shown in fig. 5 is displayed on the lyric interface. In fig. 5, the sentences of the lyric text are illustrated by lines of different lengths. Each sentence of the lyric text is displayed in the display area in a scrolling manner following the music playback progress, and the sentence, and the words within it, corresponding to the current audio playback position are highlighted; that is, the display of the lyric text is synchronized with the playback of the audio information. The lyric text can scroll in step with the playback progress of the audio information, and the sentences and the words within them can be highlighted in sequence, because each sentence of the lyric text corresponds to an associated audio frame in the audio information. Based on this feature, this embodiment can identify lyric sentences by recognizing the text information in the image (e.g., the image in the target area determined by the deformable frame 3 in fig. 5). Assuming that the 5 lyric sentences shown in fig. 5 are recognized, the audio segments corresponding to these 5 sentences are intercepted from the audio information according to the correspondence between sentences and audio frames.
In the prior art, if a user wants to intercept an audio segment, the user has to download the audio, quit the application, start third-party audio clipping software, and use that software to clip out the desired segment. In this embodiment, if the user particularly likes the audio segment corresponding to a certain paragraph or a few sentences of the lyrics, the corresponding audio segment can be cut out automatically simply by capturing that lyric text on the interface. The intercepted audio segment can then be shared with friends, used as background music for a homemade short video, and so on.
1072. If the obtained content is a plurality of continuous images, determining video frames corresponding to a first image and a last image in the plurality of continuous images respectively, and intercepting corresponding audio segments from audio information synchronously played with the video information according to the video frame corresponding to the first image and the video frame of the last image.
This step 1072 is suitable for the example shown in fig. 6, where the first multimedia information includes audio information and video information. The video frames of the video information are played in the target window through the display screen of the client, and the audio information is played through the audio playback device of the client in synchronization with the video information. If both the start and the stop of acquiring the content in the target area are triggered by the user through interface operations, the audio segment can be intercepted directly through step 106. In addition, in the solution provided by the present application, an information acquisition data size may be set in the information acquisition mode corresponding to the target option, so that the content of the first multimedia information displayed in the target area is acquired according to that mode and the acquisition ends once the set data size is reached. For example, the information acquisition mode corresponding to the target option may set the number of video frames to acquire (e.g., 30 frames, 60 frames, etc.) or the acquisition duration (e.g., 30 seconds, 1 minute, etc.). In this case, the user is not required to trigger a stop action. Because there is no stop instruction from the user, in implementation the audio segment between the two corresponding frames can be intercepted from the audio information according to the video frame corresponding to the first image and the video frame corresponding to the last image in the plurality of continuous images.
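Steps 1071 and 1072 could be sketched roughly as follows, under the stated assumptions that the player exposes a sentence-to-audio timeline and a frame-to-timestamp lookup; the OCR step and the `cutAudio` call are hypothetical placeholders, not a real API.

```kotlin
// Sketch of steps 1071 and 1072; the lyric timeline, frame-timestamp lookup, and
// audio-interception call are assumed to be provided by the player.
data class LyricLine(val text: String, val startMs: Long, val endMs: Long)

// 1071: recognize lyric sentences in the captured image, then cut the audio span
// covering the first and last matched sentence.
fun clipAudioForLyrics(
    recognizedSentences: List<String>,        // text recognized in the target-area image
    lyricTimeline: List<LyricLine>,           // sentence-to-audio mapping kept by the player
    cutAudio: (startMs: Long, endMs: Long) -> ByteArray   // hypothetical audio-interception call
): ByteArray? {
    val matched = lyricTimeline.filter { line ->
        recognizedSentences.any { it.contains(line.text) || line.text.contains(it) }
    }
    if (matched.isEmpty()) return null
    return cutAudio(matched.minOf { it.startMs }, matched.maxOf { it.endMs })
}

// 1072: locate the timestamps of the first and last captured video frames, then
// intercept the matching span of the synchronously played audio.
fun clipAudioForFrames(
    firstFrameIndex: Int,
    lastFrameIndex: Int,
    frameTimestampMs: (frameIndex: Int) -> Long,          // assumed frame-to-time lookup
    cutAudio: (startMs: Long, endMs: Long) -> ByteArray
): ByteArray = cutAudio(frameTimestampMs(firstFrameIndex), frameTimestampMs(lastFrameIndex))
```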
Further, as shown in fig. 3 and fig. 4, step 103 "determining the target area in response to the second operation of the user on the target window displaying the first multimedia information" in this embodiment may specifically be:
1031. and a deformable frame is displayed on the target window.
1032. And displaying the content in the deformable frame area and the content outside the deformable frame in a distinguishing manner.
1033. In response to the second operation of changing the deformable frame and/or confirming the deformable frame by the user, determining the region framed by the deformable frame as the target region.
As shown in fig. 3 and 4, the deformable frame may be rectangular, circular, elliptical, trapezoidal, pentagonal, hexagonal, or another shape, which is not limited in this embodiment. A plurality of adjustment controls, such as the small square controls in fig. 3 and 4, are arranged on the edge of the deformable frame. When the user drags any one of the adjustment controls, the deformable frame deforms automatically according to the direction and distance of the drag. For example, as shown in fig. 3 and 4, the deformable frame is a rectangle with an adjustment control at each of its four corners, and the user can drag the adjustment control at any corner.
If the size, shape, and position of the deformable frame are already what the user wants, the user does not need to adjust it and can confirm the current deformable frame through the confirmation control on the interface.
If any of the size, shape, or position of the deformable frame is not appropriate, the user can reshape and move it; once the deformable frame meets the user's requirements, the user confirms the current deformable frame through the confirmation control on the interface.
Of course, the deformable frame in this embodiment may also be in an irregular shape, a plurality of adjustment controls (or movable boundary points) may be disposed on the frame edge line of the deformable frame, and the user may change the shape and size of the deformable frame by adjusting the adjustment controls.
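As a minimal sketch of steps 1031-1033 for the rectangular case only (other shapes would need a different representation, and all names here are illustrative), the deformable frame could be modelled like this:

```kotlin
// Sketch of steps 1031-1033, modelling a rectangular deformable frame with
// draggable corner controls.
data class Region(var left: Int, var top: Int, var right: Int, var bottom: Int)

class DeformableFrame(val region: Region) {
    // Dragging the control at one corner by (dx, dy) reshapes the frame.
    fun dragCorner(cornerX: Int, cornerY: Int, dx: Int, dy: Int) {
        if (cornerX == region.left) region.left += dx else region.right += dx
        if (cornerY == region.top) region.top += dy else region.bottom += dy
    }

    // When the user confirms the frame, the framed region becomes the target area (1033).
    fun confirm(): Region = region
}
```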
Still further, as shown in fig. 3 and 4, the method provided by this embodiment may further include the following steps:
108. and acquiring at least one window presented on the interface after the target option is selected by the user.
109. Among the at least one window, a candidate window is determined.
110. And displaying a deformable frame matched with the candidate window at the position of the candidate window.
111. And if the operation that the user moves the deformable frame on the interface is monitored, moving the deformable frame to a target window adjacent to the candidate window according to the direction of the user movement operation, and adapting to the target window.
As shown in fig. 3, a page may have multiple windows. After the user selects the target option (the interface shown in fig. 3), a candidate window may be pre-selected in order to simplify the user's operation, and a deformable frame adapted to that candidate window is then displayed at its position. If the candidate window is the target window the user wants, the user does not need to move the deformable frame; if it is not, the user moves the deformable frame. The shapes and/or sizes of the windows on a page in this embodiment may differ from one another. After the candidate window is determined, a deformable frame adapted to it can be displayed on it automatically, which reduces the user's operations and avoids inconvenience. Suppose instead that the displayed deformable frame were fixed, for example a frame of the same shape and size at a fixed position on the interface. Because the layout of the windows is not fixed, such a frame might not fall on any single window but straddle several windows, and the user would have to move it and then adjust its shape and size. With the automatic window-adaptation function added, the deformable frame adjusts itself automatically according to the shape, size, and position of the window at its location; if the user only wants to capture part of the area within the window, only a small adjustment of the deformable frame is needed, which is simple and fast.
The candidate windows in 109 may be randomly selected or determined according to some related information. For example, in the present embodiment, the step 109 "determining a candidate window in the at least one window" may specifically be:
109a, determining one window in the middle of the interface in the at least one window as the candidate window; or
109b, determining a candidate window in said at least one window based on said target option selected by said user; or
109c, obtaining the associated information of the user, and determining a candidate window in the at least one window according to the associated information.
For example, in the case shown in fig. 3, a plurality of windows are displayed on the interface, and two of them are located in the middle of the interface, namely a first window 1a on the left and a second window 1b on the right. In this case, either of these two windows may be determined as the candidate window.
In 109b, the plurality of information acquisition modes include a static acquisition mode and a dynamic acquisition mode. The static acquisition mode is suitable for acquiring picture information, and the dynamic acquisition mode is suitable for acquiring videos, animated images, and the like. Therefore, in a specific implementation, the candidate window can be determined according to the information acquisition mode corresponding to the target option selected by the user. For example, in the current interface in fig. 3, a commodity picture of a first commodity is displayed in the first window 1a, and a model try-on animation or short video of a second commodity is displayed in the second window 1b. If the target option selected by the user is the static acquisition mode, the user most likely wants to acquire a picture, and the first window 1a may be determined as the candidate window. If the target option selected by the user is the dynamic acquisition mode, the user most likely wants to acquire an animated image or a short video, and the second window 1b may be determined as the candidate window.
Step 109b, determining a candidate window in the at least one window based on the target option selected by the user, may include:
if the information acquisition mode corresponding to the target option is a dynamic acquisition mode, determining one window showing a dynamic image or video in the at least one window as the candidate window;
and if the information acquisition mode corresponding to the target option is a static acquisition mode, determining one window with texts and/or pictures in the at least one window as the candidate window.
Alternatively, as in step 109c above, the candidate windows may also be determined based on user preferences. For example, the associated information of the user, such as the operation behavior information of the user on the application program in the historical period, is obtained. In a specific implementation, a window that may be preferred by a user may be determined as a candidate window in the at least one window according to the association information.
For example, a user historically obtains videos of a certain type of goods (e.g., washing products) many times, and if a window on the interface displays the videos of the certain type of goods (e.g., washing products), the window can be used as a candidate window.
The step 109a has a certain randomness and the steps 109b and 109c determine or guess the user's needs based on some information. Of course, when there is only one window on the interface, the window is a candidate window.
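A minimal sketch combining 109a-109c follows; the window metadata and the history-derived preferences are assumed to be available to the client, and all names are illustrative.

```kotlin
// Sketch of candidate-window selection per 109a-109c.
data class WindowInfo(
    val id: String,
    val showsVideoOrAnimation: Boolean,
    val category: String,
    val isCentered: Boolean
)

fun pickCandidateWindow(
    windows: List<WindowInfo>,
    dynamicModeSelected: Boolean,
    preferredCategories: Set<String> = emptySet()   // e.g. derived from historical behavior (109c)
): WindowInfo? {
    if (windows.size == 1) return windows.first()
    return windows.firstOrNull { it.category in preferredCategories }                 // 109c
        ?: windows.firstOrNull { it.showsVideoOrAnimation == dynamicModeSelected }    // 109b
        ?: windows.firstOrNull { it.isCentered }                                      // 109a
}
```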
In addition, in the step 110, at the position of the candidate window, the displayed deformable frame adapted to the candidate window may be a deformable frame completely attached to the boundary of the candidate window, or a deformable frame that is inwardly and proportionally shrunk by a set size along the boundary of the candidate window, and the like, which is not limited in this embodiment.
Further, in the technical solution provided in this embodiment, a single acquisition can also capture the content of the corresponding target areas in multiple target windows at the same time, as shown in fig. 7. In implementation, for example, multiple screen capture mode options and a configuration box corresponding to each option are displayed on the interface. The user can fill parameters directly into a configuration box, or click the configuration box and select the desired parameters from a drop-down menu. For example, in the example shown in fig. 7, two options are displayed in the floating window: picture screenshot and dynamic screenshot. The configuration box corresponding to the picture screenshot may include, but is not limited to, the number acquired at one time (which may correspond to the number of deformable frames). The configuration box corresponding to the dynamic screenshot may include, but is not limited to, the acquisition frame rate, the acquisition duration (e.g., 30 s, 1 minute, etc.), the total number of frames to acquire, and the like.
Referring to fig. 7, if the user selects the picture screenshot and enters "2" as the number acquired at one time, then after the user clicks "confirm" on the floating window the interface changes to the right-hand interface in fig. 7, and two deformable frames 3, each adapted to its corresponding candidate window, are displayed at the two candidate windows on the interface. In this way, after the user adjusts the two deformable frames 3 once, the content in the target areas framed by the deformable frames 3 is captured according to the picture screenshot mode selected by the user, and the user obtains two screenshots in a single operation.
Still further, the configuration boxes corresponding to the two capture modes, picture screenshot and dynamic screenshot, may also include a capture-object parameter. For example, if the user selects the picture screenshot and fills in "person" as the capture-object parameter, then after the user clicks "confirm" on the floating window, one or more deformable frames adapted to the outlines of the persons in a candidate window (whose displayed content must include a person) are displayed in that window. The number of deformable frames may equal the number of persons displayed in the candidate window. For the client, after the user clicks "confirm", a window whose displayed content contains a person is selected as the candidate window, the person outlines in the candidate window are then recognized, and deformable frames adapted to those outlines are generated from the recognition result.
Alternatively, after the user clicks "confirm", the interface is displayed as shown in fig. 3, fig. 4, or fig. 7, with a deformable frame adapted to the candidate window displayed on the candidate window; the action of "recognizing the person outlines in the candidate window and generating deformable frames adapted to those outlines" may then be triggered after the user confirms that the candidate window is the target window.
Of course, if the user does not configure the capture-object parameter, the deformable frame displayed on the candidate window may be a regular shape, such as the rectangle, circle, ellipse, or pentagon mentioned above.
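The configuration boxes described above could be modelled roughly as in the sketch below, under the assumption that each option carries its own parameter set; the field names are illustrative only.

```kotlin
// Sketch of the configuration boxes for the two capture modes.
data class PictureCaptureConfig(
    val captureCount: Int = 1,          // number acquired at one time = number of deformable frames
    val captureObject: String? = null   // e.g. "person": frames adapted to recognized outlines
)

data class DynamicCaptureConfig(
    val frameRate: Int = 30,            // acquisition frame frequency
    val durationSeconds: Int? = 30,     // acquisition duration, e.g. 30 s or 1 minute
    val totalFrames: Int? = null,       // alternative stop condition: total number of frames
    val captureObject: String? = null
)
```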
Still further, the generated second multimedia information may be stored locally on the client, for example in a photo album or in a user-specified storage area. In addition, through a sharing function provided by the application, the user can share the second multimedia information with a network-side friend or into a third-party application.
The above examples illustrate the case where the deformable frame is generated automatically; in fact, the deformable frame in this embodiment can also be drawn by the user. This embodiment does not limit the specific way in which the user manually draws the deformable frame on the interface.
In addition, in the technical scheme provided by this embodiment, a preview step may be added after the task of acquiring the content of the target area is completed and before the content is saved. That is, after the acquisition task is completed, the acquired content is displayed or played on the interface, and the second multimedia information is saved based on the acquired content only after the user confirms it. If the user is not satisfied and clicks "cancel", the interface with the deformable frame is displayed again so that the user can readjust the deformable frame and acquire the content again.
That is, the information processing method provided in this embodiment may further include the following steps:
responding to a confirmation instruction of a user for the target area, and triggering the step of acquiring the content of the first multimedia information displayed in the target area according to the information acquisition mode corresponding to the target option;
stopping the obtaining step (i.e. step 104 in this embodiment) in response to an interception stop instruction triggered by a user, and displaying preview information generated according to the intercepted content;
if the confirmation operation triggered by the user aiming at the preview information is monitored, triggering the step of displaying various storage format options on the interface;
and if the condition that the user triggers the cancel operation aiming at the preview information is monitored, returning to the interface on which the deformable frame is displayed.
Fig. 8 shows a flowchart of a screen capture method provided in another embodiment of the present application. As shown in fig. 8, the screen capture method includes:
201. responding to the operation triggered by a user when browsing the application program page, and displaying various screen capture mode options on an interface; wherein the page has at least one window.
202. And determining the target option selected by the user in the plurality of screen capture mode options.
203. Highlighting a screen capture area in a target window determined by a user on the interface; and first multimedia information is displayed in the target window.
204. And intercepting the content of the first multimedia information displayed in the screen intercepting region according to the screen intercepting mode corresponding to the target option.
205. Second multimedia information is generated based on the intercepted content.
In 201, the plurality of screen capture modes may include, but are not limited to, picture screenshot and dynamic screenshot. Of course, the picture screenshot mode may be further refined, for example into capture by a designated area and capture by a designated object, as described for the static acquisition mode above.
in 203, the target window determined by the user may be one, two or more, which is not limited in this embodiment.
Further, in this embodiment, the multiple screen capture modes include a picture screenshot mode and a dynamic screenshot mode. Correspondingly, step 204, capturing the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option, includes:
2041. when the screen capture mode corresponding to the target option is the picture screenshot mode, capturing the content of the first multimedia information presented in the target area at a single moment according to the picture screenshot mode;
2042. when the screen capture mode corresponding to the target option is the dynamic screenshot mode, continuously capturing the content of the first multimedia information presented in the target area at different moments according to the capture frequency set in the dynamic screenshot mode.
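As a rough illustration of 2041 and 2042 only, the sketch below contrasts a one-shot capture with continuous capture at a set frequency, stopping on the user's stop instruction or a configured frame limit; `captureRegionBitmap` is the same hypothetical helper assumed earlier, not a real platform API.

```kotlin
import java.util.concurrent.atomic.AtomicBoolean

// Sketch of 2041/2042: one-shot capture versus continuous capture at a set frequency.
fun captureOnce(captureRegionBitmap: () -> ByteArray): ByteArray = captureRegionBitmap()

fun captureContinuously(
    frameRate: Int,                         // capture frequency set in the dynamic screenshot mode
    maxFrames: Int?,                        // optional configured total number of frames
    userStopped: AtomicBoolean,             // set when the user triggers the stop instruction
    captureRegionBitmap: () -> ByteArray    // hypothetical call that reads the screen capture area
): List<ByteArray> {
    val frames = mutableListOf<ByteArray>()
    val intervalMs = 1000L / frameRate
    while (!userStopped.get() && (maxFrames == null || frames.size < maxFrames)) {
        frames.add(captureRegionBitmap())
        Thread.sleep(intervalMs)
    }
    return frames
}
```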
Further, step 203 "highlighting the screenshot area in the target window determined by the user on the interface" in the present embodiment includes:
2031. after the user selects the target option, acquiring at least one window presented on the interface;
2032. determining a candidate window in the at least one window;
2033. displaying a deformable frame matched with the candidate window at the position of the candidate window;
2034. if the operation that the user moves the deformable frame on the interface is monitored, determining a window corresponding to the position where the deformable frame stays after moving as a target window; if the operation that the user moves the deformable frame is not monitored, the candidate window is the target window;
2035. highlighting content within the deformable frame area;
2036. and responding to the operation that the user changes the deformable frame and/or confirms the deformable frame, and determining the area framed by the deformable frame as the screen capture area.
The above "highlight" display may be specifically but not limited to: the content outside the deformable frame area is displayed with a darkened brightness, a grayed display, a blurred display and the like, and the content inside the deformable frame area is normally displayed so as to highlight the content inside the deformable frame area. Or the content in the deformable frame area is highlighted, and the content outside the deformable frame area is normally displayed. Or, the content in the deformable frame region is displayed with a special effect, for example, with a stereoscopic effect, so as to highlight the content in the deformable frame region.
In this embodiment, the step 205 of generating the second multimedia information based on the intercepted content may include:
2051. displaying a plurality of storage format options on the interface;
2052. in response to a selection operation triggered by the user for the plurality of storage format options, determining a target storage format;
2053. saving the second multimedia information generated based on the intercepted content in the target storage format.
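Sub-steps 2051 to 2053 essentially reduce to a dispatch on the format the user confirmed. A minimal sketch follows, assuming the intercepted content has already been encoded into the byte representation of the chosen format; the listed format names are illustrative only.

```python
import pathlib

SUPPORTED_FORMATS = {"png", "jpg", "gif", "mp4"}  # illustrative set of storage format options

def save_capture(encoded: bytes, target_format: str, basename: str) -> pathlib.Path:
    """2051-2053: persist the second multimedia information in the storage
    format the user selected and confirmed."""
    if target_format not in SUPPORTED_FORMATS:
        raise ValueError(f"unsupported storage format: {target_format}")
    path = pathlib.Path(f"{basename}.{target_format}")
    path.write_bytes(encoded)  # assumes `encoded` is already in the chosen format
    return path
```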
Further, the method provided by the embodiment of the present application may further include the following steps:
206. in response to a confirmation instruction of the user for the screen capture area, triggering the step of intercepting the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option;
207. in response to an interception stop instruction triggered by the user, stopping the intercepting step and displaying preview information generated from the intercepted content;
208. if a confirmation operation triggered by the user for the preview information is detected, triggering the step of displaying the plurality of storage format options on the interface;
209. if it is detected that the user triggers a cancel operation for the preview information, returning to the interface on which the screen capture area is highlighted.
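Steps 206 to 209 can be viewed as a small state machine driven by user events (confirming the area, stopping the capture, confirming or cancelling the preview). The sketch below models those transitions; the state and event names are assumptions used only for illustration.

```python
from enum import Enum, auto

class CaptureState(Enum):
    SELECTING = auto()    # screen capture area highlighted, awaiting confirmation
    CAPTURING = auto()    # 206: capture is running
    PREVIEWING = auto()   # 207: preview information is displayed
    SAVING = auto()       # 208: storage format options are displayed

_TRANSITIONS = {
    (CaptureState.SELECTING, "confirm_area"): CaptureState.CAPTURING,     # 206
    (CaptureState.CAPTURING, "stop_capture"): CaptureState.PREVIEWING,    # 207
    (CaptureState.PREVIEWING, "confirm_preview"): CaptureState.SAVING,    # 208
    (CaptureState.PREVIEWING, "cancel_preview"): CaptureState.SELECTING,  # 209
}

def next_state(state: CaptureState, event: str) -> CaptureState:
    """Return the state reached when `event` occurs in `state`; unknown events leave the state unchanged."""
    return _TRANSITIONS.get((state, event), state)
```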
For the specific implementation of the above steps, reference may be made to the relevant content of the foregoing embodiments and the accompanying drawings, which is not repeated here. The main application scenario of this embodiment is screen capture, while the applicable scenarios of the foregoing embodiments may further include: video recording, video capture, copying part of the content of a picture, picture capture, and the like. The term "acquisition" in the foregoing information processing method embodiments may be replaced with "interception".
The following describes, with reference to a specific example, the user operation flow corresponding to the scheme provided in the embodiments of the present application. As shown in fig. 3 and fig. 4, user A watches audio/video (e.g., a TV series or a movie) on a client (e.g., a laptop, a mobile phone, etc.). Finding the currently played content interesting, user A clicks the information acquisition control on the application program page, and a popup is then displayed on the interface showing the plurality of information acquisition mode options. After the user selects the target option, the third interface in the sequence indicated by the arrow from interface b in fig. 3 and fig. 4 is displayed, and a deformable frame is shown on the target window. The user may move the deformable frame and may also change its shape, size, and the like. After the user finishes changing and/or confirming the deformable frame, the area framed by the deformable frame is determined as the target area. The user may then click to start acquisition; once the client detects that the user has confirmed starting to acquire information, the application acquires the content in the target area according to the information acquisition mode corresponding to the target option selected by the user. After the user triggers an instruction to stop acquisition, or acquisition completes according to the information acquisition mode corresponding to the target option, preview information corresponding to the acquired content may be displayed on the interface; after the user confirms the preview information, a plurality of storage mode options may be displayed on the interface for the user to choose from. After the user selects and confirms a target storage mode, the application stores the second multimedia information generated based on the acquired content according to the target storage mode selected by the user.
Fig. 9 is a schematic structural diagram illustrating an information processing apparatus according to an embodiment of the present application. As shown in fig. 9, the information processing apparatus includes: the device comprises a display module 21, a determination module 22, an acquisition module 23 and a generation module 24. The display module 21 is configured to display a plurality of information acquisition mode options on an interface in response to a first operation triggered by a user when browsing an application page; wherein the page has at least one window. The determining module 22 is configured to determine a target option selected by the user among the multiple information obtaining mode options, and further configured to determine a target area in response to a second operation performed by the user on a target window in which the first multimedia information is displayed. The obtaining module 23 is configured to obtain, according to an information obtaining manner corresponding to the target option, content of the first multimedia information displayed in the target area. The generating module 24 is configured to generate second multimedia information based on the obtained content.
Further, the multiple information acquisition modes include: a static acquisition mode and a dynamic acquisition mode. Correspondingly, when acquiring the content of the first multimedia information displayed in the target area according to the information acquisition mode corresponding to the target option, the obtaining module 23 is specifically configured to:
when the information acquisition mode corresponding to the target option is the static acquisition mode, acquiring the content of the first multimedia information presented in the target area at a single moment according to the static acquisition mode;
and when the information acquisition mode corresponding to the target option is a dynamic acquisition mode, continuously acquiring the content of the first multimedia information presented in the target area at different moments according to the parameters set in the dynamic acquisition mode.
Further, the first multimedia information includes display information for display on the target window and audio information associated with the display information. Correspondingly, the information processing apparatus of this embodiment may further include an intercepting module. The intercepting module is configured to:
intercepting an audio segment between a first moment and a second moment in the audio information, so as to generate the second multimedia information from the acquired content and the audio segment; wherein the first moment is the moment at which acquisition of the content presented in the target area starts, and the second moment is the moment at which the acquisition stops; or
intercepting, from the audio information, an audio segment adapted to the acquired content according to the acquired content.
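The first alternative above (keeping the audio between the moment acquisition starts and the moment it stops) amounts to slicing the audio buffer by time. A minimal sketch under the assumption of mono PCM samples follows; the parameter names are illustrative only.

```python
def cut_audio_segment(pcm: bytes, sample_rate: int, sample_width: int,
                      start_s: float, stop_s: float) -> bytes:
    """Keep only the audio between the first moment (acquisition starts) and the
    second moment (acquisition stops), assuming mono PCM with `sample_width`
    bytes per sample."""
    start = int(start_s * sample_rate) * sample_width
    stop = int(stop_s * sample_rate) * sample_width
    return pcm[start:stop]

# Example: cut_audio_segment(raw_audio, 44100, 2, start_s=3.0, stop_s=8.5)
```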
Further, when the display module 21 responds to a first operation triggered by the user when browsing the application page and displays a plurality of information acquisition mode options on the interface, the display module is specifically configured to:
displaying an information acquisition control on the interface; in response to the first operation of the user touching the information acquisition control, displaying, on the interface, a popup window floating above the application program page; wherein the popup window displays the plurality of information acquisition mode options.
Further, the determining module 22, when determining the target area in response to a second operation of the user on a target window displaying the first multimedia information, is specifically configured to:
displaying a deformable frame on the target window; highlighting content in the deformable frame area; and in response to the second operation of changing the deformable frame and/or confirming the deformable frame by the user, determining the area framed by the deformable frame as the target area.
Further, in the information processing apparatus provided in this embodiment, the obtaining module 23 is further configured to acquire at least one window presented on the interface after the user selects the target option. The determining module 22 is further configured to determine a candidate window among the at least one window. The display module 21 is further configured to display a deformable frame adapted to the candidate window at the position of the candidate window, and, if an operation of the user moving the deformable frame on the interface is detected, to move the deformable frame to a target window adjacent to the candidate window in the direction of the move operation and adapt it to that target window.
Further, when the determining module 22 determines a candidate window in the at least one window, it is specifically configured to:
determining, among the at least one window, the window located in the middle of the interface as the candidate window; or
determining the candidate window among the at least one window according to the target option selected by the user, wherein the candidate window displays multimedia information suited to the information acquisition mode corresponding to the target option; or
acquiring associated information of the user, and determining the candidate window among the at least one window according to the associated information.
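The three alternatives above can also be combined into one selection heuristic, for example by falling back from the user's associated information, to content suited to the chosen acquisition mode, to the window nearest the middle of the interface. The sketch below illustrates such a combination; the Window fields and the preferred_apps parameter are assumptions, not terms of this application.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Window:
    center_x: int        # horizontal centre of the window on the interface
    media_kind: str      # e.g. "video" or "image"
    app_id: str          # identifier of the application owning the window

def choose_candidate(windows: List[Window], interface_center_x: int,
                     target_option: str,
                     preferred_apps: Optional[Set[str]] = None) -> Window:
    """Pick the candidate window using, in turn, the user's associated information,
    suitability for the selected acquisition mode, and position on the interface."""
    if preferred_apps:                                    # alternative 3: user's associated information
        for window in windows:
            if window.app_id in preferred_apps:
                return window
    wanted = "video" if target_option == "dynamic" else "image"
    suited = [w for w in windows if w.media_kind == wanted]
    if suited:                                            # alternative 2: suits the chosen acquisition mode
        return suited[0]
    return min(windows, key=lambda w: abs(w.center_x - interface_center_x))  # alternative 1: middle of the interface
```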
Further, when the generating module 24 in this embodiment generates the second multimedia information based on the acquired content, it is specifically configured to:
displaying a plurality of storage format options on the interface; responding to selection operation triggered by the user aiming at the multiple storage format options, and determining a target storage format; and storing the second multimedia information generated based on the acquired content by adopting the target storage format.
Here, it should be noted that: the information processing apparatus provided in the foregoing embodiment may implement the technical solutions described in the foregoing embodiments of the information processing method, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing embodiments of the information processing method, which is not described herein again.
Fig. 10 shows a schematic structural diagram of a screen capture device provided in another embodiment of the present application. As shown in fig. 10, the screen capture device includes: a display module 31, a determining module 32, an intercepting module 33, and a generating module 34. The display module 31 is configured to display a plurality of screen capture mode options on an interface in response to an operation triggered by a user when browsing an application program page; wherein the page has at least one window. The determining module 32 is configured to determine the target option selected by the user from the plurality of screen capture mode options. The display module 31 is further configured to highlight, on the interface, a screen capture area in a target window determined by the user; wherein first multimedia information is displayed in the target window. The intercepting module 33 is configured to intercept, according to the screen capture mode corresponding to the target option, the content of the first multimedia information displayed in the screen capture area. The generating module 34 is configured to generate second multimedia information based on the intercepted content.
Further, the multiple screen capture modes include: a picture screen capture mode and a dynamic screen capture mode. Correspondingly, when intercepting the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option, the intercepting module 33 is specifically configured to:
when the screen capture mode corresponding to the target option is the picture screen capture mode, capturing the content of the first multimedia information presented in the screen capture area at a single moment according to the picture screen capture mode;
when the screen capture mode corresponding to the target option is the dynamic screen capture mode, continuously capturing the content of the first multimedia information presented in the screen capture area at different moments according to the screen capture frequency set in the dynamic screen capture mode.
Further, when the display module 31 highlights the screen capture area in the target window determined by the user on the interface, it is specifically configured to:
after the user selects the target option, acquiring at least one window presented on the interface; determining a candidate window among the at least one window; displaying a deformable frame adapted to the candidate window at the position of the candidate window; if an operation of the user moving the deformable frame on the interface is detected, determining the window at the position where the deformable frame comes to rest as the target window, and otherwise taking the candidate window as the target window; highlighting the content within the deformable frame area; and, in response to an operation of the user changing and/or confirming the deformable frame, determining the area framed by the deformable frame as the screen capture area.
Further, when the generating module 34 generates the second multimedia information based on the intercepted content, it is specifically configured to:
displaying a plurality of storage format options on the interface; in response to a selection operation triggered by the user for the plurality of storage format options, determining a target storage format; and saving the second multimedia information generated based on the intercepted content in the target storage format.
Further, the screen capture device of this embodiment may further include a triggering module and a stopping module. The triggering module is configured to, in response to a confirmation instruction of the user for the screen capture area, trigger the intercepting module to intercept the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option. The stopping module is configured to, in response to an interception stop instruction triggered by the user, stop the intercepting step performed by the intercepting module and trigger the display module to display preview information generated from the intercepted content. The triggering module is further configured to trigger the display module to perform the step of displaying the plurality of storage format options on the interface if a confirmation operation triggered by the user for the preview information is detected, and to trigger the display module to return to the interface on which the screen capture area is highlighted if it is detected that the user triggers a cancel operation for the preview information.
Here, it should be noted that: the screen capture device provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, which is not described herein again.
Fig. 11 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor 42 and a memory 41. The memory 41 is configured to store one or more computer instructions; the processor 42, coupled to the memory 41, is configured to execute the one or more computer instructions (e.g., computer instructions implementing data storage logic) so as to implement the steps in the above information processing method embodiments or the steps in the above screen capture method embodiments.
The memory 41 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Further, as shown in fig. 11, the electronic device further includes: a communication component 43, a power component 45, a display 44, and an audio component 46. Fig. 11 schematically shows only some of the components, which does not mean that the electronic device includes only the components shown in fig. 11.
Yet another embodiment of the present application provides a computer program product (not shown in the accompanying drawings). The computer program product comprises computer programs or instructions which, when executed by a processor, cause the processor to carry out the steps in the above-described method embodiments.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the method steps or functions provided by the foregoing embodiments when executed by a computer.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (14)

1. An information processing method characterized by comprising:
responding to a first operation triggered by a user when browsing an application program page, and displaying a plurality of information acquisition mode options on an interface; wherein the page has at least one window;
determining a target option selected by a user in the multiple information acquisition mode options;
responding to a second operation of the user on a target window displaying the first multimedia information, and determining a target area;
acquiring the content of the first multimedia information displayed in the target area according to an information acquisition mode corresponding to the target option;
and generating second multimedia information based on the acquired content.
2. The method of claim 1, wherein the plurality of information acquisition modes comprises: a static acquisition mode and a dynamic acquisition mode; and
acquiring the content of the first multimedia information displayed in the target area according to the information acquisition mode corresponding to the target option comprises:
when the information acquisition mode corresponding to the target option is a static acquisition mode, acquiring the content of the first multimedia information presented in the target area at a single moment according to the static acquisition mode;
and when the information acquisition mode corresponding to the target option is a dynamic acquisition mode, continuously acquiring the content of the first multimedia information presented in the target area at different moments according to the parameters set in the dynamic acquisition mode.
3. The method of claim 1, wherein the first multimedia information comprises display information for display on the target window and audio information associated with the display information; and
the method further comprises the following steps:
intercepting an audio segment between a first moment and a second moment in the audio information so as to generate the second multimedia information according to the acquired content and the audio segment; wherein the first moment is the moment at which acquisition of the content presented in the target area starts, and the second moment is the moment at which the acquisition stops; or
And intercepting an audio segment which is adapted to the acquired content in the audio information according to the acquired content.
4. The method according to any one of claims 1 to 3, wherein displaying, in response to a first operation triggered by a user while browsing an application program page, a plurality of information acquisition mode options on the interface comprises:
displaying an information acquisition control on the interface;
responding to the first operation that a user touches the information acquisition control, and displaying a popup window floating on the application program page on the interface;
and the popup window displays the multiple information acquisition mode options.
5. The method of any of claims 1 to 3, wherein determining the target area in response to a second operation of the user on a target window displaying the first multimedia information comprises:
displaying a deformable frame on the target window;
highlighting content in the deformable frame area;
and in response to the second operation of changing the deformable frame and/or confirming the deformable frame by the user, determining the area framed by the deformable frame as the target area.
6. The method of claim 5, further comprising:
after the user selects the target option, acquiring at least one window presented on the interface;
determining a candidate window in the at least one window;
displaying a deformable frame matched with the candidate window at the position of the candidate window;
and if the operation that the user moves the deformable frame on the interface is monitored, moving the deformable frame to a target window adjacent to the candidate window according to the direction of the user movement operation, and adapting to the target window.
7. The method of claim 6, wherein determining a candidate window among the at least one window comprises:
determining one window in the middle of the interface in the at least one window as the candidate window; or
Determining a candidate window in the at least one window according to the target option selected by the user, wherein multimedia information suitable for adopting an information acquisition mode corresponding to the target option is displayed in the candidate window; or alternatively
And acquiring the associated information of the user, and determining a candidate window in the at least one window according to the associated information.
8. The method according to any one of claims 1 to 3, wherein generating second multimedia information based on the obtained content comprises:
displaying a plurality of storage format options on the interface;
responding to the selection operation triggered by the user aiming at the multiple storage format options, and determining a target storage format;
and storing the second multimedia information generated based on the acquired content by adopting the target storage format.
9. A screen capture method, comprising:
responding to an operation triggered by a user when browsing an application program page, and displaying a plurality of screen capture mode options on an interface; wherein the page has at least one window;
determining a target option selected by a user in the plurality of screen capture mode options;
highlighting a screen capture area in a target window determined by a user on the interface; wherein, the target window displays first multimedia information;
intercepting the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option;
second multimedia information is generated based on the intercepted content.
10. The method of claim 9, wherein the plurality of screen capture modes comprises: a picture screen capture mode and a dynamic screen capture mode; and
intercepting the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option comprises:
when the screen capture mode corresponding to the target option is the picture screen capture mode, capturing the content of the first multimedia information presented in the screen capture area at a single moment according to the picture screen capture mode;
and when the screen capture mode corresponding to the target option is the dynamic screen capture mode, continuously capturing the content of the first multimedia information presented in the screen capture area at different moments according to the screen capture frequency set in the dynamic screen capture mode.
11. The method of claim 9 or 10, wherein highlighting, on the interface, the screen capture area in the target window determined by the user comprises:
after the target option is selected by the user, acquiring at least one window presented on the interface;
determining a candidate window in the at least one window;
displaying a deformable frame matched with the candidate window at the position of the candidate window;
if the operation that the user moves the deformable frame on the interface is monitored, determining a window corresponding to the position where the deformable frame stays after moving as a target window; if the operation that the user moves the deformable frame is not monitored, the candidate window is the target window;
highlighting content within the deformable frame area;
and responding to the operation that the user changes the deformable frame and/or confirms the deformable frame, and determining the area framed by the deformable frame as the screen capture area.
12. The method according to claim 9 or 10, wherein generating second multimedia information based on the intercepted content comprises:
displaying a plurality of storage format options on the interface;
responding to the selection operation triggered by the user aiming at the multiple storage format options, and determining a target storage format;
and saving the second multimedia information generated based on the intercepted content by adopting the target saving format.
13. The method of claim 12, further comprising:
responding to a confirmation instruction of the user for the screen capture area, and triggering the step of intercepting the content of the first multimedia information displayed in the screen capture area according to the screen capture mode corresponding to the target option;
stopping the intercepting step in response to an interception stop instruction triggered by the user, and displaying preview information generated from the intercepted content;
if a confirmation operation triggered by the user for the preview information is detected, triggering the step of displaying the plurality of storage format options on the interface;
and if it is detected that the user triggers a cancel operation for the preview information, returning to the interface on which the screen capture area is highlighted.
14. An electronic device comprising a processor and a memory, wherein,
the memory to store one or more computer instructions;
the processor, coupled with the memory, configured to execute the one or more computer instructions for implementing the steps of the method of any of claims 1 to 8 or the steps of the method of any of claims 9 to 13.
CN202210043272.4A 2022-01-14 2022-01-14 Information processing method, screen capturing method and electronic equipment Active CN114546229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210043272.4A CN114546229B (en) 2022-01-14 2022-01-14 Information processing method, screen capturing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210043272.4A CN114546229B (en) 2022-01-14 2022-01-14 Information processing method, screen capturing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN114546229A true CN114546229A (en) 2022-05-27
CN114546229B CN114546229B (en) 2023-09-22

Family

ID=81671095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210043272.4A Active CN114546229B (en) 2022-01-14 2022-01-14 Information processing method, screen capturing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN114546229B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070294630A1 (en) * 2006-06-15 2007-12-20 Microsoft Corporation Snipping tool
CN102830963A (en) * 2012-06-28 2012-12-19 北京奇虎科技有限公司 Method and system for matching screenshot
CN104133683A (en) * 2014-07-31 2014-11-05 上海二三四五网络科技股份有限公司 Screenshot obtaining method and device
CN104238913A (en) * 2014-09-02 2014-12-24 北京金山安全软件有限公司 Screenshot method and device and electronic equipment
CN104281356A (en) * 2013-07-01 2015-01-14 腾讯科技(深圳)有限公司 Screen sharing method and device
CN104915202A (en) * 2015-06-05 2015-09-16 广东欧珀移动通信有限公司 Screenshot method and device
US20160378297A1 (en) * 2015-06-25 2016-12-29 Medcpu Ltd. Smart Display Data Capturing Platform For Record Systems
CN106468999A (en) * 2016-09-27 2017-03-01 上海斐讯数据通信技术有限公司 A kind of screenshotss method and system
CN106970754A (en) * 2017-03-28 2017-07-21 北京小米移动软件有限公司 The method and device of screenshotss processing
CN107731249A (en) * 2017-09-15 2018-02-23 维沃移动通信有限公司 A kind of audio file manufacture method and mobile terminal
CN110780795A (en) * 2019-10-30 2020-02-11 维沃移动通信有限公司 Screen capturing method and electronic equipment
CN111010610A (en) * 2019-12-18 2020-04-14 维沃移动通信有限公司 Video screenshot method and electronic equipment
CN112114733A (en) * 2020-09-23 2020-12-22 青岛海信移动通信技术股份有限公司 Screen capturing and recording method, mobile terminal and computer storage medium
US20210014431A1 (en) * 2018-10-19 2021-01-14 Beijing Microlive Vision Technology Co., Ltd Method and apparatus for capturing video, electronic device and computer-readable storage medium
CN112379815A (en) * 2020-12-07 2021-02-19 腾讯科技(深圳)有限公司 Image capturing method and device, storage medium and electronic equipment
WO2021082772A1 (en) * 2019-10-28 2021-05-06 维沃移动通信有限公司 Screenshot method and electronic device
CN113552977A (en) * 2020-04-23 2021-10-26 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and computer storage medium
WO2021233291A1 (en) * 2020-05-20 2021-11-25 维沃移动通信有限公司 Screen capture method and apparatus, and electronic device

Also Published As

Publication number Publication date
CN114546229B (en) 2023-09-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant