CN113485621A - Image capturing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113485621A
Authority
CN
China
Prior art keywords
input
images
control
image
text information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110812054.8A
Other languages
Chinese (zh)
Inventor
陈伟星 (Chen Weixing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110812054.8A priority Critical patent/CN113485621A/en
Publication of CN113485621A publication Critical patent/CN113485621A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Abstract

The application discloses an image capturing method and device, electronic equipment and a storage medium, belonging to the technical field of interface interaction. The image capturing method comprises the following steps: receiving N control inputs of a user to a display interface; generating, in response to the N control inputs, N pieces of control text information corresponding to the N control inputs, where the control text information comprises text information corresponding to at least one of an input mode, an input position and a triggered component; taking screenshots of M interfaces corresponding to the N control inputs to obtain M images; and displaying the N pieces of control text information and the M images in an associated manner, where N and M are positive integers.

Description

Image capturing method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of interface interaction, and particularly relates to an image capturing method and device, electronic equipment and a storage medium.
Background
With the popularization of intelligent electronic devices, people increasingly use the screen content capture functions of these devices to capture images or videos that record the interface content shown on the screen.
In the prior art, screen content is mainly captured by a screenshot process that the user manually triggers on the current display interface through a preset operation. This screenshot mode mainly records a static display interface and cannot intuitively reflect a dynamic operation process of the user, so a user who needs to record a dynamic operation process with the screen content capture function can only do so by recording the screen.
However, the content captured by conventional screen recording is highly aggregated: for a long video, a user who needs to view a certain piece of key content must search for it manually within the video, so viewing the key content is cumbersome and the key content cannot be obtained quickly from the recording result.
Disclosure of Invention
The embodiments of the present application aim to provide an image capturing method and apparatus, an electronic device and a storage medium that solve two problems of the conventional screenshot mode when recording a dynamic operation process of a user: the process cannot be reflected intuitively, and key content cannot be obtained quickly from the capture result.
In a first aspect, an embodiment of the present application provides an image capturing method, where the method includes:
receiving N control inputs of a user to a display interface;
generating, in response to the N control inputs, N pieces of control text information corresponding to the N control inputs, where the control text information comprises text information corresponding to at least one of an input mode, an input position and a triggered component;
taking screenshots of M interfaces corresponding to the N control inputs to obtain M images;
displaying the N pieces of control text information and the M images in an associated manner;
wherein N, M is a positive integer.
In a second aspect, an embodiment of the present application provides an image capture apparatus, including:
the input receiving module is used for receiving N control inputs of a user to the display interface;
the information generating module is used for generating, in response to the N control inputs, N pieces of control text information corresponding to the N control inputs, where the control text information comprises at least one of input mode, input position and triggered component information;
the interface screenshot module is used for taking screenshots of the M interfaces corresponding to the N control inputs to obtain M images;
the associated display module is used for displaying the N pieces of control text information and the M images in an associated manner;
wherein N, M is a positive integer.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the present application, when the dynamic operation process of a user is recorded, a plain-text record is produced in addition to the automatic screenshots: N pieces of control text information corresponding to the user's N control inputs are generated, and the N pieces of control text information and the M images are displayed in an associated manner. Because the control text information records the user's inputs in plain-text form, it can express the logic between the images, so the dynamic operation process is reflected intuitively when the control text information and the corresponding images are displayed together. In addition, the control text information describes the key content of the user's operations and is displayed alongside the corresponding images, so it can serve as a text index that helps the user quickly locate the image containing key content when viewing the capture result, and thereby quickly obtain the key content.
Drawings
FIG. 1 is one of the flow diagrams of an image capture method shown in accordance with an exemplary embodiment;
FIG. 2a is one of the schematic diagrams of an image capture scene shown in accordance with an exemplary embodiment;
FIG. 2b is a second schematic diagram of an image capture scene shown in accordance with an exemplary embodiment;
FIG. 2c is a third schematic diagram of an image capture scene shown in accordance with an exemplary embodiment;
FIG. 3 is a second flowchart illustrating an image capture method according to an exemplary embodiment;
FIG. 4 is one of the schematic diagrams of an image display scene shown in accordance with an exemplary embodiment;
FIG. 5 is a second schematic diagram of an image display scene shown in accordance with an exemplary embodiment;
FIG. 6a is a third flowchart illustrating an image capture method according to an exemplary embodiment;
FIG. 6b is a schematic diagram illustrating an image annotation display scenario in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating the structure of an image capture device according to an exemplary embodiment;
FIG. 8 is a block diagram illustrating the structure of an electronic device in accordance with an exemplary embodiment;
FIG. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so termed may be interchanged where appropriate, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Moreover, "first", "second" and the like do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The image capturing method, the image capturing apparatus, the electronic device, and the storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The image capturing method provided by the application can be applied to scenes in which the dynamic change of the screen interface content during a user's dynamic operation process is to be recorded. The execution subject of the method may be an image capturing apparatus, or a control module in the image capturing apparatus for executing the image capturing method. In the embodiments of the present application, the method is described by taking an image capturing apparatus executing the image capturing method as an example.
FIG. 1 is a flowchart illustrating an image capturing method according to an exemplary embodiment.
As shown in fig. 1, the image capturing method may include the steps of:
step 110, receiving N control inputs of the user to the display interface.
Here, the display interface may be an interface displayed on the screen of the electronic device used by the user. The N control inputs may be one or more inputs performed by the user on the display interface, and may be ordered control inputs performed on the same content displayed on the interface or on different displayed contents. A control input may be, for example, a click input or a slide input, which is not limited here. N is a positive integer.
Illustratively, in response to each control input, the display interface on the screen may be controlled to change correspondingly. Changes to the display interface include, but are not limited to, interface switches and layout changes. For example, when the display interface is the system desktop, clicking the icon of an application program A on the desktop can control the currently displayed desktop to jump to the interface of application program A and display it. For another example, through a slide input the user drags a control displayed in the upper part of the display interface to the lower part for display.
In a specific example, a continuous operation recording mode may be entered through a preset operation, and in this mode at least one control input of the user to the display interface may be received, so that the user's continuous operations can be recorded by screenshots and the like. As a specific way of entering this mode, the user may, for example, slide down with three fingers on the current display interface; after receiving the three-finger slide-down input, the device may trigger the default screenshot function and pop up a prompt, and the continuous operation recording mode is entered after the user clicks the start button for continuous operation recording in the prompt.
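The mode entry and exit described above can be sketched as a small state holder. This is an illustrative reconstruction only, not code from the patent; the class and attribute names, and the gesture check, are assumptions for illustration.

```python
class ContinuousRecorder:
    """Minimal state holder for the continuous operation recording mode."""

    def __init__(self):
        self.active = False
        self.control_texts = []   # the N pieces of control text information
        self.images = []          # the M captured images

    def on_gesture(self, fingers, direction, confirmed):
        """Enter recording mode on a three-finger slide-down that the
        user then confirms via the prompt's start button."""
        if fingers == 3 and direction == "down" and confirmed:
            self.active = True
        return self.active

    def stop(self):
        """Exit the mode and return the associated record."""
        self.active = False
        return {"texts": self.control_texts, "images": self.images}
```

A caller would feed gesture events into `on_gesture` and collect the record from `stop` when the user ends continuous screenshots.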
Step 120, in response to the N control inputs, generating N pieces of control text information corresponding to the N control inputs.
Here, the control text information may include text information corresponding to at least one of an input mode, an input position and a triggered component. The input mode may include click or slide input; the input position may include a clicked area, a sliding direction and the like; and the triggered component is the component targeted by the user's input, which may be a component displayed on the display interface. The control text information thus describes the control input in text form.
Illustratively, a set of operation semantics can be predefined to describe the user's control inputs in text form: recording whether the control input is a click input or a slide input; identifying the screen position or direction of the input, such as "middle", "upper right", "up" or "right"; and recording the characteristics of the component targeted by the input, such as "button - find" or "text - applet".
In a specific example, for the system desktop 10 shown in fig. 2a, the user clicks the a icon 11 corresponding to application a displayed in the middle of the system desktop 10, which jumps to the main interface 12 of application a shown in fig. 2b; the user then clicks the b button 13 displayed in the lower right corner of the main interface 12, so that the displayed interface jumps from the main interface 12 to the slave interface 14 shown in fig. 2c; finally, the user clicks the c text 15 displayed in the upper part of the slave interface 14, which triggers a jump to another interface corresponding to the c text 15. In the continuous operation recording mode, the following control text information can be generated from these three control inputs:
click - middle of the screen - icon a;
click - lower right corner of the screen - button b;
click - upper part of the screen - text c.
In this way, control text information corresponding to each control input can be generated.
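The text-generation step above can be sketched as follows. This is an illustrative reconstruction under the assumption that each input is reduced to the three recorded attributes; the function name and text format are not from the patent.

```python
def control_text(input_mode, position, component_type, component_name):
    """Compose one piece of control text information from the recorded
    attributes: input mode, input position, and triggered component."""
    return f"{input_mode} - {position} - {component_type} {component_name}"

# The three control inputs from the desktop example would yield:
texts = [
    control_text("click", "middle of the screen", "icon", "a"),
    control_text("click", "lower right corner of the screen", "button", "b"),
    control_text("click", "upper part of the screen", "text", "c"),
]
```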
Step 130, taking screenshots of the M interfaces corresponding to the N control inputs to obtain M images.
Here, one control input may trigger an interface jump, and multiple control inputs may also be performed on the same interface, so in one case one control input may correspond to multiple interfaces, and in another case one interface may correspond to multiple control inputs; neither is limited here. M is a positive integer.
Illustratively, a screenshot may be taken for each control input of the user, or only when a control input causes the interface to change. Interface changes include, but are not limited to, interface jumps and interface layout changes.
It should be noted that steps 120 and 130 may be executed in either order or simultaneously; this is not limited here.
In an alternative embodiment, step 130 may specifically include:
determining whether the display interface is switched at least once while the N control inputs are received;
in a case that the display interface is switched at least once, taking screenshots of the interface displayed before the switch and the interface displayed after the switch, respectively, to obtain the M images.
Here, the screenshot process may be triggered each time the interface is switched, capturing the interface displayed before the switch and the interface displayed after it, respectively.
In a specific example, when the user's click or slide causes the interface to jump or scroll, the background monitors the change, automatically captures the cached content of the previous interface, and captures the interface content displayed after the jump or change, thereby obtaining two images corresponding to that operation.
In this way, the key images recording the change of the interface content are obtained by taking screenshots when a control input triggers an interface switch, which reduces redundant screenshots and helps the user quickly obtain the key content of the interface change process.
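The switch-triggered capture logic can be sketched as below; this is an illustrative model, not the patent's implementation, in which each input is a (before, after) pair of interfaces and an interface is captured only when a switch occurs, without duplicating an interface already captured as the previous "after" image.

```python
def capture_on_switch(inputs):
    """For each control input, take screenshots only when the interface
    switches, capturing the interfaces before and after the switch while
    skipping an interface already captured as the previous 'after'."""
    images = []
    for before, after in inputs:
        if before != after:                    # interface switched
            if not images or images[-1] != before:
                images.append(before)          # interface before the switch
            images.append(after)               # interface after the switch
    return images
```

With the desktop example (desktop to main interface to slave interface to another interface), N = 3 inputs yield M = 4 images rather than 6, since shared interfaces are captured once.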
In addition, after finishing the capture of screen content, the user can click a button for ending continuous screenshots, or perform a preset operation gesture, to exit the continuous screenshot recording mode.
Step 140, displaying the N pieces of control text information and the M images in an associated manner.
In the embodiments of the present application, after the N pieces of control text information and the M images corresponding to the N control inputs are acquired, the related information corresponding to the N control inputs can be stored and made available for the user to view.
For example, when the related information is stored, the N pieces of control text information and the image sequence formed by the M images may be stored in association, so that when viewing, the user can find the corresponding images from the control text information. In addition, when the user views a captured image, the corresponding control text information can be displayed at the same time, and the user can also view the images corresponding to a piece of control text information by clicking it.
In addition, after the screenshot process is completed, the user can send the M captured images and the N pieces of control text information to other users. Specifically, when content from a continuous screenshot sequence is sent and the user selects one image to send, the user can be prompted that associated screenshots exist and can choose to send them together, in which case the image sequence containing that image and the corresponding control text information are packaged into an image set for sending, so that the N pieces of control text information and the M images are displayed in an associated manner on the other device.
Of course, the sender may also choose to send only the N pieces of control text information to reduce the volume of the sent content, which is not limited here.
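The packaging choice when sending can be sketched as follows; this is a minimal illustrative model (the function and field names are assumptions), showing the bundle-versus-single-image decision described above.

```python
def package_for_sending(selected, sequence, texts, send_associated):
    """When the user sends one image from a continuous-screenshot
    sequence, optionally bundle the whole image sequence plus the
    pieces of control text information into one image set."""
    if send_associated and selected in sequence:
        return {"images": list(sequence), "texts": list(texts)}
    return {"images": [selected], "texts": []}
```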
In this way, when recording the dynamic operation process of a user, the embodiments of the present application provide a plain-text record in addition to the automatic screenshots: N pieces of control text information corresponding to the user's N control inputs are generated and displayed in association with the M images. Because the control text information records the user's inputs in plain-text form, it can express the logic between the images, so the dynamic operation process is reflected intuitively. Moreover, the control text information describes the key content of the user's operations and serves as a text index that helps the user quickly locate the image containing key content in the capture result.
Based on this, in a possible embodiment, as shown in fig. 3, step 140 may specifically include steps 1401 to 1404, as follows:
Step 1401, displaying the N pieces of control text information.
Here, the N pieces of control text information may be displayed as hyperlinked text, so that when the user clicks one of them, the hyperlink jumps to the one or more images associated with that piece of control text information and displays them.
In a specific example, as shown in fig. 4, when three pieces of control text information and their corresponding image sequence are displayed in association, the hyperlinked text of the three pieces may be displayed above the image display area 41, with the text content being the control text information itself.
Step 1402, receiving a first input of the user to a first piece of control text information among the N pieces of control text information.
The first control text information corresponds to a first control input among the N control inputs. The first input may be a selection input on the first control text information, for example clicking it.
Step 1403, in response to the first input, determining, from the M images, T first images corresponding to the first control text information.
Wherein T is a positive integer and is less than or equal to M.
In a specific example, as shown in FIG. 4, the user may click the control text information 42 reading "click - lower right corner of the screen - button b", so that the one or more images associated with the control text information 42 can be determined from the corresponding image sequence for display.
In an alternative implementation manner, after step 130, the image capturing method provided in the embodiment of the present application may further include:
determining index values for the M images according to their capture order;
storing the index values in association with the N pieces of control text information.
step 1403 specifically includes:
acquiring the T target index values stored in association with the first control text information;
determining, from the M images, the T first images corresponding to the T target index values.
Here, index values may be created for the M images and used to establish the association between images and control text information, so that when the one or more first images corresponding to the first control text information are to be displayed, they can be determined from the one or more index values associated with it.
In this way, creating an index value for each image makes it convenient to establish the association between control text information and images and to look up the corresponding images from the control text information, realizing automatic image positioning and helping the user quickly obtain the images of key content from the control text information.
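The index association can be sketched with a plain mapping; this is an illustrative reconstruction (names and data shapes assumed), where index values are assigned in capture order and stored against each piece of control text information, and step 1403 resolves them back to images.

```python
def build_index(control_texts, images_per_input):
    """Assign index values in capture order and store them in
    association with each piece of control text information."""
    index, next_idx = {}, 0
    for text, count in zip(control_texts, images_per_input):
        index[text] = list(range(next_idx, next_idx + count))
        next_idx += count
    return index

def first_images(index, images, selected_text):
    """Resolve the T target index values stored with the selected
    control text information into the T first images (step 1403)."""
    return [images[i] for i in index[selected_text]]
```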
Step 1404, displaying the T first images.
In the embodiments of the present application, if T is greater than or equal to 2, the T first images may be arranged in their capture order and then displayed. The display manner includes, but is not limited to, automatically displaying them in sequence, displaying them in a loop, or letting the user trigger image switching; this is not limited here.
For the specific display of the T first images, the embodiments of the present application provide two modes: a split mode, in which only one image is displayed at a time, and an aggregation mode, in which the M images are stitched together and the view slides to the region corresponding to the T first images for display.
For the split mode, in an optional implementation, when the first control input is an input that switches the display interface from a first interface to a second interface, step 1404 may specifically include:
displaying any one of the T first images;
in a case that a first target image among the T first images is displayed, receiving a second input of the user to a target area in the first target image, where the first target image is the image captured from the first interface and the target area is the area in the first target image corresponding to the input position of the first control input;
in a case that the input mode of the second input matches the input mode of the first control input, switching, in response to the second input, to display a second target image, where the second target image is the image captured from the second interface.
In the split mode, when the user views any one of the M images, the user can be prompted that it belongs to an aggregated sequence and its index value can be annotated. For a control input that caused an interface switch, when the user performs an input with the same input mode on the same area of the image of that interface, the display automatically jumps to the image captured after the interface switch.
In a specific example, as shown in fig. 4, after the user clicks the control text information 42 reading "click - lower right corner of the screen - button b", the first of the one or more images corresponding to the control text information 42 may be displayed in the image display area 41. If the control input corresponding to the control text information 42 switched the display interface from the first interface to the second interface, then while the currently displayed image is the one captured from the first interface, the user clicking the area 43 corresponding to the b button in the lower right corner of the image triggers switching to the image captured from the second interface for display.
In this way, displaying the T images in the split mode provides an intuitive way of displaying each piece of control text information in association with its corresponding images, helps the user quickly locate the image containing the key content, and improves the user experience.
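The split-mode replay check can be sketched as a single comparison; this is an illustrative model only (the dictionary keys and function name are assumptions), matching the viewer's input against the recorded first control input before jumping.

```python
def split_mode_next(current_image, second_target_image, viewer_input, recorded_input):
    """Split-mode navigation: if the viewer's input matches the recorded
    first control input (same input mode, same target area on the first
    target image), switch to the second target image, i.e. the screenshot
    taken after the interface switch; otherwise keep the current image."""
    if (viewer_input["mode"] == recorded_input["mode"]
            and viewer_input["area"] == recorded_input["area"]):
        return second_target_image
    return current_image
```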
In an optional implementation, in a case that the first control input is an input that switches the display interface from the first interface to the second interface, after step 140 the image capturing method provided by the embodiments of the present application may further include:
stitching the M images according to their screenshot order to obtain a stitched image;
step 1404 may specifically include:
displaying the content of the region of the stitched image corresponding to any one of the T first images;
in a case that the content of a first region corresponding to the first target image is displayed, receiving a third input of the user to a target sub-region in the first region, where the target sub-region is the sub-region in the first region corresponding to the input position of the first control input;
in a case that the input mode of the third input matches the input mode of the first control input, displaying, in response to the third input, the content of a second region corresponding to the second target image.
For example, in the aggregation mode, the M images may be stitched into one image, that is, a stitched image, according to the index value corresponding to each image in the M images. The splicing manner includes, but is not limited to, transverse splicing or longitudinal splicing, and is not limited herein. For the manipulation input causing the interface switching, after the user inputs the same sub-area in the same interface content area in the same input mode, the user can automatically slide to the area corresponding to the image intercepted after the interface switching for displaying, that is, when receiving a third input of the user to a target sub-area in the first area, the user can slide from the current first area to the second area for displaying the content of the second area.
In a specific example, as shown in fig. 5, after the user clicks the manipulation text information 42 with the content "click-button-b" at the lower right corner of the screen ", a region corresponding to a first ordered image in one or more images corresponding to the manipulation text information 42 may be displayed in the image display region 41, and if the manipulation input corresponding to the manipulation text information 42 is an input for controlling the display interface to be switched from the first interface to the second interface, in a case where the currently displayed region is an image intercepted corresponding to the first interface, that is, a first target image, and belongs to the region 50, if the user clicks the sub-region 51 corresponding to the button b at the lower right corner in the region 50, the user may trigger sliding to the intercepted image corresponding to the second interface, that is, the region to which the second target image belongs, so as to display the intercepted image, that is the second target image.
Therefore, by displaying the T images in the aggregation mode, an intuitive display manner can be provided for displaying each piece of manipulation text information in association with its corresponding images, which helps the user quickly locate the image where the key content is located and improves the user experience.
Therefore, in the embodiment of the application, by selecting, from the displayed N pieces of control text information, the control text information corresponding to the required control input, a user can quickly determine and display the one or more images corresponding to that control text information, so that the user can quickly acquire the required images from the interception result.
In addition, in a possible embodiment, in a case that a second image of the M images is displayed, and the screenshot interface corresponding to the second image is an interface displayed when P second manipulation inputs are received, as shown in fig. 6a, step 140 may further include: step 1405-1406, as follows:
step 1405, generating P pieces of annotation information respectively corresponding to the P second manipulation inputs according to the P pieces of second manipulation text information respectively corresponding to the P second manipulation inputs.
Here, after the M images are obtained through screenshot, annotation information corresponding to the control input may further be added to an image; specifically, the corresponding annotation information may be generated according to the control text information corresponding to the image. The annotation information may include at least one of sorting information, input trajectory information, and input mode information, and the sorting information may be determined according to the receiving order corresponding to the P second control inputs. In a case where the input manner is a slide input, the input trajectory information may be the slide trajectory of the user's manipulation input; in a case where the input manner is a click input, the input trajectory information may be the location area clicked during the manipulation input.
In step 1406, P pieces of annotation information are displayed in the second image.
Wherein P is a positive integer and is more than or equal to 2.
Here, the display mode of the annotation information includes, but is not limited to, display in a region-specific manner, display in a character form, and the like, and is not limited herein.
In a specific example, as shown in fig. 6b, when the content of the image 61 is the content of the interface displayed when two manipulation inputs are received, two pieces of annotation information respectively corresponding to the two manipulation inputs may be displayed in the image 61. For example, for a first received slide input for dragging an icon, the annotation information 62 may be generated according to the corresponding manipulation text information "drag-screen middle area-icon-d" and displayed in the image 61; and for a later received click input, the annotation information 63 may be generated according to the corresponding manipulation text information "click-button-jump below the screen" and displayed in the image 61.
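The annotation generation of steps 1405-1406 can be sketched as follows. This is an illustrative sketch, assuming the manipulation text information is a hyphen-delimited string whose first field is the input mode (as in the examples above); all function and field names are hypothetical.

```python
def build_annotations(manipulation_texts):
    """Turn P pieces of manipulation text information into P annotation
    records. Sorting information comes from the receiving order; the
    input mode is taken from the first hyphen-separated field."""
    annotations = []
    for order, text in enumerate(manipulation_texts, start=1):
        mode, _, rest = text.partition("-")
        annotations.append({
            "order": order,   # sorting information (receiving order)
            "mode": mode,     # input mode information
            "detail": rest,   # input position / trigger component
        })
    return annotations

texts = ["drag-screen middle area-icon-d", "click-button-jump below the screen"]
labels = build_annotations(texts)
```

Each record in `labels` corresponds to one piece of annotation information to be drawn into the second image, e.g. as a numbered badge at the recorded position.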
It should be noted that, when the N manipulation inputs and the M images are stored in association with each other, the annotation information may be stored together with the N manipulation inputs, and when the M images are transmitted to a device used by another user, the annotation information may be transmitted together, which is not limited herein.
Therefore, by generating and displaying the annotation information of the control input corresponding to the image in the image, the dynamic operation process of the user can be displayed more intuitively, particularly the screenshot display of the interface with multiple control inputs, and the control input process of the user can be displayed dynamically in a static image mode.
Based on the same inventive concept, the application also provides an image capturing device. The following describes in detail an image capture apparatus provided in an embodiment of the present application with reference to fig. 7.
Fig. 7 is a block diagram illustrating a configuration of an image capture apparatus according to an exemplary embodiment.
As shown in fig. 7, the image intercepting apparatus 700 may include:
an input receiving module 701, configured to receive N control inputs to a display interface from a user;
an information generating module 702, configured to generate, in response to the N manipulation inputs, N pieces of manipulation text information corresponding to the N manipulation inputs; the control text information comprises text information corresponding to at least one of an input mode, an input position and a trigger component;
an interface screenshot module 703, configured to screenshot M interfaces corresponding to the N control inputs to obtain M images;
an association display module 704, configured to associate and display the N pieces of manipulation text information and the M pieces of images;
wherein N, M is a positive integer.
The following describes the image capture device 700 in detail, specifically as follows:
in one embodiment, the interface screenshot module 703 may specifically include:
the judgment submodule is used for determining whether the display interface undergoes at least one interface switching in the process of receiving the N control inputs;
and the screenshot submodule is used for respectively screenshot the interface displayed before switching and the interface displayed after switching under the condition that the display interface is switched for at least one time to obtain M images.
In one embodiment, the association display module 704 may specifically include:
the first display submodule is used for displaying the N pieces of control text information;
the receiving submodule is used for receiving first input of a user to first control text information in the N control text information; the first control text information is control text information corresponding to a first control input in the N control inputs;
a determination sub-module for determining, in response to a first input, T first images corresponding to the first manipulation text information from among the M images;
the second display submodule is used for displaying the T first images;
wherein T is a positive integer and is less than or equal to M.
In one embodiment, when the first control input is an input for controlling the display interface to be switched from the first interface to the second interface, the second display sub-module may specifically include:
a first display unit for displaying any one of the T first images;
a first receiving unit, configured to receive a second input of a user to a target area in a first target image in a case where the first target image in the T first images is displayed; the first target image is an image intercepted corresponding to the first interface, and the target area is an area corresponding to the input position of the first control input in the first target image;
and the image switching unit is used for responding to the second input and switching to display a second target image under the condition that the input mode of the second input is matched with the input mode of the first control input, wherein the second target image is an image which is intercepted corresponding to the second interface.
In one embodiment, in a case that the first manipulation input is an input for controlling the display interface to be switched from the first interface to the second interface, the image capturing apparatus 700 may further include:
the image splicing module is used for splicing the M images according to the screenshot sequence of the M images to obtain spliced images after the M interfaces corresponding to the N control inputs are subjected to screenshot to obtain the M images;
the second display sub-module may specifically include:
the second display unit is used for displaying the content of the area corresponding to any one of the T first images in the spliced image;
a second receiving unit, configured to receive a third input by the user to a target sub-area in the first area in a case where the content of the first area corresponding to the first target image is displayed; the target sub-region is a sub-region corresponding to the input position of the first control input in the first region;
and the area sliding unit is used for responding to the third input and displaying the content of the second area corresponding to the second target image under the condition that the input mode of the third input is matched with the input mode of the first control input.
In one embodiment, when a second image of the M images is displayed and a screenshot interface corresponding to the second image is an interface displayed when P second control inputs are received, the associating and displaying module 704 may further include:
the generation submodule is used for generating P pieces of annotation information respectively corresponding to the P second control inputs according to the P pieces of second control text information respectively corresponding to the P second control inputs; the annotation information comprises at least one item of sequencing information, input track information and input mode information, and the sequencing information is determined according to a receiving sequence corresponding to the P second control inputs;
the display submodule is used for displaying the P pieces of annotation information in the second image;
wherein P is a positive integer and is more than or equal to 2.
In one embodiment, the image capturing apparatus 700 may further include:
the index value determining module is used for determining the index values corresponding to the M images according to the intercepting sequence of the M images after the M interfaces corresponding to the N control inputs are subjected to screenshot to obtain the M images;
the associated storage module is used for storing the index value and the N pieces of control text information in an associated manner;
the determining sub-module may specifically include:
the index value acquisition unit is used for acquiring T target index values stored in association with the first control text information;
an image determining unit for determining T first images corresponding to the T target index values from the M images.
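The associated storage of index values and the subsequent lookup performed by these submodules can be sketched as follows (illustrative only; the class and method names are hypothetical and the "images" are stand-in strings):

```python
class CaptureIndex:
    """Associates each piece of control text information with the index
    values of the images captured for it, and resolves those index
    values back to images."""
    def __init__(self):
        self._by_text = {}   # control text information -> list of index values
        self._images = {}    # index value -> image

    def store(self, text, index, image):
        """Store an image under its index value, associated with `text`."""
        self._images[index] = image
        self._by_text.setdefault(text, []).append(index)

    def images_for(self, text):
        """Return the T first images for the given first control text
        information via its T target index values."""
        return [self._images[i] for i in self._by_text.get(text, [])]

idx = CaptureIndex()
idx.store("click-button-b", 1, "img_interface_1")
idx.store("click-button-b", 2, "img_interface_2")
idx.store("slide-left", 3, "img_interface_3")
```

Selecting the first control text information then reduces to one dictionary lookup followed by resolving each target index value to its image.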
Therefore, when recording the dynamic operation process of the user, in addition to performing automatic screenshot, the embodiment of the application provides a plain text recording mode, that is, N pieces of control text information corresponding to N operation inputs of the user are generated, and the N pieces of control text information and M pieces of images are displayed in a correlated manner. In this way, because the control text information records the operation input of the user in a plain text form, the logic between the images can be embodied, and the dynamic operation process of the user can be intuitively embodied when the control text information and the corresponding images are displayed in a correlated manner. In addition, the control text information can describe the key content operated by the user and is matched with the corresponding image for associated display, so that the control text information can be used as a character index to help the user to quickly find the image of the key content when the user views the interception result, and the user can quickly acquire the key content in the interception result.
The image capture device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), an automated teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image capture device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image capture device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 6b, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 800 is further provided in this embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and executable on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the foregoing image capturing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components, which is not described in detail here.
The user input unit 907 is configured to receive N control inputs to the display interface from a user;
a processor 910, configured to: generate, in response to the N manipulation inputs, N pieces of manipulation text information corresponding to the N manipulation inputs, the control text information comprising text information corresponding to at least one of an input mode, an input position and a trigger component; and perform screenshot on M interfaces corresponding to the N control inputs to obtain M images;
a display unit 906 configured to display the N pieces of manipulation text information and the M pieces of images in association;
wherein N, M is a positive integer.
Therefore, when recording the dynamic operation process of the user, in addition to performing automatic screenshot, the embodiment of the application provides a plain text recording mode, that is, N pieces of control text information corresponding to N operation inputs of the user are generated, and the N pieces of control text information and M pieces of images are displayed in a correlated manner. In this way, because the control text information records the operation input of the user in a plain text form, the logic between the images can be embodied, and the dynamic operation process of the user can be intuitively embodied when the control text information and the corresponding images are displayed in a correlated manner. In addition, the control text information can describe the key content operated by the user and is matched with the corresponding image for associated display, so that the control text information can be used as a character index to help the user to quickly find the image of the key content when the user views the interception result, and the user can quickly acquire the key content in the interception result.
Optionally, the processor 910 is configured to determine whether the display interface performs at least one interface switching in the process of receiving the N manipulation inputs; and under the condition that the display interface is switched for at least one time, respectively carrying out screenshot on the interface displayed before switching and the interface displayed after switching to obtain the M images.
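A sketch of this switch-detection-and-capture behavior follows; it is illustrative only, with `causes_switch` and `screenshot` standing in for platform hooks that the source does not specify.

```python
def capture_on_inputs(inputs, current_interface, causes_switch, screenshot):
    """For N control inputs, capture the interface displayed before a
    switching and the interface displayed after it."""
    # Interface displayed before any switching.
    images = [screenshot(current_interface)]
    for inp in inputs:
        nxt = causes_switch(current_interface, inp)
        if nxt is not None:                      # an interface switching occurred
            current_interface = nxt
            images.append(screenshot(current_interface))
    return images

# Simulated transitions: only "click-button-b" switches the interface.
transitions = {("first interface", "click-button-b"): "second interface"}
shots = capture_on_inputs(
    ["click-button-b", "slide-left"],            # N = 2 control inputs
    "first interface",
    causes_switch=lambda ui, inp: transitions.get((ui, inp)),
    screenshot=lambda ui: f"screenshot of {ui}",
)
```

The second input causes no switching, so only M = 2 images are obtained: one before and one after the single interface switching.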
Optionally, a display unit 906, configured to display the N pieces of manipulated text information;
a user input unit 907, configured to receive a first input of a first manipulation text message of the N manipulation text messages from a user; the first control text information is control text information corresponding to a first control input in the N control inputs;
a processor 910, configured to determine, in response to the first input, T first images corresponding to the first manipulated text information from the M images;
a display unit 906 further configured to display the T first images;
wherein T is a positive integer and is less than or equal to M.
Optionally, a display unit 906, configured to display any one of the T first images;
a user input unit 907 for receiving a second input of a user to a target area in a first target image in a case where the first target image in the T first images is displayed; the first target image is an image intercepted corresponding to the first interface, and the target area is an area corresponding to an input position of the first control input in the first target image;
the display unit 906 is further configured to switch to display a second target image in response to the second input when the input mode of the second input matches the input mode of the first manipulation input, where the second target image is an image captured corresponding to the second interface.
Optionally, the processor 910 is configured to splice the M images according to a screenshot sequence of the M images to obtain a spliced image;
a display unit 906 configured to display content of a region corresponding to any one of the T first images in the stitched image;
a user input unit 907 for receiving a third input of the user to a target sub-area in the first area in a case where the content of the first area corresponding to the first target image is displayed; the target sub-area is a sub-area corresponding to an input position of the first control input in the first area;
the display unit 906 is further configured to display content of a second area corresponding to a second target image in response to the third input if the input manner of the third input matches the input manner of the first manipulation input.
Optionally, the processor 910 is configured to generate, according to the P pieces of second manipulation text information respectively corresponding to the P second manipulation inputs, P pieces of annotation information respectively corresponding to the P second manipulation inputs; the annotation information comprises at least one item of sequencing information, input track information and input mode information, and the sequencing information is determined according to the receiving sequence corresponding to the P second control inputs;
a display unit 906 configured to display the P pieces of annotation information in the second image;
wherein P is a positive integer and is more than or equal to 2.
Optionally, the processor 910 is configured to determine, according to the screenshot sequence of the M images, index values respectively corresponding to the M images;
a memory 909 for storing the index value and the N pieces of manipulation text information in association;
the processor 910 is further configured to obtain T target index values stored in association with the first manipulated text information; determining the T first images corresponding to the T target index values from the M images.
Therefore, the user can quickly determine one or more images corresponding to the control text information and display the images by selecting the control text information corresponding to the required control input from the displayed N control text information, so that the user can quickly acquire the required images from the interception result. In addition, by generating and displaying the annotation information of the control input corresponding to the image in the image, the dynamic operation process of the user can be more intuitively displayed, particularly the screenshot display of the interface with multiple control inputs, and the control input process of the user can be dynamically displayed in a static image mode.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042, and the graphics processing unit 9041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, and the display panel 9061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 910 may integrate an application processor, which primarily handles the operating system, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communication. It is to be appreciated that the modem processor may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned image capturing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned image capture method embodiment, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may also include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image capture method, comprising:
receiving N control inputs of a user to a display interface;
generating N pieces of manipulation text information corresponding to the N pieces of manipulation input in response to the N pieces of manipulation input; the control text information comprises text information corresponding to at least one of an input mode, an input position and a trigger component;
performing screenshot on M interfaces corresponding to the N control inputs to obtain M images;
displaying the N pieces of control text information and the M pieces of images in an associated mode;
wherein N, M is a positive integer.
2. The method of claim 1, wherein the screenshot of the M interfaces corresponding to the N manipulation inputs resulting in M images comprises:
determining whether the display interface is switched at least once in the process of receiving the N control inputs;
and under the condition that the display interface is switched for at least one time, respectively carrying out screenshot on the interface displayed before switching and the interface displayed after switching to obtain the M images.
3. The method of claim 1, wherein the associating displays the N manipulated text messages and the M images, comprising:
displaying the N pieces of control text information;
receiving a first input of a user to a first control text message in the N control text messages; the first control text information is control text information corresponding to a first control input in the N control inputs;
determining, in response to the first input, T first images corresponding to the first manipulation text information from the M images;
displaying the T first images;
wherein T is a positive integer and is less than or equal to M.
4. The method of claim 3, wherein in the event that the first manipulation input is an input that controls the display interface to switch from a first interface to a second interface, the displaying the T first images comprises:
displaying any one of the T first images;
receiving a second input of a user to a target area in a first target image in the T first images under the condition that the first target image is displayed; the first target image is an image intercepted corresponding to the first interface, and the target area is an area corresponding to an input position of the first control input in the first target image;
and under the condition that the input mode of the second input is matched with the input mode of the first control input, responding to the second input, and switching to display a second target image, wherein the second target image is an image intercepted corresponding to the second interface.
5. The method of claim 3, wherein in a case that the first manipulation input is an input for controlling the display interface to be switched from a first interface to a second interface, after capturing the M interfaces corresponding to the N manipulation inputs to obtain M images, the method further comprises:
splicing the M images according to the screenshot sequence of the M images to obtain a spliced image;
the displaying the T first images includes:
displaying the content of a region corresponding to any one of the T first images in the spliced image;
receiving a third input of a user to a target sub-area in a first area under the condition that the content of the first area corresponding to a first target image is displayed; the target sub-area is a sub-area corresponding to an input position of the first control input in the first area;
and under the condition that the input mode of the third input is matched with the input mode of the first control input, responding to the third input, and displaying the content of a second area corresponding to a second target image.
6. The method of claim 1, wherein, in a case that a second image of the M images is displayed and a screenshot interface corresponding to the second image is the interface displayed when P second manipulation inputs are received, the displaying the N manipulation text information and the M images in association further comprises:
generating P pieces of annotation information respectively corresponding to the P second control inputs according to P pieces of second control text information respectively corresponding to the P second control inputs; the annotation information comprises at least one item of sequencing information, input track information and input mode information, and the sequencing information is determined according to the receiving sequence corresponding to the P second control inputs;
displaying the P pieces of annotation information in the second image;
wherein P is a positive integer and is more than or equal to 2.
7. The method of claim 3, wherein after screenshot the M interfaces corresponding to the N manipulation inputs, resulting in M images, the method further comprises:
determining index values corresponding to the M images respectively according to the interception sequence of the M images;
storing the index value and the N pieces of control text information in an associated manner;
the determining, from the M images, T first images corresponding to the first manipulated text information includes:
acquiring T target index values stored in association with the first control text information;
determining the T first images corresponding to the T target index values from the M images.
8. An image capture device, comprising:
the input receiving module is used for receiving N control inputs of a user to the display interface;
the information generating module is used for responding to the N control inputs and generating N control text information corresponding to the N control inputs; the control text information comprises text information corresponding to at least one of an input mode, an input position and a trigger component;
the interface screenshot module is used for screenshot for M interfaces corresponding to the N control inputs to obtain M images;
the associated display module is used for displaying the N pieces of control text information and the M pieces of images in an associated mode;
wherein N, M is a positive integer.
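The four modules of claim 8 form one pipeline: receive an input, generate its text information, capture the interface, and display text and image in association. A minimal sketch of that structure, with a pluggable capture backend since the patent does not specify one (all names hypothetical):

```python
class ImageCaptureDevice:
    """Sketch of claim 8's four modules combined into one pipeline."""

    def __init__(self, screenshot_fn):
        self.screenshot_fn = screenshot_fn  # injected interface-screenshot backend
        self.texts = []   # N pieces of control text information
        self.images = []  # M captured images

    def receive_input(self, control_input):
        # Input receiving module: accept one control input on the display interface.
        # Information generating module: derive text from input mode and position.
        text = f"{control_input['mode']} at {control_input['position']}"
        self.texts.append(text)
        # Interface screenshot module: capture the interface for this input.
        self.images.append(self.screenshot_fn())
        return text

    def associated_display(self):
        # Associated display module: pair each text with its image.
        return list(zip(self.texts, self.images))
```

In a real device the backend would capture the current screen; here any callable returning an image object stands in for it.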
9. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image capturing method according to any one of claims 1 to 7.
10. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the image capturing method according to any one of claims 1 to 7.
CN202110812054.8A 2021-07-19 2021-07-19 Image capturing method and device, electronic equipment and storage medium Pending CN113485621A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110812054.8A CN113485621A (en) 2021-07-19 2021-07-19 Image capturing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113485621A true CN113485621A (en) 2021-10-08

Family

ID=77942205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110812054.8A Pending CN113485621A (en) 2021-07-19 2021-07-19 Image capturing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113485621A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110138313A1 (en) * 2009-12-03 2011-06-09 Kevin Decker Visually rich tab representation in user interface
CN109857674A (en) * 2019-02-27 2019-06-07 上海优扬新媒信息技术有限公司 A kind of recording and playback test method and relevant apparatus
CN110222212A (en) * 2019-04-25 2019-09-10 南京维沃软件技术有限公司 A kind of display control method and terminal device
CN112099706A (en) * 2020-09-04 2020-12-18 深圳市欢太科技有限公司 Page display method and device, electronic equipment and computer readable storage medium


Similar Documents

Publication Publication Date Title
CN112988006B (en) Display method, display device, electronic equipment and storage medium
CN111857460A (en) Split screen processing method, split screen processing device, electronic equipment and readable storage medium
CN112099684A (en) Search display method and device and electronic equipment
CN112433693B (en) Split screen display method and device and electronic equipment
CN113467660A (en) Information sharing method and electronic equipment
CN113360062A (en) Display control method and device, electronic equipment and readable storage medium
CN113805996A (en) Information display method and device
CN113485599A (en) Display control method, display control device, electronic device, and medium
CN112911401A (en) Video playing method and device
CN113179205A (en) Image sharing method and device and electronic equipment
CN115658197A (en) Interface switching method and interface switching device
CN113268182B (en) Application icon management method and electronic device
CN112399010B (en) Page display method and device and electronic equipment
CN114116098A (en) Application icon management method and device, electronic equipment and storage medium
CN114374663B (en) Message processing method and message processing device
CN115543176A (en) Information processing method and device and electronic equipment
CN113485621A (en) Image capturing method and device, electronic equipment and storage medium
CN113784192A (en) Screen projection method, screen projection device and electronic equipment
CN113778279A (en) Screenshot method and device and electronic equipment
CN112765500A (en) Information searching method and device
CN112818094A (en) Chat content processing method and device and electronic equipment
CN112948844A (en) Control method and device and electronic equipment
CN113037618B (en) Image sharing method and device
CN113726953B (en) Display content acquisition method and device
CN117032537A (en) Image processing method, processing device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination