CN117331474A - Screenshot generation method and device, electronic equipment and storage medium - Google Patents

Screenshot generation method and device, electronic equipment and storage medium

Info

Publication number
CN117331474A
CN117331474A (application CN202311308386.8A)
Authority
CN
China
Prior art keywords
screenshot
input
target
video
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311308386.8A
Other languages
Chinese (zh)
Inventor
蒙伟雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202311308386.8A priority Critical patent/CN117331474A/en
Publication of CN117331474A publication Critical patent/CN117331474A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a screenshot generation method and apparatus, an electronic device, and a storage medium, and belongs to the field of computer technology. The screenshot generation method comprises the following steps: receiving a first input from a user while a screen recording is in progress; and, in response to the first input, acquiring the recorded screen video and N screenshots corresponding to N second inputs, where N is an integer greater than 1; the N second inputs are inputs received during the screen recording, and the N screenshots are the video frames in the recorded video that correspond to the N second inputs.

Description

Screenshot generation method and device, electronic equipment and storage medium
Technical Field
The application belongs to the field of computer technology, and in particular relates to a screenshot generation method and apparatus, an electronic device, and a storage medium.
Background
As the functions of electronic devices become more and more complete, users place ever higher functional demands on them.
Currently, when a user wants to obtain multiple screenshots, multiple screenshot operations are required, each of which produces a single screenshot. The process is therefore cumbersome and inefficient.
Disclosure of Invention
The embodiments of the application aim to provide a screenshot generation method and apparatus, an electronic device, and a storage medium with which multiple screenshots can be obtained through a single input, making the operation simple and efficient.
In a first aspect, an embodiment of the present application provides a screenshot generating method, where the method includes:
receiving a first input from a user while a screen recording is in progress;
in response to the first input, acquiring the recorded screen video and N screenshots corresponding to N second inputs, where N is an integer greater than 1;
the N second inputs are inputs received during the screen recording, and the N screenshots are the video frames in the recorded video that correspond to the N second inputs.
In a second aspect, an embodiment of the present application provides a screenshot generating apparatus, including:
the first receiving module is used for receiving a first input from a user while a screen recording is in progress;
the first acquisition module is used for acquiring, in response to the first input, the recorded screen video and N screenshots corresponding to N second inputs, where N is an integer greater than 1;
the N second inputs are inputs received during the screen recording, and the N screenshots are the video frames in the recorded video that correspond to the N second inputs.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, while a screen recording is in progress, a first input from a user can be received and responded to by acquiring the recorded screen video and N screenshots corresponding to N second inputs received during the recording, where the N screenshots are the video frames in the recorded video that correspond to the N second inputs. Multiple screenshots can thus be obtained through a single input, making the operation simple and efficient.
Drawings
FIG. 1 is a first flowchart of a screenshot generation method according to an exemplary embodiment;
FIG. 2 is a first scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 3 is a second scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 4 is a third scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 5 is a fourth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 6 is a fifth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 7 is a sixth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 8 is a seventh scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 9 is an eighth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 10 is a ninth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 11 is a tenth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 12 is an eleventh scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 13 is a twelfth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 14 is a thirteenth scene schematic diagram of a screenshot generation method according to an exemplary embodiment;
FIG. 15 is a second flowchart of a screenshot generation method according to an exemplary embodiment;
FIG. 16 is a structural block diagram of a screenshot generating apparatus according to an exemplary embodiment;
FIG. 17 is a block diagram of an electronic device according to an exemplary embodiment;
FIG. 18 is a schematic diagram of the hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
Technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the scope of protection of the present application.
The terms "first," "second," and the like in the description and in the claims are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. The objects identified by "first," "second," etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
By way of background, with the development of electronic devices and internet applications, users make increasingly frequent use of screenshot functions. Taking a screenshot mainly means generating an instantaneous image of a specific screen displayed on an electronic device (e.g., a smartphone or tablet computer).
To meet users' need for quick screenshots while using an electronic device, device manufacturers have designed various convenient operations for triggering a screenshot. Currently, a screenshot can be triggered in several ways, such as clicking a screenshot control, a three-finger swipe-down gesture, or pressing the power key and a volume key simultaneously.
Although the prior art offers a variety of operations that can trigger a screenshot, each operation produces only one screenshot. When the user needs multiple screenshots, the operation must be repeated many times, which is tedious, repetitive, and inefficient.
Embodiments of the application provide a screenshot generation method and apparatus, an electronic device, and a storage medium that, while a screen recording is in progress, can receive and respond to a first input from a user by acquiring the recorded screen video and N screenshots corresponding to N second inputs received during the recording, the N screenshots being the video frames in the recorded video that respectively correspond to the N second inputs. Multiple screenshots can thus be obtained through a single input, making the operation simple and efficient.
The screenshot generation method, apparatus, electronic device, and storage medium provided by the embodiments of the application are described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
The screenshot generation method provided by the embodiments of the application can be applied to scenarios in which a user wants to obtain multiple screenshots.
The screenshot generation method provided by the embodiments of the application is described in detail below with reference to fig. 1 to 15. In this method, the execution subject may be the second electronic device; it should be noted that the execution subject does not constitute a limitation of the present application.
FIG. 1 is a flowchart illustrating a screenshot generation method according to an example embodiment.
As shown in fig. 1, the screenshot generating method may include the steps of:
step 110, in the case of a screen recording, a first input of a user is received.
Here, the first input may be an input that triggers the second electronic device to end the screen recording. For example, the first input may be a single click on the "end screen recording" control. The second electronic device may be a mobile phone.
In step 120, in response to the first input, the recorded screen video and N screenshots corresponding to N second inputs are acquired.
Here, N may be an integer greater than 1. The N second inputs may be inputs received during the recording, and the N screenshots may be the video frames in the recorded video that respectively correspond to the N second inputs. Note that not all of the user's inputs during the recording are necessarily screenshot inputs.
Each second input may correspond to one video frame. The video frame corresponding to a second input may be the frame at the moment just before the input, the frame at the moment just after the input, or the frame at the moment of the input itself.
Thus, while a screen recording is in progress, a first input from the user can be received and responded to by acquiring the recorded screen video and N screenshots corresponding to N second inputs received during the recording, the N screenshots being the video frames in the recorded video that correspond to the N second inputs. Multiple screenshots can therefore be obtained through a single input, making the operation simple and efficient.
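As a concrete illustration of this frame-selection step, the sketch below maps each recorded input timestamp to a frame index in the recorded video. The 30 fps default, the class and function names, and the simple offset policy are assumptions made for illustration, not details taken from this application.

```kotlin
// Minimal sketch: map recorded input timestamps to frame indices in the recorded video.
// Frame rate, names, and the offset policy are illustrative assumptions.
data class RecordedInput(val timestampMs: Long)

fun frameIndexFor(input: RecordedInput, fps: Int = 30, frameOffset: Int = 0): Int {
    val current = (input.timestampMs * fps / 1000).toInt()   // frame at the moment of the input
    return maxOf(0, current + frameOffset)                   // -1 / 0 / +1 -> previous, current, or next frame
}

fun screenshotFramesFor(inputs: List<RecordedInput>, fps: Int = 30): List<Int> =
    inputs.map { frameIndexFor(it, fps) }
```

A real implementation would also clamp the index to the length of the recording and read the selected frame from the recorded video itself.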
In an alternative embodiment, prior to step 110, the method may further comprise:
receiving a fourth input from the user;
in response to the fourth input, a screen recording is initiated.
Here, the fourth input may be an input that triggers the second electronic device to start the screen recording. For example, the fourth input may be a single click on the "start screen recording" control.
In addition to recording the video, the second inputs may be recorded during the recording process, and voice information may be captured via the microphone.
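A minimal data model for what such a recording session might capture is sketched below; the type and field names are illustrative assumptions rather than anything specified in this application.

```kotlin
// Illustrative sketch of the data captured during a recording session: the screen video,
// the second inputs received during recording, and microphone audio. Names are assumptions.
data class RecordingSession(
    val videoFrames: MutableList<ByteArray> = mutableListOf(),    // recorded screen frames
    val secondInputTimesMs: MutableList<Long> = mutableListOf(),  // timestamps of second inputs
    val voiceChunks: MutableList<ByteArray> = mutableListOf()     // microphone audio chunks
)
```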
In some examples, the second electronic device may be a mobile phone. Suppose the user wants to introduce an operation flow of the phone through multiple screenshots. The user can click the start-recording control and the phone starts screen recording, as shown in fig. 2. During the recording the user can perform N second inputs to demonstrate the operation flow; when the user then clicks the end-recording control 210, the phone automatically acquires the recorded video and the N screenshots corresponding to the N second inputs.
In an alternative embodiment, step 120 may include:
determining, from the plurality of second inputs, N second inputs that meet a preset condition;
selecting the video frames corresponding to the N second inputs from the recorded video to obtain the N screenshots.
Here, the preset condition may include an input on a preset control and/or an input in a preset area. The preset control and preset area can be set according to actual requirements and are not limited here.
That is, in one possible implementation, a corresponding video frame is selected as a screenshot for every second input; in another possible implementation, only the video frames corresponding to the second inputs that satisfy a preset condition are selected as screenshots from among the plurality of second inputs. The preset condition may be that the input type is a preset input type, that the input position is within a preset position range, or that the input duration exceeds a target threshold; the preset condition is not limited in this implementation.
Illustratively, the preset control may be a heart-shaped "love" control. As shown in fig. 3, while browsing a photo during the screen recording, the user clicks the love control 310, i.e., performs a second input; then, as shown in fig. 4, the color of the love control 310 changes, and the video frame after the love control 310 has been clicked and its color has changed can be selected as the screenshot.
In this way, according to the user's needs, only the video frames corresponding to the second inputs that meet the preset condition are selected as screenshots; interference from the other second inputs is eliminated, and the resulting screenshots match what the user requires.
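The filtering described above can be pictured with the small sketch below, which keeps only the inputs that hit a target control, fall inside a preset region, or last longer than a threshold. All names, the one-dimensional region simplification, and the condition fields are assumptions made for illustration.

```kotlin
// Hedged sketch of "preset condition" filtering over recorded second inputs.
data class TouchInput(
    val x: Float,
    val y: Float,
    val durationMs: Long,
    val hitControlId: String?                                // control that responded to the input, if any
)

data class PresetCondition(
    val controlId: String? = null,                           // input on a preset control
    val xRange: ClosedFloatingPointRange<Float>? = null,     // preset area, simplified to an x-range
    val minDurationMs: Long? = null                          // input duration threshold
)

fun TouchInput.matches(c: PresetCondition): Boolean {
    val controlOk = c.controlId == null || hitControlId == c.controlId
    val areaOk = c.xRange?.let { x in it } ?: true
    val durationOk = c.minDurationMs?.let { durationMs >= it } ?: true
    return controlOk && areaOk && durationOk
}

fun selectScreenshotInputs(inputs: List<TouchInput>, c: PresetCondition): List<TouchInput> =
    inputs.filter { it.matches(c) }
```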
In an alternative embodiment, after step 120, the method may further include:
displaying the N screenshots in categories according to the input types of the second inputs respectively corresponding to the N screenshots.
Here, the second electronic device may receive a fifth input from the user and, in response to the fifth input, display the N screenshots in categories according to the input types of the second inputs respectively corresponding to them.
The fifth input may be an input for entering the screenshot record page. For example, after the user clicks the "end screen recording" control, a floating window of the recorded video may be displayed, and the fifth input may be a double click on that floating window.
For example, the input types of the second input may include single-finger single click, single-finger double click, single-finger long press, single-finger left slide, single-finger right slide, single-finger up slide, single-finger down slide, double-finger single click, double-finger double click, double-finger long press, double-finger left slide, double-finger right slide, double-finger up slide, double-finger down slide, three-finger single click, three-finger double click, three-finger long press, three-finger left slide, three-finger right slide, three-finger up slide, three-finger down slide, stretch, shrink, left rotation, and right rotation, as shown in fig. 5.
For example, after the user clicks the "end screen recording" control, the phone may further display the interface shown in fig. 6, in which a floating window 610 of the recorded video appears. If the user clicks the floating window 610, the phone plays the recorded video; if the user double-clicks the floating window 610, the phone displays the screenshot record page shown in fig. 7, which contains the N video frames selected from the recorded video that respectively correspond to the N second inputs, i.e., the N screenshots, displayed in categories according to the input type of each second input.
Further, based on the user's selection, the display can also be switched to categorization by application program, as shown in fig. 8, or to an uncategorized display, as shown in fig. 9.
In the former case, the N screenshots are categorized according to the application program each screenshot corresponds to.
When the display is uncategorized, the N screenshots may be shown in chronological order from earliest to latest; when it is categorized, the screenshots within each category may likewise be shown in chronological order.
Displaying the N screenshots in categories according to the input type of the second inputs therefore helps the user find the desired screenshot quickly.
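A small grouping sketch along these lines is shown below; the gesture enumeration is abbreviated and all names are illustrative assumptions.

```kotlin
// Illustrative sketch: bucket screenshots by the gesture type of the input that produced
// them, then order each bucket chronologically, matching the categorized display above.
enum class GestureType { SINGLE_CLICK, DOUBLE_CLICK, LONG_PRESS, SWIPE_LEFT, SWIPE_RIGHT, STRETCH, SHRINK }

data class CapturedShot(val frameIndex: Int, val timestampMs: Long, val gesture: GestureType)

fun classifyByGesture(shots: List<CapturedShot>): Map<GestureType, List<CapturedShot>> =
    shots.groupBy { it.gesture }
         .mapValues { (_, group) -> group.sortedBy { it.timestampMs } } // earliest first within each category
```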
In an alternative embodiment, after step 120, the method may further include:
receiving a sixth input from the user on a sixth screenshot of the N screenshots;
in response to the sixth input, updating the sixth screenshot to a video frame in the recorded video adjacent to the sixth screenshot.
Here, the sixth screenshot may be any one of the N screenshots. The adjacent video frame may be a frame before or a frame after the video frame corresponding to the sixth screenshot.
Specifically, the sixth input may be an input that updates the sixth screenshot to a video frame preceding its corresponding frame in the recorded video, or an input that updates it to a video frame following that frame.
For example, after the user clicks the sixth screenshot in the screenshot record interface, the phone may display the interface shown in fig. 10, in which the sixth screenshot is expanded and a "last second screenshot" control 1010 and a "next second screenshot" control 1020 are displayed. If the user clicks the "last second screenshot" control 1010, the sixth screenshot is updated to the frame one second earlier; if the user clicks the "next second screenshot" control 1020, it is updated to the frame one second later.
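The replacement step can be sketched as below; the one-second step and the clamping to the recording's length are assumptions based on the example above.

```kotlin
// Sketch of the "last second / next second" replacement: shift the screenshot's frame
// index by one second of frames in either direction, clamped to the recording.
fun shiftScreenshotFrame(currentFrame: Int, fps: Int, totalFrames: Int, forward: Boolean): Int {
    val step = if (forward) fps else -fps                 // one second forward or back
    return (currentFrame + step).coerceIn(0, totalFrames - 1)
}
```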
In an alternative embodiment, after step 120, the method may further include:
displaying a first screenshot of the N shots, which may be at least one of the N shots;
displaying mark controls, where the mark controls may include at least two of a region mark control, a gesture mark control, and a character mark control;
receiving a third input of a user to a target mark control in the mark controls;
responding to the third input, marking the target object in the first screenshot, and obtaining a second screenshot;
the target object may be an object corresponding to the target mark control;
when the target mark control includes a region mark control, the target object may include a region identifier, where the region identifier may indicate an input position of an input corresponding to the first screenshot in the first video frame, and the first screenshot may correspond to the first video frame;
In the case where the target mark control includes a gesture mark control, the target object may include a gesture identifier, which may indicate an input gesture of the input corresponding to the first screenshot;
in the case where the target mark control includes a character mark control, the target object may include text, which may indicate voice information obtained when the input corresponding to the first screenshot is received.
Here, the target mark control may include at least one of a plurality of mark controls.
Specifically, the input gesture of the input corresponding to each first screenshot may include, but is not limited to, one of a single-finger single click, single-finger double click, single-finger long press, single-finger left slide, single-finger right slide, single-finger up slide, single-finger down slide, double-finger single click, double-finger double click, double-finger long press, double-finger left slide, double-finger right slide, double-finger up slide, double-finger down slide, three-finger single click, three-finger double click, three-finger long press, three-finger left slide, three-finger right slide, three-finger up slide, three-finger down slide, stretch, shrink, left rotation, and right rotation.
The third input may be an input selecting the target mark control from the plurality of mark controls, for example a click on at least one of the region mark control, the gesture mark control, and the character mark control.
Illustratively, as shown in fig. 9, the user selects at least one screenshot, i.e., the first screenshot, in the screenshot record interface and clicks the character mark control 910, the region mark control 920, and the gesture mark control 930, then clicks the "save screenshot" control 940. The second electronic device then marks the text, region identifier, gesture identifier, and so on in the first screenshot to obtain the second screenshot, which may then be displayed and saved.
In an alternative embodiment, the input positions of the second inputs may be recorded during the screen recording. When the target mark control includes the region mark control, the input position of the second input corresponding to the first screenshot can be looked up among the input positions recorded during the screen recording and marked in the first screenshot with a region identifier.
For a second input that is a single click, double click, or long press, the second electronic device may identify the hot zone of the interactive control that responds to the second input, i.e., the input position, and mark that hot zone with a frame in a color that does not blend into the background. Illustratively, as shown in fig. 11, region identifier 1120 may be used to mark the input position of a single click on the "m+" control.
For a second input that is an up, down, left, or right slide, the second electronic device may identify the hot zone of the interactive control that responds to the second input and mark it with a frame in a color that does not blend into the background; if no interactive control responds to the second input, no hot zone is marked and the slide path of the second input is instead marked with a line segment in a color that does not blend into the background. Illustratively, as shown in fig. 13, region identifier 1310 may be used to mark the input position of a left-slide input.
For a second input that is a stretch, shrink, left rotation, or right rotation, the second electronic device may identify the hot zone of the interactive control that responds to the input and mark it with a frame in a color that does not blend into the background; if no interactive control responds to the second input, no hot zone is marked and the slide path of each finger is instead marked with its own line segment, in a color that does not blend into the background. Illustratively, as shown in fig. 14, region identifier 1410 may be used to mark the input position of a shrink input.
In addition, the color of the region identifier may be selected according to preset priorities. For example, suppose red has priority 5 and blue has priority 4. When selecting a color, if red does not blend into the background of the first screenshot, red is chosen as the color of the region identifier; if red does blend in, blue is checked next and chosen if it does not blend in; otherwise the check continues with the colors of lower priority.
Illustratively, as shown in fig. 11, in the first screenshot, the input location of the second input may be marked with a region identification 1120.
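A compact sketch of this priority-based color choice is given below; the candidate palette and the blending test are placeholders supplied by the caller, not the implementation described in this application.

```kotlin
// Illustrative sketch: pick the highest-priority color that does not blend into the
// screenshot background; the blending test is supplied by the caller.
data class ColorCandidate(val name: String, val priority: Int)

fun pickRegionColor(
    candidates: List<ColorCandidate>,
    blendsWithBackground: (String) -> Boolean
): String? =
    candidates.sortedByDescending { it.priority }            // e.g. red (5) before blue (4)
        .firstOrNull { !blendsWithBackground(it.name) }
        ?.name

// Example: red blends into the background, so blue is chosen.
// pickRegionColor(listOf(ColorCandidate("red", 5), ColorCandidate("blue", 4))) { it == "red" }  // -> "blue"
```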
In an alternative embodiment, the input gestures of the second inputs may be recorded during the screen recording. When the target mark control includes the gesture mark control, the input gesture of the second input corresponding to the first screenshot can be looked up among the input gestures recorded during the screen recording and marked in the first screenshot with a gesture identifier.
For a second input that is a single click, double click, or long press, the second electronic device may identify the screen coordinates that respond to the input, look up the gesture identifier corresponding to the second input among pre-stored gesture identifiers, and align the finger center of that gesture identifier with the screen coordinates of the second input. For example, as shown in fig. 11, gesture identifier 1130 may be used to mark the input gesture of clicking the "m+" control.
For a second input that is an up, down, left, or right slide, the second electronic device may identify the start coordinates on the screen of the second input, look up the corresponding gesture identifier among pre-stored gesture identifiers, and align the finger center of that gesture identifier with the start coordinates. For example, as shown in fig. 13, gesture identifier 1320 may be used to mark the input gesture of a left-slide input.
For a second input that is a stretch, shrink, left rotation, or right rotation, the second electronic device may identify the start coordinates of the two fingers on the screen, take the midpoint of the two start coordinates, look up the corresponding gesture identifier among pre-stored gesture identifiers, and align the two-finger center point of that gesture identifier with the midpoint. For example, as shown in fig. 14, gesture identifier 1420 may be used to mark a shrink input gesture.
For example, as shown in fig. 11, in the first screenshot, the input gesture of the second input may be marked using gesture identification 1130.
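The alignment of a pre-stored gesture identifier with the recorded coordinates can be sketched as below; the point and icon types, and the top-left placement computation, are illustrative assumptions.

```kotlin
// Sketch of anchoring a pre-stored gesture identifier on the screenshot: single-finger
// gestures align the icon's finger center with the (start) touch point, two-finger
// gestures align it with the midpoint of the two start points.
data class Point(val x: Float, val y: Float)
data class GestureIcon(val fingerCenter: Point)   // finger (or two-finger) center within the icon

fun singleFingerAnchor(start: Point): Point = start

fun twoFingerAnchor(start1: Point, start2: Point): Point =
    Point((start1.x + start2.x) / 2f, (start1.y + start2.y) / 2f)

fun iconTopLeft(anchor: Point, icon: GestureIcon): Point =
    Point(anchor.x - icon.fingerCenter.x, anchor.y - icon.fingerCenter.y) // place icon so its center sits on the anchor
```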
Text, region identifiers, gesture identifiers, and so on can therefore be marked in screenshots simply by selecting the corresponding mark controls, without manual annotation, and multiple screenshots can be marked in a batch rather than one by one, which improves marking efficiency.
In an alternative embodiment, the target mark control may include the character mark control, and marking the target object in the first screenshot in response to the third input to obtain the second screenshot may include:
in response to the third input, marking the target object in the first screenshot and a third screenshot to obtain a plurality of second screenshots.
Here, the target object may be text, and the text may indicate a first sentence;
the voice information received at the input time corresponding to the third screenshot and the voice information received at the input time corresponding to the first screenshot may together form a semantically complete first sentence.
Specifically, voice information can be recorded during the screen recording. When the target mark control includes the character mark control, the voice information can be converted into a first text by speech recognition, and the first text can then be split at punctuation marks, with each punctuation mark treated as the end of a sentence. For each sentence, the corresponding text can be marked in the one or more screenshots covered by that sentence's voice information, i.e., the screenshots whose capture times fall within the time period of that voice information. The screenshots covered by the voice information corresponding to the first sentence may include the first screenshot and the third screenshot.
For example, if the voice information corresponding to sentence A lasts two seconds and the second inputs during those two seconds involve screenshot A, screenshot B, and screenshot C, the text corresponding to sentence A may be marked in each of screenshot A, screenshot B, and screenshot C.
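The sentence-to-screenshot mapping described above can be sketched as follows; the recognized-sentence time ranges, field names, and mutable label list are assumptions made for illustration.

```kotlin
// Hedged sketch: attach each recognized sentence to every screenshot whose capture time
// falls within that sentence's time range (the screenshots the sentence "covers").
data class SpokenSentence(val text: String, val startMs: Long, val endMs: Long)
data class LabeledShot(val frameIndex: Int, val timestampMs: Long, val labels: MutableList<String> = mutableListOf())

fun attachSentences(sentences: List<SpokenSentence>, shots: List<LabeledShot>) {
    for (s in sentences) {
        shots.filter { it.timestampMs in s.startMs..s.endMs }   // screenshots covered by sentence s
             .forEach { it.labels += s.text }                   // e.g. sentence A -> screenshots A, B, and C
    }
}
```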
For example, a first screenshot after marking text may be as shown in FIG. 11, with the marked text being text 1110.
In addition, to ensure that the text marked in the screenshot remains visible, the text style is chosen to contrast with the screenshot background: one style (for example, black text on a white box) is used when the screenshot has a dark background, and the opposite style when it has a light background. The opacity of the text background may also be set, for example to 60%.
For example, if the first screenshot is a light background, the first screenshot after marking the text may be as shown in fig. 11; if the first screenshot is a dark background, the first screenshot after marking the text can be as shown in fig. 12.
Text can therefore be marked on screenshots automatically based on the voice information recorded during the screen recording, without the user having to type and add it manually, which keeps the operation simple and efficient.
In an alternative embodiment, the text may be displayed directly below the gesture identifier; if there is no gesture identifier, directly below the region identifier; if there is neither a gesture identifier nor a region identifier, in the lower third of the first screenshot; and if the space below the gesture identifier or region identifier is insufficient for the text, the text may be displayed in the lower third of the first screenshot while avoiding the gesture identifier and the region identifier.
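These placement rules can be summarized in the small sketch below; the geometry type and the simple height check are placeholders standing in for real layout code.

```kotlin
// Sketch of the text-placement fallback: below the gesture identifier, else below the
// region identifier, else (or when space runs out) in the lower third of the screenshot.
data class Box(val left: Float, val top: Float, val right: Float, val bottom: Float)

fun textTopY(screenshotHeight: Float, gestureId: Box?, regionId: Box?, textHeight: Float): Float {
    val below = gestureId?.bottom ?: regionId?.bottom
    return if (below != null && below + textHeight <= screenshotHeight) {
        below                                // directly below the identifier
    } else {
        screenshotHeight * 2f / 3f           // lower third of the first screenshot
    }
}
```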
In an alternative embodiment, after marking the target object in the first screenshot in response to the third input and obtaining the second screenshot, the method may further include:
acquiring at least one marked fourth screenshot and at least one unmarked fifth screenshot, wherein the at least one fourth screenshot can comprise a second screenshot;
determining a target sequence according to the sequence of video frames corresponding to at least one fourth screenshot and at least one fifth screenshot in the video;
synthesizing at least one fourth screenshot and at least one fifth screenshot according to a target sequence to obtain a target video;
sending the target video to the first electronic device.
Here, the target order may be an order of video frames corresponding to at least one fourth screenshot and at least one fifth screenshot from first to last in the video recording. The first electronic device may be an electronic device of another user.
For example, the marked at least one fourth screenshot and the unmarked at least one fifth screenshot may be screenshots that introduce an operation flow of the phone. Ordering them according to the target order and synthesizing them yields a target video that introduces that operation flow; the target video can then be sent to another user's first electronic device as a guide.
A video can therefore be generated from multiple screenshots quickly, without the user having to assemble it manually from the screenshots, which is more efficient and convenient.
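A minimal sketch of this composition step is shown below; the frame type, the encoder, and the sending function are placeholders supplied by the caller, not APIs named in this application.

```kotlin
// Illustrative sketch: merge marked and unmarked screenshots, order them by their source
// frame index in the recording (the target order), encode, and send to another device.
data class ComposedShot(val frameIndex: Int, val marked: Boolean)

fun composeAndSendTargetVideo(
    markedShots: List<ComposedShot>,
    unmarkedShots: List<ComposedShot>,
    encode: (List<ComposedShot>) -> ByteArray,   // stand-in for the video synthesis step
    sendToDevice: (ByteArray) -> Unit            // stand-in for transfer to the first electronic device
) {
    val ordered = (markedShots + unmarkedShots).sortedBy { it.frameIndex }
    sendToDevice(encode(ordered))
}
```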
To better describe the overall solution, based on the above embodiments, as a specific example, as shown in fig. 15, the screenshot generating method may include steps 1501 to 1513, which are explained in detail below.
Step 1501, a fourth input to begin a screen recording is received.
In step 1502, a screen recording is started in response to the fourth input; the second inputs and voice information are recorded during the screen recording.
Step 1503, receiving a first input ending the screen recording;
In step 1504, the recorded screen video and N screenshots corresponding to N second inputs are acquired in response to the first input.
In step 1505, the N screenshots are classified and displayed according to the input types of the second inputs corresponding to them.
In step 1506, a first screenshot of the N screenshots is displayed.
At step 1507, the markup control is displayed.
At step 1508, a third input from the user to a target one of the markup controls is received.
In step 1509, in response to the third input, the target object is marked in the first screenshot, resulting in a second screenshot.
At step 1510, at least one fourth screenshot after marking and at least one fifth screenshot without marking are obtained.
Step 1511, determining a target sequence according to the sequence of the video frames corresponding to the at least one fourth screenshot and the at least one fifth screenshot in the video.
And step 1512, synthesizing at least one fourth screenshot and at least one fifth screenshot according to the target sequence to obtain the target video.
At step 1513, the target video is sent to the first electronic device.
Thus, while a screen recording is in progress, a first input from the user can be received and responded to by acquiring the recorded screen video and N screenshots corresponding to N second inputs received during the recording, the N screenshots being the video frames in the recorded video that correspond to the N second inputs. Multiple screenshots can therefore be obtained through a single input, making the operation simple and efficient.
In the screenshot generation method provided by the embodiments of the application, the execution subject may be a screenshot generating apparatus. In the embodiments of the application, the screenshot generating apparatus provided herein is described by taking the case where the screenshot generating apparatus executes the screenshot generation method as an example.
Based on the same inventive concept, the application also provides a screenshot generating device. The screenshot generating apparatus provided in the embodiment of the present application is described in detail below with reference to fig. 16.
Fig. 16 is a block diagram illustrating a structure of a screenshot generating apparatus according to an exemplary embodiment.
As shown in fig. 16, the screenshot generating apparatus 1600 may include:
a first receiving module 1601, configured to receive a first input of a user in a case of screen recording;
a first obtaining module 1602, configured to obtain, in response to a first input, a video recording and N shots corresponding to N second inputs, where N is an integer greater than 1;
the N second inputs are inputs received during the screen recording, and the N screenshots are the video frames in the recorded video that correspond to the N second inputs.
The screenshot generating apparatus 1600 will be described in detail, and is specifically as follows:
in one embodiment, the screenshot generating apparatus 1600 may further include:
the first display module is used for classifying and displaying the N screen shots according to the input types of the second inputs respectively corresponding to the N screen shots after the screen-recorded video and the N screen shots corresponding to the N second inputs are acquired in response to the first input.
In one embodiment, the screenshot generating apparatus 1600 may further include:
the second display module is used for displaying a first screenshot in the N screenshots after responding to the first input and acquiring the screen-recorded video and the N screenshots corresponding to the N second inputs, wherein the first screenshot is at least one of the N screenshots;
The third display module is used for displaying mark controls, and the plurality of mark controls comprise at least two of a region mark control, a gesture mark control and a character mark control;
the second receiving module is used for receiving a third input of a user to a target mark control in the mark controls;
the marking module is used for responding to the third input and marking the target object in the first screenshot to obtain a second screenshot;
the target object is an object corresponding to the target mark control;
when the target mark control is a region mark control, the target object is a region mark, and the region mark indicates the input position of the input corresponding to the first screenshot in the first video frame, and the first screenshot corresponds to the first video frame;
when the target mark control is a gesture mark control, the target object is a gesture mark, and the gesture mark indicates an input gesture of the input corresponding to the first screenshot;
and under the condition that the target mark control is a character mark control, the target object is text, and the text indicates the voice information acquired when the input corresponding to the first screenshot is received.
In one embodiment, the target mark control is a character mark control, and the mark module may include:
The marking sub-module is used for marking, in response to the third input, the target object in the first screenshot and the third screenshot to obtain a plurality of second screenshots, where the target object is text and the text indicates the first sentence;
the voice information received at the input time corresponding to the third screenshot and the voice information received at the input time corresponding to the first screenshot form a first sentence conforming to the semantics.
In one embodiment, the screenshot generating apparatus 1600 may further include:
the second acquisition module is used for acquiring, after the target object is marked in the first screenshot in response to the third input and the second screenshot is obtained, at least one marked fourth screenshot and at least one unmarked fifth screenshot, where the at least one fourth screenshot includes the second screenshot;
the determining module is used for determining a target sequence according to the sequence of the video frames corresponding to the at least one fourth screenshot and the at least one fifth screenshot in the video;
the synthesizing module is used for synthesizing at least one fourth screenshot and at least one fifth screenshot according to the target sequence to obtain a target video;
and the sending module is used for sending the target video to the first electronic equipment.
Thus, while a screen recording is in progress, a first input from the user can be received and responded to by acquiring the recorded screen video and N screenshots corresponding to N second inputs received during the recording, the N screenshots being the video frames in the recorded video that correspond to the N second inputs. Multiple screenshots can therefore be obtained through a single input, making the operation simple and efficient.
The screenshot generating device in the embodiment of the application may be an electronic device, or may be a component in the electronic device, for example an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. By way of example, the electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, mobile internet device (MID), augmented reality (AR)/virtual reality (VR) device, robot, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), etc., and may also be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, or self-service machine, etc.; the embodiments of the present application are not specifically limited in this regard.
The screenshot generating device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The screenshot generating device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 15, and achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
Optionally, as shown in fig. 17, the embodiment of the present application further provides an electronic device 1700, including a processor 1701 and a memory 1702, where a program or an instruction capable of being executed on the processor 1701 is stored in the memory 1702, and the program or the instruction is executed by the processor 1701 to implement each step of the above-mentioned screenshot generating method embodiment, and the same technical effect can be achieved, so that repetition is avoided, and no further description is given here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 18 is a schematic hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1800 includes, but is not limited to: a radio frequency unit 1801, a network module 1802, an audio output unit 1803, an input unit 1804, a sensor 1805, a display unit 1806, a user input unit 1807, an interface unit 1808, a memory 1809, and a processor 1810.
Those skilled in the art will appreciate that the electronic device 1800 may also include a power source (e.g., a battery) for powering the various components, which may be logically connected to the processor 1810 by a power management system, such as to perform functions such as managing charge, discharge, and power consumption by the power management system. The electronic device structure shown in fig. 18 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown in the drawings, or may combine some components, or may be arranged in different components, which will not be described in detail herein.
Wherein, the user input unit 1807 is configured to receive a first input of a user in the case of screen recording;
a processor 1810 configured to obtain, in response to a first input, a video recording and N shots corresponding to N second inputs, N being an integer greater than 1;
the N second inputs are inputs received during the screen recording, and the N screenshots are the video frames in the recorded video that correspond to the N second inputs.
Thus, while a screen recording is in progress, a first input from the user can be received and responded to by acquiring the recorded screen video and N screenshots corresponding to N second inputs received during the recording, the N screenshots being the video frames in the recorded video that correspond to the N second inputs. Multiple screenshots can therefore be obtained through a single input, making the operation simple and efficient.
Optionally, the processor 1810 is further configured to, after acquiring the video and N shots corresponding to the N second inputs in response to the first input, display the N shots in a classified manner according to input types of the second inputs corresponding to the N shots respectively.
Therefore, the N screenshots are displayed in a classified mode according to the input type of the second input, and a user can conveniently and rapidly find out the required screenshots.
Optionally, the display unit 1806 is configured to, after acquiring the video on the screen and N shots corresponding to the N second inputs in response to the first input, display a first shot of the N shots, where the first shot is at least one of the N shots;
the display unit 1806 is further configured to display a markup control, where the plurality of markup controls includes at least two of a region markup control, a gesture markup control, and a character markup control;
the user input unit 1807 is further configured to receive a third input from the user to a target mark control in the mark controls;
the processor 1810 is further configured to mark the target object in the first screenshot in response to the third input, to obtain a second screenshot;
the target object is an object corresponding to the target mark control;
when the target mark control comprises a region mark control, the target object comprises a region mark, the region mark indicates the input position of the input corresponding to the first screenshot in the first video frame, and the first screenshot corresponds to the first video frame;
when the target mark control comprises a gesture mark control, the target object comprises a gesture mark, and the gesture mark indicates an input gesture of the input corresponding to the first screenshot;
in the case where the target mark control includes a character mark control, the target object includes text indicating voice information acquired when an input corresponding to the first screenshot is received.
Therefore, texts, region identifications, gesture marks and the like can be marked in the screenshots through selecting the mark control without manual marking, and a plurality of screenshots can be marked in batches at the same time without marking the screenshots one by one, so that the marking efficiency is improved.
Optionally, the target mark control includes a character mark control, and the processor 1810 is further configured to mark, in response to the third input, the target object in the first screenshot and the third screenshot to obtain a plurality of second screenshots, where the target object is text and the text indicates the first sentence;
the voice information received at the input time corresponding to the third screenshot and the voice information received at the input time corresponding to the first screenshot form a first sentence conforming to the semantics.
Therefore, the text can be automatically marked for the screenshot based on the voice information recorded in the screen recording process, manual input and addition by a user are not needed, the operation is concise, and the efficiency is high.
Optionally, the processor 1810 is further configured to, in response to the third input, mark the target object in the first screenshot, and obtain, after the second screenshot is obtained, at least one marked fourth screenshot and at least one unmarked fifth screenshot, where the at least one fourth screenshot includes the second screenshot;
Determining a target sequence according to the sequence of video frames corresponding to at least one fourth screenshot and at least one fifth screenshot in the video;
synthesizing at least one fourth screenshot and at least one fifth screenshot according to a target sequence to obtain a target video;
and sending the target video to the first electronic device.
Therefore, the video can be generated based on a plurality of screenshots rapidly, a user does not need to manually make the video based on the screenshots, and the method is more efficient and convenient.
It should be appreciated that in embodiments of the present application, the input unit 1804 may include a graphics processor (Graphics Processing Unit, GPU) 18041 and a microphone 18042, with the graphics processor 18041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 1806 may include a display panel 18061, which may be configured in the form of a liquid crystal display, organic light emitting diodes, or the like, for the display panel 18061. The user input unit 1807 includes at least one of a touch panel 18071 and other input devices 18072. Touch panel 18071, also referred to as a touch screen. Touch panel 18071 may include two parts, a touch detection device and a touch controller. Other input devices 18072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein.
The memory 1809 may be used to store software programs and various data. The memory 1809 may mainly include a first memory area storing programs or instructions and a second memory area storing data, where the first memory area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 1809 may include volatile memory or nonvolatile memory, or the memory 1809 may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 1809 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The processor 1810 may include one or more processing units; optionally, the processor 1810 integrates an application processor that primarily processes operations involving an operating system, user interface, application programs, and the like, and a modem processor that primarily processes wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 1810.
The embodiment of the present application further provides a readable storage medium on which a program or an instruction is stored. When the program or the instruction is executed by a processor, the processes of the above screenshot generation method embodiments are implemented, and the same technical effects can be achieved. To avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement the processes of the above screenshot generation method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be understood that the chip referred to in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
The embodiments of the present application further provide a computer program product. The program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the above screenshot generation method embodiments and achieve the same technical effects. To avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to some examples may be combined in other examples.
From the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solutions of the present application, in essence or the part contributing to the prior art, may be embodied in the form of a computer software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk), including several instructions for causing a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above specific embodiments, which are merely illustrative and not restrictive. Under the teaching of the present application, those of ordinary skill in the art may also make many other forms without departing from the purpose of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (12)

1. A method of generating a screenshot, the method comprising:
receiving a first input of a user under the condition of screen recording;
responding to the first input, acquiring a screen-recorded video and N screen shots corresponding to N second inputs, wherein N is an integer greater than 1;
the N second inputs are inputs received in the screen recording process, and the N screenshots are video frames in the screen recording video, which correspond to the N second inputs respectively.
2. The method of claim 1, wherein after the acquiring, in response to the first input, the screen-recorded video and the N screenshots corresponding to the N second inputs, the method further comprises:
and classifying and displaying the N screenshots according to the input types of the second input respectively corresponding to the N screenshots.
3. The method of claim 1, wherein after the acquiring, in response to the first input, the screen-recorded video and the N screenshots corresponding to the N second inputs, the method further comprises:
displaying a first screenshot of the N screenshots, wherein the first screenshot is at least one of the N screenshots;
displaying mark controls, wherein the mark controls comprise at least two of a region mark control, a gesture mark control and a character mark control;
Receiving a third input of a user to a target mark control in the mark controls;
responding to the third input, marking a target object in the first screenshot, and obtaining a second screenshot;
the target object is an object corresponding to the target mark control;
when the target mark control comprises a region mark control, the target object comprises a region mark, the region mark indicates an input position of an input corresponding to the first screenshot in a first video frame, and the first screenshot corresponds to the first video frame;
when the target mark control comprises a gesture mark control, the target object comprises a gesture mark, and the gesture mark indicates an input gesture of the input corresponding to the first screenshot;
in the case that the target mark control includes a character mark control, the target object includes text indicating voice information acquired when an input corresponding to the first screenshot is received.
4. The method of claim 3, wherein the target mark control comprises a character mark control, and the marking the target object in the first screenshot in response to the third input to obtain the second screenshot comprises:
marking the target object in the first screenshot and a third screenshot in response to the third input to obtain a plurality of second screenshots, wherein the target object is text, and the text indicates a first sentence;
the voice information received at the input time corresponding to the third screenshot and the voice information received at the input time corresponding to the first screenshot form the first sentence conforming to the semantics.
5. The method of claim 3, wherein after the marking the target object in the first screenshot in response to the third input to obtain the second screenshot, the method further comprises:
acquiring at least one marked fourth screenshot and at least one unmarked fifth screenshot, wherein the at least one fourth screenshot comprises the second screenshot;
determining a target sequence according to the sequence of the video frames corresponding to the at least one fourth screenshot and the at least one fifth screenshot in the video;
synthesizing the at least one fourth screenshot and the at least one fifth screenshot according to a target sequence to obtain a target video;
and sending the target video to the first electronic equipment.
6. A screenshot generating apparatus, the apparatus comprising:
The first receiving module is used for receiving a first input of a user under the condition of screen recording;
the first acquisition module is used for responding to the first input, acquiring a screen-recorded video and N screen shots corresponding to N second inputs, wherein N is an integer greater than 1;
the N second inputs are inputs received in the screen recording process, and the N screenshots are video frames in the screen recording video, which correspond to the N second inputs respectively.
7. The apparatus of claim 6, wherein the apparatus further comprises:
and the first display module is used for classifying and displaying the N screen shots according to the input types of the second inputs respectively corresponding to the N screen shots after the screen-recorded video and the N screen shots corresponding to the N second inputs are acquired in response to the first input.
8. The apparatus of claim 6, wherein the apparatus further comprises:
the second display module is used for displaying a first screenshot of the N screenshots after the response to the first input and the acquisition of the screen-recorded video and N screenshots corresponding to N second inputs, wherein the first screenshot is at least one of the N screenshots;
The third display module is used for displaying the marking controls, and the marking controls comprise at least two of a regional marking control, a gesture marking control and a character marking control;
the second receiving module is used for receiving a third input of a user to a target mark control in the mark controls;
the marking module is used for marking the target object in the first screenshot in response to the third input to obtain a second screenshot;
the target object is an object corresponding to the target mark control;
when the target mark control is a region mark control, the target object is a region mark, the region mark indicates an input position of the input corresponding to the first screenshot in a first video frame, and the first screenshot corresponds to the first video frame;
when the target mark control is a gesture mark control, the target object is a gesture mark, and the gesture mark indicates an input gesture of the input corresponding to the first screenshot;
and under the condition that the target mark control is a character mark control, the target object is text, and the text indicates voice information acquired when the input corresponding to the first screenshot is received.
9. The apparatus of claim 8, wherein the target mark control is a character mark control, and the marking module comprises:
the marking submodule is used for responding to the third input, marking target objects in the first screenshot and the third screenshot to obtain a plurality of second shots, wherein the target objects are texts, and the texts indicate first sentences;
the voice information received at the input time corresponding to the third screenshot and the voice information received at the input time corresponding to the first screenshot form the first sentence conforming to the semantics.
10. The apparatus of claim 8, wherein the apparatus further comprises:
the second obtaining module is used for obtaining at least one marked fourth screenshot and at least one unmarked fifth screenshot after the target object is marked in the first screenshot in response to the third input to obtain the second screenshot, wherein the at least one fourth screenshot comprises the second screenshot;
the determining module is used for determining a target sequence according to the sequence of the video frames corresponding to the at least one fourth screenshot and the at least one fifth screenshot in the recorded video;
The synthesizing module is used for synthesizing the at least one fourth screenshot and the at least one fifth screenshot according to a target sequence to obtain a target video;
and the sending module is used for sending the target video to the first electronic equipment.
11. An electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the screenshot generating method of any one of claims 1 to 5.
12. A readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the screenshot generating method according to any one of claims 1 to 5.
CN202311308386.8A 2023-10-10 2023-10-10 Screenshot generation method and device, electronic equipment and storage medium Pending CN117331474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311308386.8A CN117331474A (en) 2023-10-10 2023-10-10 Screenshot generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311308386.8A CN117331474A (en) 2023-10-10 2023-10-10 Screenshot generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117331474A true CN117331474A (en) 2024-01-02

Family

ID=89282661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311308386.8A Pending CN117331474A (en) 2023-10-10 2023-10-10 Screenshot generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117331474A (en)

Similar Documents

Publication Publication Date Title
CN112306347B (en) Image editing method, image editing device and electronic equipment
CN113253883A (en) Application interface display method and device and electronic equipment
WO2024160133A1 (en) Image generation method and apparatus, electronic device, and storage medium
CN113037925B (en) Information processing method, information processing apparatus, electronic device, and readable storage medium
CN112181252B (en) Screen capturing method and device and electronic equipment
CN115412634B (en) Message display method and device
WO2023036115A1 (en) Text content selection method and apparatus
CN107862728B (en) Picture label adding method and device and computer readable storage medium
CN113190365B (en) Information processing method and device and electronic equipment
CN116302234A (en) Display method, display device, electronic equipment and medium
CN115543169A (en) Identification display method and device, electronic equipment and readable storage medium
CN117331474A (en) Screenshot generation method and device, electronic equipment and storage medium
KR20150097250A (en) Sketch retrieval system using tag information, user equipment, service equipment, service method and computer readable medium having computer program recorded therefor
CN112764551A (en) Vocabulary display method and device and electronic equipment
CN113157966A (en) Display method and device and electronic equipment
CN104375884A (en) Information processing method and electronic equipment
CN112685126B (en) Document content display method and device
CN112860165B (en) Text information acquisition method and device
CN114143454B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN111833283B (en) Data processing method and device and electronic equipment
CN116225272A (en) Display method and device and electronic equipment
CN115131649A (en) Content identification method and device and electronic equipment
CN117633273A (en) Image display method, device, equipment and readable storage medium
CN115357168A (en) Screen capturing method and device
CN118132183A (en) Guide information display method, program control method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination