CN112764621A - Screenshot method and device and electronic equipment


Info

Publication number
CN112764621A
Authority
CN
China
Prior art keywords
input, user, target, screen capture, map
Legal status
Granted
Application number
CN202110100026.3A
Other languages
Chinese (zh)
Other versions
CN112764621B (en)
Inventor
钱昔勇
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110100026.3A
Publication of CN112764621A
Application granted
Publication of CN112764621B
Legal status: Active

Classifications

    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/04812 Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F3/04842 Selection of displayed objects or displayed text elements
    • G06F9/451 Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Navigation (AREA)

Abstract

The application discloses a screenshot method, a screenshot device and an electronic device, belonging to the field of communication technology. The method comprises: receiving a first input of a user on an electronic map while the electronic map is displayed; in response to the first input, determining a start position and an end position according to input parameters of the first input; displaying a target map route according to the start position and the end position; and outputting a screen capture image containing the target map route when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image. The method and device reduce the user's screenshot operation steps, remove much useless information from the output screenshot image, reduce the memory occupied by the screenshot image, and improve the utilization of system memory.

Description

Screenshot method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a screenshot method, a screenshot device and electronic equipment.
Background
With the continuous development of science and technology, electronic devices (such as mobile phones and tablet computers) have gradually become indispensable tools in people's life and work.
In practice, when a user wants to share a map route, the user needs to take a screenshot of the route and then send it to other users. With the existing screen capture technology, however, if the whole route does not fit on the screen, the user has to manually zoom out and adjust the map until every position is visible before sending the screenshot, which is cumbersome, and a long route becomes unclear once the map is reduced. If the map must remain legible so that the route can be seen clearly, the user can instead take a long (scrolling) screenshot, but the resulting picture occupies a large amount of memory.
Disclosure of Invention
Embodiments of the application aim to provide a screenshot method, a screenshot device and an electronic device that can solve the problems that the existing way of capturing a map route is cumbersome to operate and the captured image occupies a large amount of memory.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a screenshot method, including:
receiving a first input of a user on an electronic map while the electronic map is displayed;
in response to the first input, determining a start position and an end position according to input parameters of the first input;
displaying a target map route according to the start position and the end position;
and outputting a screen capture image containing the target map route when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image.
In a second aspect, an embodiment of the present application provides a screenshot device, where the screenshot device includes:
the first input receiving module is used for receiving a first input of a user on an electronic map while the electronic map is displayed;
the start-stop position determining module is used for determining, in response to the first input, a start position and an end position according to input parameters of the first input;
the target route display module is used for displaying a target map route according to the start position and the end position;
the screen capture image output module is used for outputting a screen capture image containing the target map route when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the screenshot method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a program or instructions are stored, and when the program or instructions are executed by a processor, the program or instructions implement the steps of the screenshot method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the screenshot method according to the first aspect.
In the embodiments of the application, a first input of a user on an electronic map is received while the electronic map is displayed; in response to the first input, a start position and an end position are determined according to input parameters of the first input; a target map route is displayed according to the start position and the end position; and a screen capture image containing the target map route is output when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image. Because the start position and the end position are determined from the user's input, the user does not need to adjust the zoom level of the electronic map, which reduces the user's operation steps; in addition, much useless information is removed from the generated screen capture image, which reduces the memory the image occupies and improves the utilization of system memory.
Drawings
Fig. 1 is a flowchart illustrating steps of a screenshot method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a screenshot shortcut bar provided in an embodiment of the present application;
fig. 3 is a schematic diagram of obtaining a start position and an end position according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a map route according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a screenshot provided in an embodiment of the present application;
fig. 6 is a schematic diagram of obtaining a session location according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a session location according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a reason for heading to a target location according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of another screenshot provided in an embodiment of the present application;
FIG. 10 is a schematic diagram of a screenshot image whose start and end positions are far apart, according to an embodiment of the present application;
FIG. 11 is a schematic diagram of an enlarged screenshot image provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a screenshot device provided in an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one class, and the number of such objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The screenshot method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through a specific embodiment and an application scenario thereof.
Referring to fig. 1, a flowchart illustrating steps of a screenshot method provided in an embodiment of the present application is shown, and as shown in fig. 1, the screenshot method may specifically include the following steps:
step 101: in the case of displaying an electronic map, a first input of a user on the electronic map is received.
The method and device of the embodiments of the application can be applied to scenarios in which a route on an electronic map is captured.
The first input refers to an input performed by a user on the electronic map for determining a start position and an end position of the route.
In some examples, the first input may be a text input performed by the user, for example, a text input box corresponding to the start position and the end position is displayed on the electronic map page, and the user may input corresponding text in the text input box to obtain the start position and the end position, at which time, an operation of inputting text in the text input box by the user may be regarded as the first input.
In some examples, the first input may be an input formed by a user clicking a marked position on the electronic map, for example, a plurality of positions are marked on the electronic map, and the user clicks a position mark to obtain an end position.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
In the case of displaying the electronic map, a first input of the user on the electronic map may be received, and then, step 102 is performed.
Step 102: in response to the first input, a start position and an end position are determined according to input parameters of the first input.
After receiving a first input of a user on the electronic map, a start position and an end position can be determined according to input parameters of the first input in response to the first input. For example, where the first input is a text input by a user within a text entry box on an electronic map, the start and end positions may be determined from text (input parameters) entered by the user within the text entry box. When the first input is a click operation executed by a user on the electronic map, the starting position and the ending position can be determined according to positions clicked successively by the user on the electronic map. When the first input is a voice input performed by the user, the start position and the end position, etc. may be determined based on voice parameters in the voice input by the user.
The specific form of the input parameter for the first input may be determined according to the form of the first input, and the embodiment of the present application is not limited thereto.
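For illustration, the dispatch on the form of the first input can be sketched as follows; the FirstInput variants and the resolvePlace parameter are assumptions standing in for the text, tap and voice handling described above, not structures defined in the application.

```kotlin
data class LatLng(val lat: Double, val lng: Double)
data class StartEnd(val start: LatLng, val end: LatLng)

// The forms the first input may take, matching the examples above.
sealed class FirstInput {
    data class Text(val startText: String, val endText: String) : FirstInput()
    data class Taps(val firstTap: LatLng, val secondTap: LatLng) : FirstInput()
    data class Voice(val transcript: String) : FirstInput()
}

// Determine the start and end positions from the input parameters of the first input.
// `resolvePlace` stands in for a geocoding / speech-parsing service (an assumption).
fun determinePositions(input: FirstInput, resolvePlace: (String) -> LatLng): StartEnd =
    when (input) {
        is FirstInput.Text -> StartEnd(resolvePlace(input.startText), resolvePlace(input.endText))
        is FirstInput.Taps -> StartEnd(input.firstTap, input.secondTap)
        is FirstInput.Voice -> {
            // Very simple convention: a transcript of the form "from <A> to <B>".
            val parts = input.transcript.removePrefix("from ").split(" to ", limit = 2)
            StartEnd(resolvePlace(parts[0]), resolvePlace(parts.getOrElse(1) { parts[0] }))
        }
    }
```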
After determining the start position and the end position based on the input parameters of the first input, step 103 is performed.
Step 103: and displaying a target map route according to the starting position and the end position.
The target map route refers to a walking route generated according to a start position and an end position, and it can be understood that there may be one map route or multiple map routes for one start position and one end position, and specifically, it may be determined according to actual situations, which is not limited in this embodiment.
After the start position and the end position are determined according to the input parameters of the first input, a target map route may be generated according to the start position and the end position and displayed on the electronic map. As shown in fig. 4, the start position is the user's current position and the end position is a dining and music bar chosen as the destination; the target map route is displayed on the electronic map as the curved line shown in fig. 4.
After the target map route is displayed according to the start position and the end position, step 104 is performed.
Step 104: outputting a screen capture image containing the target map route when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image.
After the target map route is generated, the user may perform a screen capture operation on the electronic map. Once the screen capture operation is received, a screen capture image containing the target map route can be generated and output, and only target objects (for example, buildings) within a preset distance range of each position point on the target map route are displayed in it. As shown in fig. 4 and 5, the generated screen capture image contains only the buildings near the target map route, while the other buildings in fig. 4 that are far from the route are removed, which saves the memory occupied by the generated screen capture image.
In this way, much useless information is removed from the output screenshot image, which reduces the memory the image occupies and improves the utilization of system memory.
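The filtering in step 104 amounts to keeping only those map objects whose distance to the nearest point on the target map route falls within the preset range. The following Kotlin sketch is a minimal, self-contained illustration of that idea; the LatLng and MapObject types, the haversine distance helper and the 200 m threshold are assumptions made for the example rather than details taken from the application.

```kotlin
import kotlin.math.asin
import kotlin.math.cos
import kotlin.math.pow
import kotlin.math.sin
import kotlin.math.sqrt

data class LatLng(val lat: Double, val lng: Double)
data class MapObject(val name: String, val position: LatLng)

// Great-circle distance in metres between two coordinates (haversine formula).
fun distanceMeters(a: LatLng, b: LatLng): Double {
    val r = 6_371_000.0                      // mean Earth radius in metres
    val dLat = Math.toRadians(b.lat - a.lat)
    val dLng = Math.toRadians(b.lng - a.lng)
    val h = sin(dLat / 2).pow(2) +
            cos(Math.toRadians(a.lat)) * cos(Math.toRadians(b.lat)) * sin(dLng / 2).pow(2)
    return 2 * r * asin(sqrt(h))
}

// Keep only the objects whose nearest route point lies within the preset distance range.
fun filterObjectsNearRoute(
    objects: List<MapObject>,
    route: List<LatLng>,
    maxDistanceMeters: Double = 200.0        // "preset distance range" (assumed value)
): List<MapObject> =
    objects.filter { obj ->
        route.any { point -> distanceMeters(obj.position, point) <= maxDistanceMeters }
    }
```

In a real map engine the distance test would run against a spatial index rather than a flat list, but the selection criterion is the same.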
In the present embodiment, the start position and the end position may be determined from text entered by the user in text input boxes displayed on the electronic map, as described in detail in the following specific implementation.
In a specific implementation manner of the present application, the step 101 may include:
substep A1: receiving text input by the user within a text box displayed on the electronic map.
The step 102 may include:
substep B1: in response to the text input, determining the starting position and the ending position from the input text in the text box.
In the present embodiment, the text input refers to an input formed by an operation of a user inputting a text in a text box.
In a specific implementation, if a user wants to capture a map and share it with a friend, the user may call up the shortcut bar at the bottom of the electronic device and select the map screenshot item, for example the item circled at the lower border of the screen in fig. 2. After receiving the map screenshot request, the electronic device first determines the software used for the current map capture. If no map software is currently open, the electronic device automatically opens the map software set as default and pops up the input box for the start position. As shown in fig. 3, two text boxes are displayed at the upper boundary of the electronic map, i.e. the upper and lower text input boxes shown in fig. 3; the upper text box can be used to enter the start position and the lower text box can be used to enter the end position.
After the text boxes are displayed, the user's text input in them may be received, and then, in response to the text input, the start position and the end position may be determined according to the entered text. For example, as shown in fig. 3, the start position is "My position", that is, the position where the user is currently located, and the end position is the dining and music bar chosen as the destination.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
In this way, the user customizes the route by entering the start position and the end position and then performs the subsequent screen capture operation, which speeds up the selection of the start and end positions and improves the user experience.
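As a rough illustration of this text-box flow, the sketch below resolves the two entered texts into coordinates and then requests a route between them; the Geocoder and RoutePlanner interfaces are hypothetical stand-ins for whatever geocoding and routing services the map software actually provides.

```kotlin
data class LatLng(val lat: Double, val lng: Double)

// Hypothetical services standing in for the map software's own geocoding and routing.
fun interface Geocoder { fun lookup(text: String): LatLng? }
fun interface RoutePlanner { fun plan(start: LatLng, end: LatLng): List<LatLng> }

// Resolve the two text boxes into a start and an end position, then request a route.
fun routeFromTextBoxes(
    startText: String,                        // e.g. "My position"
    endText: String,                          // the destination typed by the user
    geocoder: Geocoder,
    planner: RoutePlanner
): List<LatLng>? {
    val start = geocoder.lookup(startText) ?: return null
    val end = geocoder.lookup(endText) ?: return null
    return planner.plan(start, end)
}
```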
In this embodiment, positions mentioned in the user's conversation content may also be marked on the electronic map so that the user can subsequently select the start position and the end position from them, as described in the following specific implementation.
In another specific implementation manner of the present application, the step 101 may include:
substep C1: and in the process of the conversation between the user and other users, acquiring the conversation position input by the user and/or other users within a preset time length from the current time.
In this embodiment, the session position refers to a position input by the user and/or the session user during the session between the user and other users.
During a conversation between the user and other users, the conversation positions entered by the user and/or the other users within a preset time length of the current time can be obtained. As shown in fig. 6, a shop name is mentioned in the conversation, and that shop can be used as a conversation position.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
After the session location is acquired, sub-step C2 is performed.
Substep C2: and displaying the electronic map, and marking the conversation position on the electronic map.
After the session position is acquired, the electronic map may be displayed and the session position marked on it. As shown in fig. 7, after the shop mentioned in the conversation is acquired as the session position, the electronic map is displayed and the corresponding location is marked on it with a solid circle.
Substep C3: receiving touch input of the user on the marked conversation position on the electronic map.
The touch input refers to an input performed by the user on the electronic map to select one of the marked conversation positions.
After marking the conversation location on the electronic map, a user touch input to the marked conversation location on the electronic map may be received, thereby performing sub-step D1.
The step 102 may include:
substep D1: and responding to the touch input, determining a target conversation position in the conversation positions according to the touch position of the touch input, taking the current position of the user as a starting position, and taking the target conversation position as an end position.
After receiving a touch input of the user to the marked conversation position on the electronic map, determining a target conversation position in the conversation position according to the touch position of the touch input in response to the touch input, and taking the current position of the user as a starting position and the target conversation position as an end position.
In this embodiment, during the user's conversation, the conversation positions entered by the user and/or the other party are marked on the electronic map in time, so that the user can subsequently select the start position and the end position from them; this way of selecting the start and end positions improves the user experience.
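A possible shape for this conversation scan is sketched below: messages sent within the preset time length are kept, and any recognised place name they mention becomes a conversation position to mark on the map. The Message type, the fixed set of known places and the 10-minute window are illustrative assumptions only.

```kotlin
import java.time.Duration
import java.time.Instant

data class Message(val sender: String, val text: String, val sentAt: Instant)

// Place names the map software can recognise; in practice this would be a geocoding lookup.
val knownPlaces = setOf("Cafe A", "Shop B", "Station C")

// Collect the place names mentioned by either party within `window` of the current time.
fun sessionPositions(
    messages: List<Message>,
    window: Duration = Duration.ofMinutes(10),   // "preset time length" (assumed value)
    now: Instant = Instant.now()
): List<String> =
    messages
        .filter { Duration.between(it.sentAt, now) <= window }
        .flatMap { msg -> knownPlaces.filter { place -> msg.text.contains(place) } }
        .distinct()
```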
In this embodiment, the user may further adjust the start position or the end position, as described in the following specific implementation.
In another specific implementation manner of the present application, after the step 102, the method may further include:
step S1: and receiving a third input of the starting position or the end position by the user.
In the embodiment of the present application, the third input refers to an input performed by the user on the start position or the end position for canceling the marking of that position on the electronic map.
In this example, the third input may be a click input, a double click input, or the like, and may specifically be determined according to a business requirement, which is not limited in this embodiment.
After the start position and the end position are determined, they may be displayed on the electronic map; at this time, if the user wants to adjust the start position or the end position, the user performs the third input on that position on the electronic map.
After receiving the third input of the start position or the end position by the user, step S2 is performed.
Step S2: in response to the third input, undoing the marking of the start location or the end location on the electronic map.
After receiving a third input by the user to the start position or the end position, the marking of the start position or the end position on the electronic map may be undone in response to the third input.
After the mark of the start position or the end position on the electronic map is canceled, step S3 is performed.
Step S3: receiving a fourth input of the user on the electronic map.
The fourth input refers to an input for selecting the adjusted start position or end position performed on the electronic map.
In some examples, the fourth input may be an input formed by a user clicking a location on the electronic map. For example, when the user needs to use the position a on the electronic map as the adjusted end position, the user may click an icon corresponding to the position a on the electronic map, and at this time, the operation of clicking the position a by the user may be regarded as the fourth input.
In some examples, the fourth input may be a voice input performed by the user, for example, when the user needs to take the position B on the electronic map as the adjusted starting position, a piece of voice for adjusting the starting position may be input by the user, such as "take the position B as the starting position", and the like, and at this time, the voice input performed by the user may be regarded as the fourth input.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation of the embodiments of the present application.
After receiving the fourth input of the user on the electronic map, step S4 is executed.
Step S4: and responding to the fourth input, determining a first position according to the input parameters of the fourth input, and taking the first position as an adjusted starting position or an adjusted end position.
The input parameter of the fourth input may be a click parameter, a voice parameter, and the like, and specifically, may be determined according to a specific form of the fourth input, which is not limited in this embodiment of the present application.
After receiving a fourth input of the user on the electronic map, the first position may be determined according to the input parameters of the fourth input in response to the fourth input, and the first position may be determined as the adjusted start position or end position.
Of course, in this embodiment, both the start position and the end position may be adjusted: the start position may be adjusted first and then the end position, or the end position first and then the start position, in the manner described above; the specific order may be determined according to service requirements, and is not limited in the embodiments of the application.
In this embodiment, the start position and the end position can be adjusted directly on the displayed electronic map; the adjustment is simple and reduces the operations the user needs to perform to adjust the start and end positions.
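The adjustment flow can be modelled as a small piece of state that first withdraws the existing mark (third input) and then accepts the newly selected position (fourth input). The RouteEndpoints class below is only an illustration of that state change, not a structure described in the application.

```kotlin
data class LatLng(val lat: Double, val lng: Double)

// Route endpoints held while the user is editing them on the map.
class RouteEndpoints(var start: LatLng?, var end: LatLng?) {

    // Third input: the user acts on an existing marker, so its mark is withdrawn.
    fun undoStartMark() { start = null }
    fun undoEndMark() { end = null }

    // Fourth input: the position selected on the map becomes the adjusted endpoint.
    fun adjustStart(first: LatLng) { start = first }
    fun adjustEnd(first: LatLng) { end = first }
}

// Example: replace the end position with a newly tapped location.
fun main() {
    val endpoints = RouteEndpoints(LatLng(31.23, 121.47), LatLng(31.25, 121.50))
    endpoints.undoEndMark()                      // mark on the map is withdrawn
    endpoints.adjustEnd(LatLng(31.26, 121.52))   // first position from the fourth input
}
```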
In this embodiment, besides the target objects within a preset distance range of each position point on the target map route, objects at positions marked by the user may also be displayed in the screen capture image, as described in the following specific implementation.
In another specific implementation manner of the present application, before the step 104, the method may further include:
step E1: acquiring a marked position marked on the electronic map by the user; the marked position is a position which is out of a preset distance range from each position point on the route of the target map.
The step 104 may include:
sub-step F1: generating a screen capture image containing the target map route under the condition that screen capture operation of the user is received; the screen capture image comprises a target object which is within a preset distance range from each position point on the route of the target map and a position object corresponding to the mark position.
In this embodiment, when the user needs to mark the desired location on the electronic map, the desired location may be marked on the electronic map, and then the marked location marked on the electronic map by the user may be obtained, where the marked location is a location outside a preset distance range from each location point on the route of the target map.
Furthermore, when a screen capture operation of the user is received, a screen capture image including the target map route may be generated and output, and the screen capture image may include both the target objects within the preset distance range of each position point on the target map route and the position object corresponding to the marked position. As shown in fig. 8, when the user marks on the electronic map the location where the user plans to borrow a power bank, the position object corresponding to that location, such as the building there, may be displayed in the generated screenshot image.
In this embodiment, the generated screenshot image includes the positions marked by the user, so the user can personalize which positions are marked, which improves the user experience.
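Combined with the distance filter sketched earlier, the set of objects kept in the screenshot can be expressed as those near the route plus those at user-marked positions; the function and parameter names below are illustrative assumptions.

```kotlin
data class LatLng(val lat: Double, val lng: Double)
data class MapObject(val name: String, val position: LatLng)

// Objects kept in the screenshot: those near the route plus those at user-marked positions.
fun objectsForScreenshot(
    allObjects: List<MapObject>,
    nearRoute: (MapObject) -> Boolean,       // e.g. the distance filter sketched earlier
    markedPositions: Set<LatLng>
): List<MapObject> =
    allObjects.filter { obj -> nearRoute(obj) || obj.position in markedPositions }
```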
In this embodiment, the user may also perform an input on a position identifier so that the reason why the user goes to a certain location is displayed in the generated screenshot image, as described in the following specific implementation.
In another specific implementation manner of the present application, a position identifier corresponding to a target position is displayed on the electronic map, and before the step 104, the method may further include:
step H1: and receiving a second input of the position identification by the user.
In this embodiment, a position identifier corresponding to the target position is displayed on the electronic map; as shown in fig. 8, the target position is the marked location where the user plans to borrow a power bank, and the position identifier is a solid circle.
The second input refers to an input performed by the user on the location identification for the user to add reason information for the user to travel to the target location.
After receiving the second input of the location identity by the user, step H2 is performed.
Step H2: and responding to the second input, generating text label information corresponding to the position identification according to the input parameters of the second input, and displaying the text label information on the electronic map.
After receiving the second input of the user on the position identifier, text annotation information corresponding to the position identifier may be generated, in response to the second input, according to the input parameters of the second input; the text annotation information is displayed on the electronic map and also on the output screenshot image. As shown in fig. 8 and 9, text annotation information such as "borrow a power bank" is displayed at the marked position.
In this example, the text annotation information may describe the reason why the user goes to the target location, such as "borrow a power bank", and may also describe other information, such as with whom the user goes to the target location or where the user goes after leaving it; specifically, it may be determined according to business requirements, and the specific content of the text annotation information is not limited in this embodiment.
In this embodiment, displaying the annotation information on the generated screenshot image provides a corresponding prompt to the user and improves the user experience.
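One way to model this is to let each position identifier carry optional annotation text that the second input fills in and that is rendered when the screenshot is generated. The LocationMarker type below is assumed for illustration and is not the application's own data structure.

```kotlin
data class LatLng(val lat: Double, val lng: Double)

// A position identifier shown on the map, optionally carrying user-entered annotation text.
data class LocationMarker(val position: LatLng, val label: String, val annotation: String? = null)

// Second input: the user selects the marker and enters a note such as "borrow a power bank".
fun annotate(marker: LocationMarker, text: String): LocationMarker =
    marker.copy(annotation = text)

// When the screenshot is generated, any annotation is rendered next to its marker.
fun annotationLines(markers: List<LocationMarker>): List<String> =
    markers.mapNotNull { m -> m.annotation?.let { "${m.label}: $it" } }
```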
Of course, when the two positions the user needs to capture are far apart, the captured image can keep the information about main roads to help the user judge the direction, as shown in fig. 10. If the user wants detailed map information, the user can obtain it directly by magnifying the picture; as shown in fig. 11, by sliding from the starting point for a certain distance the user obtains the view shown in fig. 11, so that the route can be seen clearly.
The map screen shot of the embodiment of the application can keep the information related to the route, and the user can directly see the information of each building. The user can see the general route by zooming out, and can see the details by zooming in, so that the user can know the route information clearly.
According to the screenshot method provided by the embodiments of the application, a first input of a user on an electronic map is received while the electronic map is displayed; in response to the first input, a start position and an end position are determined according to input parameters of the first input; a target map route is displayed according to the start position and the end position; and a screen capture image containing the target map route is output when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image. Because the start position and the end position are determined from the user's input, the user does not need to adjust the zoom level of the electronic map, which reduces the user's operation steps; in addition, much useless information is removed from the generated screen capture image, which reduces the memory the image occupies and improves the utilization of system memory.
It should be noted that, in the screenshot method provided in the embodiment of the present application, the execution subject may be a screenshot device, or a control module in the screenshot device for executing the screenshot method. In the embodiment of the present application, a screenshot device is taken as an example to execute a screenshot method, which illustrates the screenshot device provided in the embodiment of the present application.
Referring to fig. 12, a schematic structural diagram of a screenshot device provided in an embodiment of the present application is shown, and as shown in fig. 12, the screenshot device 1200 may specifically include the following modules:
a first input receiving module 1210, configured to receive a first input of a user on an electronic map while the electronic map is displayed;
a start-stop position determining module 1220, configured to determine, in response to the first input, a start position and an end position according to input parameters of the first input;
a target route display module 1230, configured to display a target map route according to the start position and the end position;
a screen capture image output module 1240, configured to output a screen capture image including the target map route when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image.
Optionally, the first input receiving module 1210 comprises:
a session position acquiring unit, configured to acquire a session position input by the user and/or another user within a preset time from a current time in a session process between the user and the other user;
the conversation position marking unit is used for displaying the electronic map and marking the conversation position on the electronic map;
a touch input receiving unit for receiving a touch input of the user to a marked conversation position on the electronic map;
the start-stop position determining module 1220 includes:
and the starting and stopping position determining unit is used for responding to the touch input, determining a target conversation position in the conversation positions according to the touch position of the touch input, taking the current position of the user as a starting position, and taking the target conversation position as an end position.
Optionally, the method further comprises:
the third input receiving module is used for receiving a third input of the user to the starting position or the end position;
a start and stop position mark canceling module, configured to cancel, in response to the third input, a mark of the start position or the end position on the electronic map;
the fourth input receiving module is used for receiving a fourth input of the user on the electronic map;
and the starting and stopping position acquisition module is used for responding to the fourth input, determining a first position according to the input parameters of the fourth input, and taking the first position as the adjusted starting position or the adjusted ending position.
Optionally, the method further comprises:
the marked position acquisition module is used for acquiring the marked position marked on the electronic map by the user; the marking position is a position which is out of a preset distance range from each position point on the route of the target map;
the screen capture image output module 1240 includes:
a screen capture image output unit for outputting a screen capture image containing the target map route in a case where a screen capture operation by the user is received; the screen capture image comprises a target object which is within a preset distance range from each position point on the route of the target map and a position object corresponding to the mark position.
Optionally, the method further comprises:
the second input receiving module is used for receiving second input of the user to the position identification;
the label information display module is used for responding to the second input, generating text label information corresponding to the position identification according to the input parameter of the second input, and displaying the text label information on the electronic map;
and the text annotation information is displayed on the screen capture image.
The screenshot device provided by the embodiments of the application receives a first input of a user on an electronic map while the electronic map is displayed, determines, in response to the first input, a start position and an end position according to input parameters of the first input, displays a target map route according to the start position and the end position, and outputs a screen capture image containing the target map route when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image. Because the start position and the end position are determined from the user's input, the user does not need to adjust the zoom level of the electronic map, which reduces the user's operation steps; in addition, much useless information is removed from the generated screen capture image, which reduces the memory the image occupies and improves the utilization of system memory.
The screenshot device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The screenshot device in the embodiments of the present application may be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the application.
The screenshot device provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 13, an electronic device 1300 is further provided in an embodiment of the present application, and includes a processor 1301, a memory 1302, and a program or an instruction stored in the memory 1302 and capable of running on the processor 1301, where the program or the instruction is executed by the processor 1301 to implement each process of the screenshot method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1400 includes, but is not limited to: radio unit 1401, network module 1402, audio output unit 1403, input unit 1404, sensor 1405, display unit 1406, user input unit 1407, interface unit 1408, memory 1409, and processor 1410.
Those skilled in the art will appreciate that the electronic device 1400 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1410 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
Wherein, the processor 1410 is configured to control the user input unit 1407 to receive a first input of a user on the electronic map in a case that the electronic map is displayed; in response to the first input, determining a starting position and an end position according to input parameters of the first input; controlling the display unit 1406 to display a target map route according to the start position and the end position; outputting a screen capture image containing the target map route in a case where a screen capture operation of the user is received; and only target objects within a preset distance range from each position point on the route of the target map are displayed in the screen capture image.
According to the method and the device, the user does not need to adjust the zoom scale of the electronic map, the operation steps of the user are reduced, a lot of useless information is deleted from the generated screenshot image, the occupation of the screenshot image on the memory is reduced, and the utilization rate of the system memory is improved.
Optionally, the processor 1410 is further configured to, in a process of a session between the user and another user, obtain a session position input by the user and/or another user within a preset time from a current time; controlling the display unit 1406 to display the electronic map and mark the conversation position on the electronic map; controlling a user input unit 1407 to receive a touch input of the user on the marked conversation position on the electronic map; and responding to the touch input, determining a target conversation position in the conversation positions according to the touch position of the touch input, taking the current position of the user as a starting position, and taking the target conversation position as an end position.
Optionally, the processor 1410 is further configured to control the user input unit 1407 to receive a third input of the start position or the end position from the user; in response to the third input, undoing the marking of the start location or the end location on the electronic map; controlling the user input unit 1407 to receive a fourth input of the user on the electronic map; and responding to the fourth input, determining a first position according to the input parameters of the fourth input, and taking the first position as an adjusted starting position or an adjusted end position.
Optionally, the processor 1410 is further configured to obtain a marked position marked on the electronic map by the user; the marking position is a position which is out of a preset distance range from each position point on the route of the target map; outputting a screen capture image containing the target map route in a case where a screen capture operation of the user is received; the screen capture image comprises a target object which is within a preset distance range from each position point on the route of the target map and a position object corresponding to the mark position.
Optionally, the processor 1410 is further configured to control the user input unit 1407 to receive a second input of the location identifier by the user; in response to the second input, generating text label information corresponding to the location identifier according to the input parameters of the second input, and controlling the display unit 1406 to display the text label information on the electronic map; and the text annotation information is displayed on the screen capture image.
In this embodiment, the reason for going to the destination position is displayed on the electronic map as text annotation information, which prompts the user in time and prevents the user from forgetting.
It should be understood that in the embodiment of the present application, the input Unit 1404 may include a Graphics Processing Unit (GPU) 14041 and a microphone 14042, and the Graphics processor 14041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1406 may include a display panel 14061, and the display panel 14061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1407 includes a touch panel 14071 and other input devices 14072. Touch panel 14071, also referred to as a touch screen. The touch panel 14071 may include two parts of a touch detection device and a touch controller. Other input devices 14072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1409 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. The processor 1410 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1410.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the screenshot method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the screenshot method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method of screenshot, comprising:
receiving a first input of a user on an electronic map while the electronic map is displayed;
in response to the first input, determining a start position and an end position according to input parameters of the first input;
displaying a target map route according to the start position and the end position;
and outputting a screen capture image containing the target map route when a screen capture operation of the user is received, wherein only target objects within a preset distance range of each position point on the target map route are displayed in the screen capture image.
2. The method of claim 1, wherein receiving a first input of a user on the electronic map while displaying the electronic map comprises:
in the process of conversation between the user and other users, acquiring conversation positions input by the user and/or other users within a preset time length from the current time;
displaying the electronic map and marking the conversation position on the electronic map;
receiving touch input of the user on the marked conversation position on the electronic map;
the determining, in response to the first input, a start position and an end position according to the input parameters of the first input includes:
and responding to the touch input, determining a target conversation position in the conversation positions according to the touch position of the touch input, taking the current position of the user as a starting position, and taking the target conversation position as an end position.
3. The method of claim 1, further comprising, after the determining a start position and an end position according to the input parameters of the first input:
receiving a third input performed by the user on the start position or the end position;
in response to the third input, canceling the marking of the start position or the end position on the electronic map;
receiving a fourth input performed by the user on the electronic map; and
in response to the fourth input, determining a first position according to input parameters of the fourth input, and taking the first position as an adjusted start position or an adjusted end position.
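A minimal sketch of the endpoint adjustment in claim 3, assuming the two endpoints are kept in a simple dictionary; adjust_endpoint, route_state, and first_position are hypothetical names.

    def adjust_endpoint(route_state, which, first_position):
        # route_state is assumed to be a dict such as
        # {"start": (lat, lon), "end": (lat, lon)}. The existing marker for the
        # chosen endpoint is cancelled and the newly selected first_position is
        # taken as the adjusted start or end position; the route is then redrawn.
        if which not in ("start", "end"):
            raise ValueError("which must be 'start' or 'end'")
        route_state[which] = first_position
        return route_state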
4. The method of claim 1, further comprising, before the outputting a screen capture image containing the target map route in a case where a screen capture operation of the user is received:
acquiring a marked position marked on the electronic map by the user, wherein the marked position is a position outside the preset distance range from each position point on the target map route;
wherein the outputting a screen capture image containing the target map route in a case where a screen capture operation of the user is received comprises:
outputting a screen capture image containing the target map route in a case where a screen capture operation of the user is received, wherein the screen capture image contains target objects within the preset distance range from each position point on the target map route and a position object corresponding to the marked position.
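For illustration only, the sketch below reuses the hypothetical MapObject and filter_route_objects helpers from the sketch following claim 1 and extends them so that user-marked positions outside the preset range are still kept; filter_with_marked and marked_positions are assumed names.

    def filter_with_marked(route, objects, marked_positions, preset_distance_m=500.0):
        # marked_positions: set of (lat, lon) tuples the user marked explicitly.
        # Objects near the route are kept as in the claim-1 sketch; in addition,
        # an object matching a user-marked position is kept even though it lies
        # outside the preset distance range. A real implementation would match
        # coordinates with a tolerance rather than exact equality.
        kept = filter_route_objects(route, objects, preset_distance_m)
        for obj in objects:
            if (obj.lat, obj.lon) in marked_positions and obj not in kept:
                kept.append(obj)
        return kept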
5. The method of claim 1, wherein a position identifier corresponding to a target position is displayed on the electronic map, and the method further comprises, before the outputting a screen capture image containing the target map route in a case where a screen capture operation of the user is received:
receiving a second input performed by the user on the position identifier;
in response to the second input, generating text annotation information corresponding to the position identifier according to input parameters of the second input, and displaying the text annotation information on the electronic map;
wherein the text annotation information is displayed on the screen capture image.
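A minimal sketch of the annotation flow in claim 5, assuming the annotations are stored as a plain mapping from identifier to text and drawn onto the capture afterwards; annotate_identifier and overlay_items_for_capture are hypothetical helpers.

    def annotate_identifier(annotations, identifier_id, input_text):
        # Store the text annotation information generated for the touched
        # position identifier; annotations maps identifier_id -> text.
        annotations[identifier_id] = input_text
        return annotations

    def overlay_items_for_capture(annotations, identifier_screen_xy):
        # Pair each annotated identifier with its on-screen coordinates so the
        # annotation text can be drawn onto the screen capture image.
        return [(identifier_screen_xy[i], text)
                for i, text in annotations.items()
                if i in identifier_screen_xy]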
6. A screenshot device, comprising:
a first input receiving module, configured to receive a first input performed by a user on an electronic map in a case where the electronic map is displayed;
a start and end position determining module, configured to determine, in response to the first input, a start position and an end position according to input parameters of the first input;
a target route display module, configured to display a target map route according to the start position and the end position; and
a screen capture image output module, configured to output a screen capture image containing the target map route in a case where a screen capture operation of the user is received, wherein only target objects within a preset distance range from each position point on the target map route are displayed in the screen capture image.
7. The apparatus of claim 6, wherein the first input receiving module comprises:
a conversation position acquiring unit, configured to acquire, during a conversation between the user and other users, conversation positions input by the user and/or the other users within a preset time length before the current time;
a conversation position marking unit, configured to display the electronic map and mark the conversation positions on the electronic map; and
a touch input receiving unit, configured to receive a touch input of the user on a marked conversation position on the electronic map;
and wherein the start and end position determining module comprises:
a start and end position determining unit, configured to determine, in response to the touch input, a target conversation position among the conversation positions according to a touch position of the touch input, take a current position of the user as the start position, and take the target conversation position as the end position.
8. The apparatus of claim 6, further comprising:
a third input receiving module, configured to receive a third input performed by the user on the start position or the end position;
a start and end position mark canceling module, configured to cancel, in response to the third input, the marking of the start position or the end position on the electronic map;
a fourth input receiving module, configured to receive a fourth input performed by the user on the electronic map; and
a start and end position acquiring module, configured to determine, in response to the fourth input, a first position according to input parameters of the fourth input, and take the first position as an adjusted start position or an adjusted end position.
9. The apparatus of claim 6, further comprising:
a marked position acquiring module, configured to acquire a marked position marked on the electronic map by the user, wherein the marked position is a position outside the preset distance range from each position point on the target map route;
wherein the screen capture image output module comprises:
a screen capture image output unit, configured to output a screen capture image containing the target map route in a case where a screen capture operation of the user is received, wherein the screen capture image contains target objects within the preset distance range from each position point on the target map route and a position object corresponding to the marked position.
10. The apparatus of claim 6, further comprising:
a second input receiving module, configured to receive a second input performed by the user on the position identifier;
an annotation information display module, configured to generate, in response to the second input, text annotation information corresponding to the position identifier according to input parameters of the second input, and display the text annotation information on the electronic map;
wherein the text annotation information is displayed on the screen capture image.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the screenshot method according to any one of claims 1 to 5.
12. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the screenshot method according to any one of claims 1 to 5.
CN202110100026.3A 2021-01-25 2021-01-25 Screenshot method and device and electronic equipment Active CN112764621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110100026.3A CN112764621B (en) 2021-01-25 2021-01-25 Screenshot method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110100026.3A CN112764621B (en) 2021-01-25 2021-01-25 Screenshot method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112764621A true CN112764621A (en) 2021-05-07
CN112764621B CN112764621B (en) 2022-07-15

Family

ID=75707291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110100026.3A Active CN112764621B (en) 2021-01-25 2021-01-25 Screenshot method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112764621B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101578501A (en) * 2007-01-10 2009-11-11 通腾科技股份有限公司 Navigation device and method
US20100332119A1 (en) * 2008-03-14 2010-12-30 Tom Tom International B.V. Navigation device and method
CN106446083A (en) * 2016-09-09 2017-02-22 珠海市魅族科技有限公司 Route indication method and mobile terminal
CN107464215A (en) * 2017-07-31 2017-12-12 努比亚技术有限公司 A kind of image processing method and terminal based on electronic map
CN107678648A (en) * 2017-09-27 2018-02-09 北京小米移动软件有限公司 Screenshot processing method and device
CN109556621A (en) * 2017-09-27 2019-04-02 腾讯科技(深圳)有限公司 Route planning method and related device
US20190304006A1 (en) * 2018-03-28 2019-10-03 Spot It Ltd. System and method for web-based map generation
CN109806585A (en) * 2019-02-19 2019-05-28 网易(杭州)网络有限公司 Game display control method, apparatus, device and storage medium
CN110440825A (en) * 2019-07-31 2019-11-12 维沃移动通信有限公司 Distance display method and terminal
CN110795010A (en) * 2019-10-12 2020-02-14 维沃移动通信有限公司 Screen capturing method and terminal equipment thereof
CN111813300A (en) * 2020-06-03 2020-10-23 深圳市鸿合创新信息技术有限责任公司 Screen capture method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAO, Ke et al.: "Based on point cloud and texture data generation algorithm of two-dimensional images", 2016 16th International Symposium on Communications and Information Technologies (ISCIT) *
XU, Hong: "2020 IEEE 6th International Conference on Computer and Communications (ICCC)", Economic Daily *

Also Published As

Publication number Publication date
CN112764621B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
US11784951B1 (en) Determining contextually relevant application templates associated with electronic message content
EP3407189B1 (en) Application distribution method and device
CN112306607A (en) Screenshot method and device, electronic equipment and readable storage medium
CN112817676A (en) Information processing method and electronic device
CN112449110B (en) Image processing method and device and electronic equipment
CN113067983A (en) Video processing method and device, electronic equipment and storage medium
CN112099714B (en) Screenshot method and device, electronic equipment and readable storage medium
CN111651110A (en) Group chat message display method and device, electronic equipment and storage medium
CN112269522A (en) Image processing method, image processing device, electronic equipment and readable storage medium
EP3770763B1 (en) Method and device for presenting information on a terminal
CN113873165A (en) Photographing method and device and electronic equipment
CN111813305A (en) Application program starting method and device
CN114374761A (en) Information interaction method and device, electronic equipment and medium
CN112134987B (en) Information processing method and device and electronic equipment
CN112698762B (en) Icon display method and device and electronic equipment
CN112286611B (en) Icon display method and device and electronic equipment
CN114217754A (en) Screen projection control method and device, electronic equipment and storage medium
CN113836089A (en) Application program display method and device, electronic equipment and readable storage medium
CN107168969A Page element control method, device and electronic equipment
CN112764611A (en) Application program control method and device and electronic equipment
CN112887488A (en) Caller identification method and device and electronic equipment
CN107145361A (en) Wallpaper displaying method and device
EP4351117A1 (en) Information display method and apparatus, and electronic device
CN112764621B (en) Screenshot method and device and electronic equipment
CN115586937A (en) Interface display method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant