WO2021082711A1 - Image Display Method and Electronic Device (图像显示方法及电子设备) - Google Patents

Image Display Method and Electronic Device (图像显示方法及电子设备)

Info

Publication number
WO2021082711A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
preview image
electronic device
thumbnail
image
Prior art date
Application number
PCT/CN2020/112693
Other languages
English (en)
French (fr)
Inventor
王晓菲
朱宗伟
Original Assignee
维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 维沃移动通信有限公司 (Vivo Mobile Communication Co., Ltd.)
Publication of WO2021082711A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0487 — using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 — using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 — for inputting data by handwriting, e.g. gesture or text

Definitions

  • the embodiments of the present application relate to the field of communication technologies, and in particular, to an image display method and electronic equipment.
  • Take the scenario of a user zooming in on the preview image on the shooting preview interface as an example. The user needs to zoom in on a target object in the preview image by adjusting the zoom factor of the image.
  • As the zoom factor increases, the preview image may include only a partial image of the target object, or may not include the target object at all. In that case, the user may need to move the electronic device so that the image of the target object falls within the imaging area.
  • If the zoom factor is too large, or there are many objects similar to the target object in the shooting scene, the operations required to bring the image of the target object into the imaging area become complicated. For example, the user may need to move the electronic device repeatedly, and may even need to reduce the zoom factor, reposition the magnification position, and re-enlarge the preview image before the image of the target object appears in the imaging area.
  • In view of this, the embodiments of the present application provide an image display method and an electronic device, to solve the problem of low operation efficiency when zooming in to preview a target object in the related art.
  • an embodiment of the present application provides an image display method applied to an electronic device.
  • The method includes: receiving a user's first input while a first preview image is displayed; and, in response to the first input, displaying a second preview image and a thumbnail corresponding to the first preview image.
  • The second preview image is an image obtained by enlarging a partial area of the first preview image.
  • The thumbnail includes an imaging frame, which is used to indicate the position of the partial area in the first preview image.
  • In a second aspect, an embodiment of the present application provides an electronic device that includes a receiving module and a display module. The receiving module is configured to receive a user's first input while a first preview image is displayed. The display module is configured to display, in response to the first input received by the receiving module, a second preview image and a thumbnail corresponding to the first preview image, the second preview image being an image obtained by enlarging a partial area of the first preview image.
  • The thumbnail includes an imaging frame, and the imaging frame is used to indicate the position of the partial area in the first preview image.
  • In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, and a computer program stored on the memory and capable of running on the processor.
  • When the computer program is executed by the processor, the steps of the image display method in the first aspect are implemented.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the image display method in the first aspect are implemented.
  • In the embodiments of the present application, the electronic device receives the user's first input while displaying the first preview image, and in response displays the second preview image and a thumbnail corresponding to the first preview image (that is, the image content of the thumbnail is the same as that of the first preview image, and the size of the thumbnail is proportional to that of the first preview image).
  • The second preview image is an image obtained by enlarging a partial area of the first preview image.
  • The thumbnail includes an imaging frame indicating the position of the partial area in the first preview image (that is, the image content enclosed by the imaging frame in the thumbnail is the same as the image content of the second preview image, so the position of the second preview image within the first preview image can be determined from the position of the imaging frame in the thumbnail).
  • In this way, the user can determine the position of the second preview image in the first preview image from the position of the imaging frame in the thumbnail, and can determine the position of a target object in the first preview image from its position in the thumbnail.
  • The user can therefore decide how to input according to the positional relationship between the imaging frame and the target object in the thumbnail, and can quickly and accurately obtain a preview image containing the (enlarged) target object.
  • As a result, the operation efficiency of zooming in on a target object in a preview image can be improved.
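As an illustration only (not part of the patent disclosure), the geometric relationship sketched above — the imaging frame covering the whole thumbnail at the base zoom factor and shrinking as the zoom grows — can be modeled in a few lines. All names, the centre parameters, and the clamping behaviour are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge in thumbnail coordinates
    y: float  # top edge in thumbnail coordinates
    w: float  # width
    h: float  # height

def imaging_frame(thumb_w: float, thumb_h: float, zoom: float,
                  base_zoom: float = 1.0,
                  cx: float = 0.5, cy: float = 0.5) -> Rect:
    """Rectangle inside the thumbnail that encloses the same content as the
    enlarged preview. (cx, cy) give the centre of the magnified area as
    fractions of the thumbnail. At zoom == base_zoom the frame covers the
    whole thumbnail; as zoom grows the frame shrinks proportionally."""
    scale = base_zoom / zoom                 # frame side relative to thumbnail
    w, h = thumb_w * scale, thumb_h * scale
    # clamp so the frame never leaves the thumbnail
    x = min(max(cx * thumb_w - w / 2, 0.0), thumb_w - w)
    y = min(max(cy * thumb_h - h / 2, 0.0), thumb_h - h)
    return Rect(x, y, w, h)
```

For a 100×80 thumbnail at 2× zoom, the frame is 50×40 and, centred, sits at (25, 20); at the base zoom it coincides with the whole thumbnail, matching the behaviour described in the embodiments.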
  • FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the application
  • FIG. 2 is the first flowchart of the image display method provided by an embodiment of the application.
  • FIG. 3 is the first schematic diagram of the interface of the image display method provided by an embodiment of the application.
  • FIG. 5 is the third flowchart of the image display method provided by an embodiment of the application.
  • Fig. 6(a) is the second schematic diagram of the interface of the image display method provided by the embodiment of the application.
  • FIG. 6(b) is the third schematic diagram of the interface of the image display method provided by the embodiment of this application.
  • FIG. 7 is the fourth flow chart of the image display method provided by the embodiment of the application.
  • FIG. 8 is the fifth flowchart of the image display method provided by an embodiment of the application.
  • FIG. 9 is a sixth flowchart of an image display method provided by an embodiment of the application.
  • FIG. 10 is the fourth schematic diagram of the interface of the image display method provided by an embodiment of the application.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 12 is a schematic diagram of hardware of an electronic device provided by an embodiment of the application.
  • The terms “first”, “second”, “third”, and “fourth” in the specification and claims of this application are used to distinguish different objects, rather than to describe a specific order of objects.
  • For example, the first input, second input, third input, and fourth input are used to distinguish different inputs, not to describe a specific order of the inputs.
  • Words such as “exemplary” or “for example” are used as examples or illustrations. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application should not be construed as preferable or advantageous over other embodiments or designs. Rather, such words are used to present related concepts in a concrete manner.
  • “Multiple” refers to two or more; for example, multiple processing units refers to two or more processing units, and multiple elements refers to two or more elements.
  • An embodiment of the present application provides an image display method.
  • When an electronic device displays a first preview image and receives a first input from a user, it displays, in response to the first input, a second preview image and a thumbnail corresponding to the first preview image.
  • That is, the image content of the thumbnail is the same as that of the first preview image, and the size of the thumbnail is proportional to that of the first preview image.
  • The second preview image is an image obtained by enlarging a partial area of the first preview image.
  • The thumbnail includes an imaging frame used to indicate the position of the partial area in the first preview image (that is, the image content enclosed by the imaging frame in the thumbnail is the same as the image content of the second preview image, so the position of the second preview image within the first preview image can be determined from the position of the imaging frame in the thumbnail).
  • In this way, the user can determine the position of the second preview image in the first preview image from the position of the imaging frame in the thumbnail, and can determine the position of the target object in the first preview image from its position in the thumbnail.
  • The user can therefore decide how to input according to the positional relationship between the imaging frame and the target object in the thumbnail, and can quickly and accurately obtain a preview image containing the (enlarged) target object.
  • As a result, the operation efficiency of zooming in on a target object in a preview image can be improved.
  • the following takes the Android operating system as an example to introduce the software environment to which the image display method provided in the embodiments of the present application is applied.
  • As shown in FIG. 1, it is a schematic structural diagram of a possible Android operating system provided by an embodiment of this application.
  • the architecture of the Android operating system includes 4 layers, namely: application layer, application framework layer, system runtime library layer, and kernel layer (specifically, it may be the Linux kernel layer).
  • the application layer includes various applications (including system applications and third-party applications) in the Android operating system.
  • the application framework layer is the framework of the application. Developers can develop some applications based on the application framework layer while complying with the development principles of the application framework.
  • the system runtime library layer includes a library (also called a system library) and an Android operating system runtime environment.
  • the library mainly provides various resources needed by the Android operating system.
  • the Android operating system operating environment is used to provide a software environment for the Android operating system.
  • the kernel layer is the operating system layer of the Android operating system and belongs to the lowest level of the Android operating system software.
  • the kernel layer is based on the Linux kernel to provide core system services and hardware-related drivers for the Android operating system.
  • A developer can develop a software program implementing the image display method provided by the embodiments of the application based on the system architecture of the Android operating system shown in FIG. 1, so that the image display method can run on the Android operating system shown in FIG. 1. That is, the processor or the electronic device can implement the image display method provided in the embodiments of the present application by running the software program in the Android operating system.
  • the electronic device in the embodiment of the present application may be a mobile electronic device or a non-mobile electronic device.
  • Mobile electronic devices can be mobile phones, tablet computers, notebook computers, handheld computers, vehicle terminals, wearable devices, ultra-mobile personal computers (UMPC), netbooks, or personal digital assistants (personal digital assistants, PDAs), etc.
  • the non-mobile electronic device may be a personal computer (PC), a television (television, TV), a teller machine or a self-service machine, etc.; the embodiment of the application does not specifically limit it.
  • the execution subject of the image display method provided in the embodiments of the present application may be the above-mentioned electronic device (including mobile electronic devices and non-mobile electronic devices), or may be a functional module and/or functional entity in the electronic device that can implement the method, The details can be determined according to actual use requirements, and the embodiments of the present application do not limit it.
  • the following takes an electronic device as an example to illustrate the image display method provided in the embodiment of the present application.
  • an embodiment of the present application provides an image display method, and the method may include the following steps 201 to 202.
  • Step 201: The electronic device receives a user's first input while displaying a first preview image.
  • The image display method provided by the embodiments of the application can be applied to image zooming scenarios. Specifically, these may include zooming scenarios for images stored in the electronic device, zooming scenarios for images displayed on the shooting preview interface, and other image zooming scenarios, which are not limited in the embodiments of this application.
  • The images stored in the electronic device may include images stored locally on the electronic device (such as in a gallery), images cached in an application (such as a shopping application or an instant messaging application), and others, which are not limited in the embodiments of the present application.
  • In one scenario, the first preview image is an image saved in the electronic device, and the first input is an input that increases the magnification of the first preview image.
  • In another scenario, the first preview image is the preview image acquired by the camera and displayed in the imaging area when the camera of the electronic device is pointed at a first position and the zoom factor is n.
  • n may be the default zoom factor of the camera, or it may not be the default zoom factor of the camera (larger than the default zoom factor of the camera or smaller than the default zoom factor of the camera), which is not limited in the embodiment of the present application.
  • the value range of n can be determined according to actual usage requirements, and is not limited in the embodiment of the present application. For example, n is a number greater than or equal to 1.
  • the first input is an input to increase the zoom factor.
  • The first input may be the user's click input on the first preview image (or image preview interface), a sliding input on the first preview image (or image preview interface), or another feasible input, which is not limited in the embodiments of the application.
  • The click input may be a click input of any count or a multi-finger click input, for example, a single-click, double-click, triple-click, two-finger click, or three-finger click input; the sliding input may be a directional sliding input or a multi-finger sliding input, for example, an upward, downward, leftward, or rightward sliding input, or a two-finger or three-finger sliding input.
  • Step 202: In response to the first input, the electronic device displays a second preview image and a thumbnail corresponding to the first preview image.
  • the second preview image is an image obtained by enlarging a partial area of the first preview image, and the thumbnail includes an imaging frame for indicating the position of the partial area in the first preview image.
  • the image content in the thumbnail is the same as the image content in the first preview image, and the image content enclosed by the imaging frame in the thumbnail is the same as the image content in the second preview image.
  • With the camera pointed at the first position, as the user increases the zoom factor, the size of the image content displayed in the imaging area of the electronic device is enlarged and the range of the displayed content shrinks; as the user decreases the zoom factor, the size of the displayed content is reduced and the range of the displayed content expands.
  • Generally, the zoom factor defaults to 1 (usually the smallest zoom factor, represented by 1x). The user can adjust the zoom factor (increase it or decrease it) by a two-finger input on the preview interface.
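The two-finger zoom adjustment just described can be sketched as follows; the function name and the clamping bounds are hypothetical, chosen only to illustrate scaling the zoom factor by the ratio of finger distances:

```python
def updated_zoom(zoom: float, d_start: float, d_now: float,
                 z_min: float = 1.0, z_max: float = 10.0) -> float:
    """Scale the current zoom factor by the ratio of the current two-finger
    distance to the distance when the gesture began, clamped to the range
    the camera supports (bounds here are illustrative)."""
    return min(max(zoom * d_now / d_start, z_min), z_max)
```

Spreading the fingers apart (d_now > d_start) increases the zoom factor; pinching them together decreases it, never going below the 1x default.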
  • the image (image content) of the second preview image and the image (image content) of the thumbnail are different.
  • the second preview image is an image displayed in the imaging area when the zoom factor is greater than n.
  • the image displayed in the imaging area is proportional to the thumbnail.
  • this ratio can be denoted as T1.
  • the positional relationship between the imaging frame and the thumbnail is equivalent to the positional relationship between the partial area and the first preview image.
  • The positional relationship between the imaging frame and the thumbnail can be used to indicate the positional relationship between the partial area and the first preview image. If the image content of the second preview image changes (due to user input, camera movement, etc.), the positional relationship between the imaging frame and the thumbnail always changes to follow the positional relationship between the second preview image (the partial area) and the first preview image; the two always correspond.
  • the image in the imaging frame is the same as the image of the second preview image.
  • the image content enclosed by the imaging frame in the thumbnail is the same as the image content of the second preview image.
  • As the zoom factor increases, the imaging frame shrinks: the range of image content it encloses in the thumbnail matches the imaging range of the imaging area, with sizes in proportion (the scale being T1).
  • In this way, the image content enclosed by the imaging frame in the thumbnail is always the same as that of the preview image displayed in the imaging area, so the thumbnail intuitively shows the user where the image formed in the imaging area lies within the thumbnail.
  • When the zoom factor is n, the imaging frame and the thumbnail are the same size; when the user increases the zoom factor, the thumbnail (always the image obtained when the camera points in the first direction with zoom factor n) does not change, the imaging frame shrinks, and the range of image content in the preview displayed in the imaging area shrinks (while the size of that content is enlarged).
  • If the user moves the electronic device to change its position, or if the camera of the electronic device can move automatically according to the user's input (for example, a telescopic camera that can move once extended out of the housing), where the movement includes motion states such as rotation and translation, the image formed in the imaging area of the electronic device changes accordingly, and the position of the imaging frame in the thumbnail changes with it.
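How the imaging frame tracks such movement can be illustrated with a small sketch (assumed coordinate conventions; `t1` is the imaging-area-to-thumbnail scale denoted T1 above, and the function name is hypothetical):

```python
def frame_after_pan(frame_x: float, frame_y: float,
                    dx: float, dy: float, t1: float) -> tuple:
    """When the image formed in the imaging area shifts by (dx, dy) pixels
    (because the device or its camera moved), the imaging frame moves by
    the same displacement scaled down by the ratio t1 between the imaging
    area and the thumbnail."""
    return (frame_x + dx / t1, frame_y + dy / t1)
```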
  • In the figure, the mark “1” indicates the second preview image, the mark “7x” indicates that the zoom factor of the second preview image is 7, the mark “2” indicates the thumbnail, and the mark “3” indicates the imaging frame.
  • The following description is mainly based on the zoom-in scenario for the image displayed on the shooting preview interface.
  • For the zoom-in scenario for an image stored in the electronic device, refer to the related descriptions of the zoom-in scenario for the image displayed on the shooting preview interface, which are not repeated here.
  • The embodiment of the present application provides an image display method.
  • When the electronic device displays a first preview image and receives a first input from a user, it displays, in response to the first input, a second preview image and a thumbnail corresponding to the first preview image.
  • The second preview image is an image obtained by enlarging a partial area of the first preview image.
  • The thumbnail includes an imaging frame used to indicate the position of the partial area in the first preview image (that is, the image content enclosed by the imaging frame in the thumbnail is the same as that of the second preview image, so the position of the second preview image within the first preview image can be determined from the position of the imaging frame in the thumbnail).
  • In this way, the user can determine the position of the second preview image in the first preview image from the position of the imaging frame in the thumbnail, and can determine the position of the target object in the first preview image from its position in the thumbnail.
  • The user can therefore decide how to input according to the positional relationship between the imaging frame and the target object in the thumbnail, and can quickly and accurately obtain a preview image containing the (enlarged) target object.
  • As a result, the operation efficiency of zooming in on a target object in a preview image can be improved.
  • the user can trigger the electronic device to display the preview image including the target object through input.
  • the image display method provided in the embodiment of the present application may further include the following steps 203 to 204.
  • Step 203: In a case where the target object is not included in the second preview image, the electronic device receives a second input from the user.
  • the second input is used to move the imaging frame to the position of the target object in the thumbnail.
  • the second input is used to trigger the electronic device to move the imaging frame to the position of the target object in the thumbnail.
  • The second preview image may include all of the target object, part of the target object, or none of the target object.
  • In the embodiments of this application, both the case where the second preview image includes only part of the target object and the case where it includes none of it are recorded as the second preview image not including the target object.
  • That the second preview image does not include the target object means that the target object in the thumbnail is not located inside the imaging frame.
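This "not included" convention amounts to a simple containment test in thumbnail coordinates; the sketch below uses assumed (x, y, w, h) rectangles rather than anything specified by the patent:

```python
def target_included(frame, target):
    """True only when the target's bounding box lies entirely inside the
    imaging frame; a partially covered target counts as not included,
    matching the convention above. Both arguments are (x, y, w, h)
    tuples in thumbnail coordinates."""
    fx, fy, fw, fh = frame
    tx, ty, tw, th = target
    return (tx >= fx and ty >= fy and
            tx + tw <= fx + fw and ty + th <= fy + fh)
```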
  • Since the user can intuitively perceive the position of the target object through the thumbnail, the user can operate the electronic device according to the thumbnail to change the image formed in the imaging area of the electronic device, or can directly change that image by operating the thumbnail.
  • The second input may be an input by which the user moves the second preview image displayed on the image preview interface according to the position of the target object in the thumbnail.
  • Alternatively, the second input may be an input by which the user moves the electronic device according to the position of the target object in the thumbnail (or the positional relationship between the target object and the imaging frame in the thumbnail); by moving the electronic device according to the thumbnail, the user changes the image formed in the imaging area of the electronic device.
  • In the case that the camera of the electronic device can move automatically, the second input may be the user's input to move the imaging frame to the location of the target object in the thumbnail, after which the electronic device controls the camera to move according to the movement of the imaging frame; or the second input may be the user's input on the target object in the thumbnail (that is, an input selecting the target object in the thumbnail, such as a click input or sliding input), by which the user changes the image formed in the imaging area by operating the thumbnail, after which the electronic device controls the camera to move accordingly.
  • When the second preview image does not include the target object and the user needs to capture an image that includes it (or zoom in on the target object in a stored image, hereinafter both referred to as zooming in on the target object), the user can, through the second input, trigger the electronic device to include the target object in the preview image displayed in the imaging area.
  • Step 204: In response to the second input, the electronic device updates the second preview image to a third preview image.
  • the target object is included in the third preview image.
  • That is, in response to the second input, the electronic device updates the second preview image to a third preview image including the target object, and the user can then, through a shooting input, trigger the electronic device to capture an image including the target object according to the third preview image.
  • The thumbnail serves as a reference by which the user adjusts the image formed in the imaging area, and can be displayed hovering over the preview image; after the image is captured (or the target object is enlarged), the electronic device does not need to save the thumbnail and can delete it automatically.
  • the second input is an input for the user to move the second preview image according to the position of the target object in the thumbnail.
  • the second input may also be an input for the user to move the preview image in the direction indicated by the first identifier, and the first identifier is generated by the electronic device according to the position of the target object in the thumbnail.
  • For the generation of the first identifier, reference may be made to the following description of generating the first identifier when the first preview image is an image displayed on the shooting preview interface, which is not repeated here.
  • When the first preview image is an image displayed on the shooting preview interface, the second input is any one of the following: an input by which the user moves the electronic device according to the position of the target object in the thumbnail; or an input by which the user moves the electronic device in the direction indicated by a first identifier, where the first identifier is generated by the electronic device according to the position of the target object in the thumbnail.
  • When the second input is an input by which the user moves the electronic device according to the position of the target object in the thumbnail, it can be understood that the user, by moving the electronic device according to that position (or the positional relationship between the target object and the imaging frame in the thumbnail), triggers the electronic device to move the imaging frame to the location of the target object in the thumbnail.
  • When the second input is an input by which the user moves the electronic device in the direction indicated by the first identifier, it can be understood that this input triggers the electronic device to move the imaging frame to the location of the target object in the thumbnail, where the first identifier is generated by the electronic device according to the position of the target object in the thumbnail (or the positional relationship between the target object and the imaging frame in the thumbnail).
  • In addition, when the second input is an input by which the user moves the electronic device in the direction indicated by the first identifier, the electronic device needs to first generate the first identifier according to the user's input before receiving the second input.
  • In one possible case, the user first triggers the electronic device, through an input, to determine the target object in the thumbnail, and the electronic device then generates the first identifier according to the positional relationship between the imaging frame and the target object in the thumbnail. In another possible case, the user first determines the target object in the first preview image through an input; when the thumbnail is generated, the target object is determined in the thumbnail, and the first identifier is finally generated according to the positional relationship between the imaging frame and the target object in the thumbnail.
  • the image display method provided in the embodiment of the present application may further include the following steps 205 to 206.
  • Step 205 The electronic device receives a third input of the user to the target object in the thumbnail.
  • the third input may be the user's click input on the target object in the thumbnail, the user's sliding input on the target object in the thumbnail, or another feasible input on the target object in the thumbnail, which is not limited in the embodiment of this application.
  • Step 206 In response to the third input, the electronic device generates a first identifier according to the position of the target object in the thumbnail.
  • the electronic device obtains the target object in the thumbnail in response to the third input, and then generates the first identifier according to the position of the target object in the thumbnail (or the positional relationship between the target object and the imaging frame in the thumbnail).
  • the electronic device can generate the first identifier according to the coordinates of A'.
  • the electronic device may calculate the position relationship between A′ and the imaging area according to the position relationship between A and the imaging frame, and then obtain the coordinates of A′, so that the electronic device may generate the first identifier according to the coordinates of A′.
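As a sketch of this coordinate calculation (the patent does not specify an implementation; the function names, coordinate conventions, and sizes below are hypothetical), the position of A′ in the imaging area can be derived from the position of A relative to the imaging frame in the thumbnail, and the first identifier can then be formed as a line from the center of the imaging area to A′:

```python
def map_thumbnail_to_imaging_area(a, frame, imaging_size):
    """Map point A (target in thumbnail coordinates) to A' in the imaging area.

    a: (x, y) of the target A in the thumbnail.
    frame: (left, top, width, height) of the imaging frame in the thumbnail.
    imaging_size: (width, height) of the imaging area (second preview image).
    A point outside the imaging frame maps outside the visible imaging area,
    which is exactly what the guide line needs to point toward.
    """
    fx, fy, fw, fh = frame
    iw, ih = imaging_size
    # Normalize A against the imaging frame, then scale to the imaging area.
    return ((a[0] - fx) / fw * iw, (a[1] - fy) / fh * ih)


def first_identifier(a_prime, imaging_size):
    """First identifier as a line from the imaging-area center to A'."""
    cx, cy = imaging_size[0] / 2, imaging_size[1] / 2
    return ((cx, cy), a_prime)
```

For example, with a 20x20 imaging frame at (10, 10) in the thumbnail and a 200x200 imaging area, a target at (40, 30) in the thumbnail maps to (300.0, 200.0), i.e. to the right of and below the visible area, so the guide line points down-right.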
  • details are not repeated in the specific embodiments of this application.
  • the first identifier may be a line from the center point of the imaging area to A', the first identifier may also be a line from A to A', or other lines, which are not limited in the embodiment of the present application.
  • the first mark may be a solid line or a dashed line; it may be a ray with an arrow, a line segment without an arrow, or another form, which is not limited in the embodiment of the present application.
  • the first mark is the line from the center point of the imaging area to A'(marked by a solid line in the figure).
  • the first mark is the line from A to A'(marked by a solid line in the figure).
  • the user triggers the electronic device to generate the first identifier through the third input on the target object in the thumbnail.
  • the electronic device displays the first identifier.
  • in this way, the user can more intuitively see how to move the electronic device according to the first identifier, and thus move the electronic device more precisely.
  • the foregoing step 201 may be specifically implemented by the following steps 201a to 201c, and the foregoing step 202 may be specifically implemented by the following step 202a.
  • Step 201a In the case of displaying the first preview image, the electronic device receives a fourth input of the user to the target object in the first preview image.
  • the fourth input is the input of the user selecting the target object in the first preview image.
  • the fourth input can be the user's click input on the target object in the first preview image, the user's sliding input on the target object in the first preview image, or another feasible input, which is not limited in the embodiment of this application.
  • Step 201b In response to the fourth input, the electronic device displays the first mark in the first preview image.
  • the first mark is used to indicate the position of the target object in the first preview image.
  • the electronic device displays the first mark at the position of the target object in the first preview image.
  • the first mark can be determined according to actual use requirements, and is not limited in the embodiment of the present application. Exemplarily, the first mark is A'.
  • Step 201c The electronic device receives the first input of the user to increase the zoom factor.
  • the first input can be the user's two-finger input on the first preview image, the user's click or sliding input on the control for adjusting the zoom factor, or another feasible input, which is not limited in the embodiment of this application.
  • the above-mentioned two-finger input may be a two-finger tap input, a two-finger sliding input, etc.; the above-mentioned tap input may be a tap input of any number of taps, such as a single-click input, a double-click input, or a triple-click input; the above-mentioned sliding input may be a sliding input in any direction, such as an upward, downward, leftward, or rightward sliding input.
  • Step 202a In response to the first input, the electronic device updates the first preview image to the second preview image, displays the thumbnail on the second preview image, and displays the first logo on the second preview image and the thumbnail.
  • the thumbnail also includes a second mark
  • the second mark is used to indicate the position of the target object in the thumbnail (that is, the second mark is displayed at the position of the target image in the thumbnail).
  • the second mark is generated based on the first mark
  • the first mark is generated based on the position of the second mark in the thumbnail (or the positional relationship between the target object and the imaging frame in the thumbnail).
  • the second mark is used to indicate the position of the target object in the thumbnail.
  • the second mark can be determined according to actual usage requirements, and is not limited in the embodiment of the present application.
  • the second mark is A.
  • for the description of T1, reference may be made to the related description of T1 in step 202, which is not repeated here.
  • the electronic device can also obtain the positional relationship between A′ and the imaging area according to the positional relationship between A and the imaging frame.
  • before enlarging the zoom factor, the user first triggers the electronic device to determine the target object on the first preview image through the fourth input, and then triggers the electronic device, through the first input of increasing the zoom factor, to update the preview image (update the first preview image to the second preview image) and display the thumbnail.
  • the target object is determined in the thumbnail (the second mark is displayed), and then the first identifier is generated according to the position of the target object in the thumbnail.
  • since the electronic device displays the first identifier, the user can more intuitively see how to move the electronic device according to the first identifier, thereby moving the electronic device more accurately.
  • the second input may be the user's input for the thumbnail.
  • the second input is an input for the user to move the imaging frame to the position of the target object in the thumbnail.
  • the image display method may further include the following step 207.
  • Step 207 In response to the second input, the electronic device moves the camera of the electronic device according to the movement of the imaging frame.
  • since the camera of the electronic device can move automatically under the control of the electronic device, the user can trigger the electronic device to move the camera in accordance with the movement of the imaging frame by moving the imaging frame to the position of the target object in the thumbnail. In this way, the camera can acquire the target object and obtain a preview image containing the target object in the imaging area.
  • the second input is the user's input on the target object in the thumbnail (that is, the second input is the user's input to select the target object in the thumbnail); in conjunction with FIG. 4, as shown in FIG. 8, before step 204, the image display method provided by the embodiment of the present application may further include the following step 208.
  • Step 208 In response to the second input, the electronic device moves the camera of the electronic device according to the positional relationship between the target object and the imaging frame in the thumbnail.
  • since the camera of the electronic device can move automatically under the control of the electronic device, the user can, through an input on the target object in the thumbnail, trigger the electronic device to automatically move the camera according to the positional relationship between the target object and the imaging frame in the thumbnail, so that the camera can acquire the target object and obtain a preview image containing the target object in the imaging area.
  • various solutions are provided for triggering the electronic device to move the imaging frame to the position of the target object in the thumbnail, so that the camera can quickly acquire the target object and obtain a preview image containing the target object in the imaging area.
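As an illustrative sketch of steps 207 and 208 (hypothetical; the patent does not define the control algorithm), the electronic device could derive the camera movement from the offset between the imaging-frame center and the target object in thumbnail coordinates:

```python
def camera_pan_vector(target, frame):
    """Offset from the imaging-frame center to the target, in thumbnail coordinates.

    target: (x, y) of the target object in the thumbnail.
    frame: (left, top, width, height) of the imaging frame in the thumbnail.
    A positive dx suggests panning the camera right; a positive dy, down.
    A (0, 0) result means the target already lies at the frame center.
    """
    fx, fy, fw, fh = frame
    cx, cy = fx + fw / 2, fy + fh / 2
    return (target[0] - cx, target[1] - cy)
```

The device would repeat this as the frame (or target selection) moves, driving the camera until the vector is near zero and the target object appears in the imaging area.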
  • the electronic device may display a pop-up window asking whether to enable the contrast shooting mode (or the contrast preview mode, that is, a preview mode for the stored image, in which the above-mentioned thumbnail is displayed while the preview image is displayed); the user can trigger the electronic device to enable the contrast shooting mode (or contrast preview mode) and display the preview image interface. If the electronic device receives the user's input to increase the zoom factor (increase the image preview factor), the electronic device updates the preview image and displays the thumbnail on the preview image.
  • the image preview interface may include a control for enabling the contrast shooting mode (or contrast preview mode), and the user can trigger the electronic device to enable the contrast shooting mode (or contrast preview mode) through an input on that control. If the electronic device receives the user's input to increase the zoom factor, the electronic device updates the preview image and displays the thumbnail on the preview image.
  • when the electronic device displays the image preview interface, if the electronic device receives an input from the user to increase the zoom factor (increase the image preview factor), the electronic device may display a control for enabling the contrast shooting mode (or contrast preview mode); if the electronic device receives the user's input on that control, the electronic device enables the contrast shooting mode (or contrast preview mode), updates the preview image, and displays the thumbnail on the preview image.
  • in one case, a single input to increase the zoom factor can meet the user's need for zoomed-in shooting; in another case, the zoom factor may need to be adjusted multiple times to meet the user's need for zoomed-in shooting.
  • the input to increase the zoom factor in one step is the user's input on the preview image.
  • before the electronic device displays the thumbnail, the input to adjust the zoom factor is the user's input on the preview image; after the electronic device displays the thumbnail, the input to adjust the zoom factor may be the user's input on the preview image or the user's input on the imaging frame, which is not limited in the embodiment of the present application.
  • steps 201 to 202 can be specifically implemented through the following steps 201d to 202b.
  • Step 201d In the case of displaying the first preview image, the electronic device receives the first input of the user to increase the zoom factor.
  • Step 202b In response to the first input, the electronic device updates the first preview image to the second preview image, and displays the thumbnail on the second preview image.
  • when the electronic device displays the first preview image, if the user's input to increase the zoom factor is received, the electronic device updates the preview image and displays the thumbnail floating on the preview image.
  • the first input is an input to increase the zoom factor
  • the first input includes a first target input and a second target input
  • the second target input is the user's input on the imaging frame, or the second target input is the user's input on the preview image; in conjunction with FIG. 4, as shown in FIG. 9, the above-mentioned steps 201-202 can be specifically implemented by the following steps 201e, 202c, 201f, and 202d.
  • Step 201e The electronic device receives the user's first target input when displaying the first preview image.
  • the first target input may be a user's two-finger input on the first preview image.
  • Step 202c In response to the first target input, the electronic device updates the first preview image to the fourth preview image, and displays the thumbnail on the fourth preview image.
  • Step 201f The electronic device receives the user's second target input.
  • the second target input may be the user's two-finger input on the imaging frame, and the second target input may also be the user's two-finger input on the fourth preview image, which is not limited in the embodiment of the present application.
  • the arrows indicated by the marks "4" and "5" in the figure represent that the second target input is the user's two-finger magnification input on the preview image, and the arrows indicated by the marks "6" and "7" represent that the second target input can also be the user's two-finger magnification input on the imaging frame.
  • Step 202d In response to the second target input, the electronic device updates the fourth preview image to the second preview image, and displays the thumbnail on the second preview image.
  • the electronic device updates the preview image in response to the second target input, keeping the thumbnail unchanged, while the size and position of the imaging frame in the thumbnail change.
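The relationship between the zoom factor and the imaging frame in steps 201f/202d can be sketched as follows (a hypothetical model; the patent does not give formulas, and the sizes, names, and centering behavior here are assumptions). The thumbnail stays fixed while the frame shrinks as the zoom factor grows:

```python
def imaging_frame_rect(thumb_size, zoom, center):
    """Imaging frame inside the thumbnail for a given zoom factor.

    thumb_size: (width, height) of the thumbnail, which never changes.
    zoom: zoom factor >= 1; a larger zoom yields a smaller frame.
    center: (x, y) in the thumbnail that the enlarged view is centered on.
    Returns (left, top, width, height): the frame covers 1/zoom of the
    thumbnail in each dimension, so a two-finger magnification input
    shrinks the frame around the chosen center.
    """
    tw, th = thumb_size
    fw, fh = tw / zoom, th / zoom
    return (center[0] - fw / 2, center[1] - fh / 2, fw, fh)
```

For example, doubling the zoom on a 100x80 thumbnail halves the frame to 50x40; zooming to 4x shrinks it to 25x20 around the same center.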
  • the above steps 201-202 can also be specifically implemented through the above-mentioned steps 201d-202b, or through the above-mentioned steps 201e, 202c, 201f, and 202d; for the specific description, reference may be made to the above description (where the first input is the input that increases the image preview multiple), which is not repeated here.
  • an input method for adjusting the zoom factor is added, which can make the user's operation more flexible.
  • each of the drawings in the embodiments of the present application is exemplarily described in combination with the drawings of an independent embodiment.
  • in specific implementation, each of the drawings may also be implemented in combination with any other drawings that can be combined, which is not limited in the embodiments of the present application.
  • the above-mentioned step 201-step 202 can be specifically implemented by the above-mentioned step 201e-step 202c-step 201f-step 202d.
  • an embodiment of the present application provides an electronic device 120.
  • the electronic device 120 includes: a receiving module 121 and a display module 122. The receiving module 121 is configured to receive a first input of a user when the first preview image is displayed; the display module 122 is configured to, in response to the first input received by the receiving module 121, display a second preview image and a thumbnail corresponding to the first preview image, where the second preview image is an image obtained by enlarging a partial area of the first preview image, the thumbnail includes an imaging frame, and the imaging frame is used to indicate the position of the partial area in the first preview image.
  • the positional relationship between the imaging frame and the thumbnail is equivalent to the positional relationship between the partial area and the first preview image.
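This equivalence can be stated concretely (an illustrative check with hypothetical sizes, not part of the patent): expressed as fractions of their containers, the imaging frame within the thumbnail and the partial area within the first preview image occupy the same normalized rectangle.

```python
def normalized_rect(rect, container_size):
    """Express a rectangle as fractions (x, y, w, h) of its container."""
    x, y, w, h = rect
    cw, ch = container_size
    return (x / cw, y / ch, w / cw, h / ch)


# Hypothetical sizes: a 100x80 thumbnail of a 1000x800 first preview image.
# The imaging frame and the enlarged partial area share one normalized rect.
thumb_frame = normalized_rect((25, 20, 50, 40), (100, 80))
preview_area = normalized_rect((250, 200, 500, 400), (1000, 800))
assert thumb_frame == preview_area
```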
  • the receiving module 121 is further configured to receive a second input of the user if the target object is not included in the second preview image after the display module 122 displays the second preview image and the thumbnail corresponding to the first preview image, where the second input is used to move the imaging frame to the position where the target object in the thumbnail is located; the display module 122 is further configured to, in response to the second input received by the receiving module 121, update the second preview image to a third preview image, where the third preview image includes the target object.
  • the second input is an input for the user to move the second preview image according to the position of the target object in the thumbnail; or, the second input is any one of the following: an input of the user moving the electronic device according to the position of the target object in the thumbnail, or an input of the user moving the electronic device in the direction indicated by the first identifier, where the first identifier is generated by the electronic device according to the position of the target object in the thumbnail.
  • the electronic device 120 further includes: a generating module 123. The receiving module 121 is further configured to receive a third input of the user on the target object in the thumbnail before receiving the second input of the user; the generating module 123 is configured to, in response to the third input received by the receiving module 121, generate the first identifier according to the position of the target object in the thumbnail.
  • the receiving module 121 is specifically configured to, when the first preview image is displayed, receive a fourth input of the user on the target object in the first preview image and, in response to the fourth input, display a first mark in the first preview image, where the first mark is used to indicate the position of the target object in the first preview image;
  • the display module 122 is specifically configured to update the first preview image to the second preview image in response to the first input received by the receiving module 121, and display the thumbnail on the second preview image , And display the first mark on the second preview image and the thumbnail, wherein the thumbnail also includes a second mark, the second mark is used to indicate the position of the target object in the thumbnail, and the second mark is The first mark is generated according to the first mark, and the first mark is generated according to the position of the second mark in the thumbnail.
  • the electronic device 120 further includes: a moving module 124. The second input is an input of the user moving the imaging frame to the position of the target object in the thumbnail, and the moving module 124 is configured to move the camera of the electronic device according to the movement of the imaging frame before the display module 122 updates the second preview image to the third preview image; or, the second input is the user's input on the target object in the thumbnail, and the moving module 124 is configured to, before the display module 122 updates the second preview image to the third preview image, move the camera of the electronic device according to the positional relationship between the target object and the imaging frame in the thumbnail.
  • the first input is an input to increase the zoom factor
  • the first input includes a first target input and a second target input
  • the second target input is a user input to the imaging frame
  • the display module 122 is specifically configured to: in response to the first target input, update the first preview image to a fourth preview image and display the thumbnail on the fourth preview image; and in response to the second target input, update the fourth preview image to the second preview image and display the thumbnail on the second preview image.
  • the modules that must be included in the electronic device 120 are indicated by solid-line boxes, such as the receiving module 121 and the display module 122; the modules that may or may not be included in the electronic device 120 are indicated by dashed-line boxes, such as the generating module 123 and the moving module 124.
  • the electronic device provided in the embodiment of the present application can implement each process shown in any one of FIG. 2 to FIG. 10 in the foregoing method embodiment. In order to avoid repetition, details are not described herein again.
  • An embodiment of the present application provides an electronic device.
  • the electronic device displays a first preview image
  • receives a first input from a user; in response to the first input, it displays a second preview image and a thumbnail corresponding to the first preview image.
  • that is, the image content of the first preview image is the same as the image content of the thumbnail, and the size of the first preview image is proportional to the size of the thumbnail.
  • the second preview image is an image obtained by enlarging a partial area of the first preview image; the thumbnail includes an imaging frame, and the imaging frame is used to indicate the position of the partial area in the first preview image (that is, the image content enclosed by the imaging frame in the thumbnail is the same as the image content of the second preview image; in other words, the position of the second preview image in the first preview image can be determined according to the position of the imaging frame in the thumbnail).
  • the user can determine the position of the second preview image in the first preview image according to the position of the imaging frame in the thumbnail, and can determine the position, in the first preview image, of the target object that needs to be zoomed in according to its position in the thumbnail, so that the user can decide how to provide input according to the positional relationship between the imaging frame and the target object in the thumbnail, and can quickly and accurately obtain a preview image containing the target object (that is, an enlarged image of the target object); thus the operation efficiency of zooming in on the target object in the preview image can be improved.
  • FIG. 12 is a schematic diagram of the hardware structure of an electronic device that implements each embodiment of the present application.
  • the electronic device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
  • Those skilled in the art can understand that the structure of the electronic device shown in FIG. 12 does not constitute a limitation on the electronic device.
  • the electronic device may include more or fewer components than those shown in the figure, a combination of certain components, or a different arrangement of components.
  • electronic devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, in-vehicle electronic devices, wearable devices, and pedometers.
  • the user input unit 107 is configured to receive the first input of the user when the first preview image is displayed; the display unit 106 is configured to, in response to the first input, display the second preview image and the thumbnail corresponding to the first preview image.
  • the second preview image is an image obtained by enlarging a partial area of the first preview image, and the thumbnail includes an imaging frame for indicating the position of the partial area in the first preview image.
  • the electronic device receives the first input of the user when displaying the first preview image; in response to the first input, it displays the second preview image and the thumbnail corresponding to the first preview image (that is, the image content of the first preview image is the same as that of the thumbnail, and the size of the first preview image is proportional to the size of the thumbnail); the second preview image is an image obtained by enlarging a partial area of the first preview image; the thumbnail includes an imaging frame, and the imaging frame is used to indicate the position of the partial area in the first preview image (that is, the image content enclosed by the imaging frame in the thumbnail is the same as that of the second preview image; in other words, the position of the second preview image in the first preview image can be determined according to the position of the imaging frame in the thumbnail).
  • the user can determine the position of the second preview image in the first preview image according to the position of the imaging frame in the thumbnail, and can determine the position, in the first preview image, of the target object that needs to be zoomed in according to its position in the thumbnail, so that the user can decide how to provide input according to the positional relationship between the imaging frame and the target object in the thumbnail, and can quickly and accurately obtain a preview image containing the target object (that is, an enlarged image of the target object); thus the operation efficiency of zooming in on the target object in the preview image can be improved.
  • the radio frequency unit 101 can be used for receiving and sending signals in the process of sending and receiving information or during a call. Specifically, it receives downlink data from the base station and delivers it to the processor 110 for processing; in addition, it sends uplink data to the base station.
  • the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
  • the electronic device provides users with wireless broadband Internet access through the network module 102, such as helping users to send and receive emails, browse web pages, and access streaming media.
  • the audio output unit 103 can convert the audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output it as sound. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (for example, call signal reception sound, message reception sound, etc.).
  • the audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
  • the input unit 104 is used to receive audio or video signals.
  • the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
  • the processed image frames can be displayed on the display unit 106.
  • the image frames processed by the graphics processor 1041 can be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the network module 102.
  • the microphone 1042 can receive sound, and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output in the case of a telephone call mode.
  • the electronic device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or percussion); the sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not repeated here.
  • the display unit 106 is used to display information input by the user or information provided to the user.
  • the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the user input unit 107 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user input unit 107 includes a touch panel 1071 and other input devices 1072.
  • the touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory).
  • the touch panel 1071 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110.
  • the touch panel 1071 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the user input unit 107 may also include other input devices 1072.
  • other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
  • the touch panel 1071 can be overlaid on the display panel 1061.
  • when the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides corresponding visual output on the display panel 1061 according to the type of the touch event.
  • although the touch panel 1071 and the display panel 1061 are used as two independent components to implement the input and output functions of the electronic device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, which is not specifically limited here.
  • the interface unit 108 is an interface for connecting an external device and the electronic device 100.
  • the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
  • the interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the electronic device 100, or can be used to transfer data between the electronic device 100 and the external device.
  • the memory 109 can be used to store software programs and various data.
  • the memory 109 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.), etc.; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.), etc.
  • the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the processor 110 is the control center of the electronic device; it uses various interfaces and lines to connect the various parts of the entire electronic device, runs or executes software programs and/or modules stored in the memory 109, calls data stored in the memory 109, and performs various functions of the electronic device and processes data, so as to monitor the electronic device as a whole.
  • the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly processes the operating system, user interface, and application programs, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 110.
  • the electronic device 100 may also include a power source 111 (such as a battery) for supplying power to various components.
  • the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
  • the electronic device 100 includes some functional modules not shown, which will not be repeated here.
  • an embodiment of the present application further provides an electronic device, which may include the processor 110 shown in FIG. 12, the memory 109, and a computer program stored in the memory 109 and runnable on the processor 110.
  • when the computer program is executed by the processor 110, each process of the image display method shown in any one of FIG. 2 to FIG. 10 in the foregoing method embodiments is implemented, and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
  • an embodiment of the present application further provides a computer-readable storage medium, on which a computer program is stored; when the computer program is executed by a processor, each process of the image display method shown in any one of FIG. 2 to FIG. 10 in the foregoing method embodiments is implemented.
  • the computer-readable storage medium may be, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose an image display method and an electronic device. The method includes: receiving a first input of a user while a first preview image is displayed; and, in response to the first input, displaying a second preview image and a thumbnail corresponding to the first preview image, the second preview image being an image obtained by enlarging a partial region of the first preview image, the thumbnail comprising a framing box, and the framing box being used to indicate the position of the partial region in the first preview image.

Description

图像显示方法及电子设备
相关申请的交叉引用
本申请主张在2019年10月30日在中国提交的中国专利申请号201911046260.1的优先权,其全部内容通过引用包含于此。
技术领域
本申请实施例涉及通信技术领域,尤其涉及一种图像显示方法及电子设备。
背景技术
随着终端技术的不断发展,电子设备的图像预览功能越来越强大。例如,在图像预览的过程中,用户对放大预览图像的需求越来越高。
以用户在拍摄预览界面放大预览图像为例,目前,在被拍摄对象较远或被拍摄对象较小的情况下,用户需要通过调整图像的变焦倍数的方法来放大预览图像中的目标对象的像,从而达到使目标对象的像清晰的目的。然而,当变焦倍数太大时,预览图像中可能只包括目标对象的局部像,甚至不包括目标对象的像。这时,用户可能需要移动电子设备,以使成像区域中包括目标对象的像。但是当变焦倍数过大,或者拍摄场景中有很多与目标对象类似的物体时,用户控制成像区域包括目标对象的像的操作比较复杂。例如,用户可能需要反复移动电子设备,甚至可能需要再次缩小变焦倍数,重新定位放大位置,再重新放大预览图像,才能使成像区域中包括目标对象的像。
因此,传统技术放大预览图像中的目标对象的操作效率较低。
发明内容
本申请实施例提供一种图像显示方法及电子设备,以解决传统技术放大预览图像中的目标对象的操作效率较低的问题。
为了解决上述技术问题,本申请是这样实现的:
第一方面,本申请实施例提供了一种图像显示方法,应用于电子设备,该方法包括:在显示第一预览图像的情况下,接收用户的第一输入;响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图,第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置。
第二方面,本申请实施例提供了一种电子设备,该电子设备包括:接收模块和显示模块;该接收模块,用于在显示第一预览图像的情况下,接收用户的第一输入;该显示模块,用于响应于该接收模块接收的第一输入,显示第二预览图像和与第一预览图像对应的缩略图,第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置。
第三方面,本申请实施例提供了一种电子设备,包括处理器、存储器及存储在该存储器上并可在该处理器上运行的计算机程序,该计算机程序被该处理器执行时实现如第一方面中的图像显示方法的步骤。
第四方面,本申请实施例提供了一种计算机可读存储介质,该计算机可读存储介质上存储计算机程序,该计算机程序被处理器执行时实现如第一方面中的图像显示方法的步骤。
在本申请实施例中,电子设备在显示第一预览图像的情况下,接收用户的第一输入;响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图(即第一预览图像的图像内容与缩略图的图像内容相同,而第一预览图像的尺寸与缩略图的尺寸成比例),第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置(即成像框在缩略图中包围的图像内容与第二预览图像的图像内容相同,也就是说,可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置)。通过该方案,用户可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置,而且用户可以根据用户需要放大的目标对象在缩略图中的位置,确定目标对象在第一预览图像中的位置,从而用户可以根据成像框和目标对象在缩略图中的位置关系确定如何输入,可以快速准确地获得包含目标对象的预览图像(即放大目标对象的像),进而可以提高放大预览图像中的目标对象的操作效率。
附图说明
图1为本申请实施例提供的一种可能的安卓操作系统的架构示意图;
图2为本申请实施例提供的图像显示方法的流程图之一;
图3为本申请实施例提供的图像显示方法的界面的示意图之一;
图4为本申请实施例提供的图像显示方法的流程图之二;
图5为本申请实施例提供的图像显示方法的流程图之三;
图6(a)为本申请实施例提供的图像显示方法的界面的示意图之二;
图6(b)为本申请实施例提供的图像显示方法的界面的示意图之三;
图7为本申请实施例提供的图像显示方法的流程图之四;
图8为本申请实施例提供的图像显示方法的流程图之五;
图9为本申请实施例提供的图像显示方法的流程图之六;
图10为本申请实施例提供的图像显示方法的界面的示意图之四;
图11为本申请实施例提供的电子设备的结构示意图;
图12为本申请实施例提供的电子设备的硬件示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本文中术语“和/或”,是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。本文中符号“/”表示关联对象是或者的关系,例如A/B表示A或者B。
本申请的说明书和权利要求书中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一输入、第二输入、第三输入和第四输入等是用于区别不同的输入,而不是用于描述输入的特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如” 等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或者两个以上,例如,多个处理单元是指两个或者两个以上的处理单元;多个元件是指两个或者两个以上的元件等。
本申请实施例提供一种图像显示方法,电子设备在显示第一预览图像的情况下,接收用户的第一输入;响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图(即第一预览图像的图像内容与缩略图的图像内容相同,而第一预览图像的尺寸与缩略图的尺寸成比例),第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置(即成像框在缩略图中包围的图像内容与第二预览图像的图像内容相同,也就是说,可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置)。通过该方案,用户可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置,而且用户可以根据用户需要放大的目标对象在缩略图中的位置,确定目标对象在第一预览图像中的位置,从而用户可以根据成像框和目标对象在缩略图中的位置关系确定如何输入,可以快速准确地获得包含目标对象的预览图像(即放大目标对象的像),进而可以提高放大预览图像中的目标对象的操作效率。
下面以安卓操作系统为例,介绍一下本申请实施例提供的图像显示方法所应用的软件环境。
如图1所示,为本申请实施例提供的一种可能的安卓操作系统的架构示意图。在图1中,安卓操作系统的架构包括4层,分别为:应用程序层、应用程序框架层、系统运行库层和内核层(具体可以为Linux内核层)。
其中,应用程序层包括安卓操作系统中的各个应用程序(包括系统应用程序和第三方应用程序)。
应用程序框架层是应用程序的框架,开发人员可以在遵守应用程序的框架的开发原则的情况下,基于应用程序框架层开发一些应用程序。
系统运行库层包括库(也称为系统库)和安卓操作系统运行环境。库主要为安卓操作系统提供其所需的各类资源。安卓操作系统运行环境用于为安卓操作系统提供软件环境。
内核层是安卓操作系统的操作系统层,属于安卓操作系统软件层次的最底层。内核层基于Linux内核为安卓操作系统提供核心系统服务和与硬件相关的驱动程序。
以安卓操作系统为例,本申请实施例中,开发人员可以基于上述如图1所示的安卓操作系统的系统架构,开发实现本申请实施例提供的图像显示方法的软件程序,从而使得该图像显示方法可以基于如图1所示的安卓操作系统运行。即处理器或者电子设备可以通过在安卓操作系统中运行该软件程序实现本申请实施例提供的图像显示方法。
本申请实施例中的电子设备可以为移动电子设备,也可以为非移动电子设备。移动电子设备可以为手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本或者个人数字助理(personal digital assistant,PDA)等;非移动电子设备可以为个人计算机(personal computer,PC)、电视机(television,TV)、柜员机或者自助机等;本申请实施例不作具体限定。
本申请实施例提供的图像显示方法的执行主体可以为上述的电子设备(包括移动电子设备和非移动电子设备),也可以为该电子设备中能够实现该方法的功能模块和/或功能实体,具体的可以根据实际使用需求确定,本申请实施例不作限定。下面以电子设备为例,对本申请实施例提供的图像显示方法进行示例性的说明。
参考图2所示,本申请实施例提供了一种图像显示方法,该方法可以包括下述的步骤201-步骤202。
步骤201、电子设备在显示第一预览图像的情况下,接收用户的第一输入。
本申请实施例提供的图像显示方法可以应用于图像放大场景,具体的可以包括针对电子设备中存储的图像的放大场景,或者针对拍摄预览界面显示的图像的放大场景,还可以包括其他的图像放大场景,本申请实施例不作限定。
需要说明的是,本申请实施例中,电子设备中存储的图像可以包括电子设备本地(例如图库)保存的图像,也可以包括应用程序(例如购物应用程序、即时社交应用程序等)中缓存的图像,还可以包括其他的,本申请实施例不作限定。
针对电子设备中存储的图像的放大场景,第一预览图像为电子设备中保存的图像。第一输入为增大第一预览图像的放大倍数的输入。
针对拍摄预览界面显示的图像的放大场景,第一预览图像是在电子设备的摄像头对准第一方位且变焦倍数为n时,该摄像头获取并在成像区域显示的预览图像。其中,n可以为摄像头默认的变焦倍数,也可以不是摄像头默认的变焦倍数(大于摄像头默认的变焦倍数或小于摄像头默认的变焦倍数),本申请实施例不作限定。n的取值范围可以根据实际使用需求确定,本申请实施例不作限定,例如,n为大于或等于1的数。第一输入为增大变焦倍数的输入。
可选地,第一输入可以为用户在第一预览图像(或图像预览界面)上的点击输入,也可以为用户在第一预览图像(或图像预览界面)上的滑动输入,还可以为其他的可行性输入,本申请实施例不作限定。
示例性的,上述点击输入可以为任意次数的点击输入或多指点击输入,例如,单击输入、双击输入、三击输入、双指点击输入或三指点击输入;上述滑动输入可以为向任意方向上的滑动输入或多指滑动输入,例如,向上的滑动输入、向下的滑动输入、向左的滑动输入、向右的滑动输入、双指滑动输入或三指滑动输入。
步骤202、电子设备响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图。
第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置。
可以理解,缩略图中的图像内容与第一预览图像中的图像内容相同,成像框在缩略图中包围的图像内容与第二预览图像的图像内容相同。
可以理解,在摄像头对准第一方位的情况下,在用户将变焦倍数调大的过程中,电子设备的成像区域显示的图像内容的尺寸放大,显示的图像内容的范围缩小;在用户将变焦倍数调小的过程中,电子设备的成像区域显示的图像内容的尺寸缩小,显示的图像内容的范围放大。通常启动电子设备的拍照功能时,变焦倍数默认为1(通常也是最小的变焦倍数,用1x表示),用户可以通过在预览界面的双指输入,调节变焦倍数(增大变焦倍数或减小变焦倍数)。
本申请实施例中,第二预览图像的图像(图像内容)和缩略图的图像(图像内容)不相同。第二预览图像为变焦倍数大于n时,成像区域显示的图像。
该摄像头对准第一方位且变焦倍数为n时,成像区域显示的图像(第一预览图像)与缩略图成比例,本申请实施例中,可以将该比例记为T1。
可选地,该成像框与该缩略图之间的位置关系,等同于该局部区域与第一预览图像之间的位置关系。
可以理解,该成像框与该缩略图之间的位置关系可以用于指示该局部区域与第一预览图像之间的位置关系。若第二预览图像的图像内容(由于用户的输入,摄像头的移动等)发生变化,成像框与缩略图之间的位置关系始终随着第二预览图像(局部区域)与第一预览图像之间的位置关系的变化而变化,二者始终互相对应。
进一步可以理解,该成像框内的图像与第二预览图像的图像相同,具体指该成像框在缩略图中包围的图像内容与第二预览图像的图像内容相同,也可以说,成像框在缩略图中包围的图像内容的范围与成像区域的成像范围相同,但尺寸成比例(比例为T1)。本申请实施例中,无论用户如何调整变焦倍数,成像框在缩略图中包围的图像内容始终与成像区域显示的预览图像的图像内容相同,这样可以直观的给用户呈现出手机成像区域所成的像在缩略图中的位置。在变焦倍数为n时,成像框和缩略图的大小相同;在用户将变焦倍数调大的过程中,缩略图(始终为该摄像头对准第一方位且变焦倍数为n时获得的图像)不变,成像框缩小,成像区域显示的预览图像的图像内容的范围缩小(图像内容的尺寸放大)。
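上段描述的比例关系(变焦倍数为n时成像框与缩略图等大,变焦倍数增大时成像框等比例缩小)可以用一个简化模型示意。下面是一个假设性的Python草图,其中framing_box、base_zoom、center等命名以及"成像框中心由center给出"的假设均为本文示例,本申请并未限定具体实现:

```python
def framing_box(thumb_w, thumb_h, zoom, base_zoom=1.0, center=(0.5, 0.5)):
    """按简化模型计算成像框在缩略图中的矩形(左上角坐标与宽高)。

    假设:成像框宽高 = 缩略图宽高 * (base_zoom / zoom),
    center 为成像框中心在缩略图中的归一化位置(假设参数)。
    """
    scale = base_zoom / zoom          # 变焦倍数为 base_zoom 时,成像框与缩略图等大
    w, h = thumb_w * scale, thumb_h * scale
    cx, cy = center[0] * thumb_w, center[1] * thumb_h
    return (cx - w / 2, cy - h / 2, w, h)

# 变焦倍数等于 base_zoom 时,成像框覆盖整个缩略图
print(framing_box(200, 150, zoom=1.0))   # (0.0, 0.0, 200.0, 150.0)
# 变焦倍数为 7(对应图 3 中的 7x)时,成像框宽高缩小为缩略图的 1/7
print(framing_box(200, 150, zoom=7.0))
```

该草图仅示意"缩略图不变、成像框随变焦倍数缩小"的几何关系,成像框随摄像头移动而平移的行为可通过改变center体现。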
可以理解,如果用户移动电子设备来改变电子设备的位置或者如果电子设备的摄像头根据用户的输入可以自动移动(例如伸缩式摄像头,在处于伸出电子设备的外壳的状态下,摄像头可以移动,本申请实施例中,移动包括转动、平移等运动状态)的话,电子设备的成像区域所成的像会随着发生改变,缩略图内成像框的位置也会随着发生改变。
示例性的,如图3所示,标记“1”指示的为第二预览图像,其中,标记“7x”表示第二预览图像的变焦倍数为7,标记“2”指示的为缩略图,标记“3”指示的为成像框。
需要说明的是,本申请实施例中,主要以针对拍摄预览界面显示的图像的放大场景进行描述,针对电子设备中存储的图像的放大场景的描述可以参考针对拍摄预览界面显示的图像的放大场景相关描述,此处不再赘述。
本申请实施例提供了一种图像显示方法,电子设备在在显示第一预览图像的情况下,接收用户的第一输入;响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图(即第一预览图像的图像内容与缩略图的图像内容相同,而第一预览图像的尺寸与缩略 图的尺寸成比例),第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置(即成像框在缩略图中包围的图像内容与第二预览图像的图像内容相同,也就是说,可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置)。通过该方案,用户可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置,而且用户可以根据用户需要放大的目标对象在缩略图中的位置,确定目标对象在第一预览图像中的位置,从而用户可以根据成像框和目标对象在缩略图中的位置关系确定如何输入,可以快速准确地获得包含目标对象的预览图像(即放大目标对象的像),进而可以提高放大预览图像中的目标对象的操作效率。
可选地,本申请实施例中,在第二预览图像中未包括目标对象的情况下,用户可以通过输入触发电子设备显示包括目标对象的预览图像。
示例性的,结合图2,如图4所示,在上述步骤202之后,本申请实施例提供的图像显示方法还可以包括下述的步骤203-步骤204。
步骤203、在第二预览图像中未包括目标对象的情况下,电子设备接收用户的第二输入。
第二输入用于触发该电子设备将该成像框移动至该缩略图中的目标对象所在的位置。
第二预览图像中可能包括全部的目标对象,也可能包括部分的目标对象,也可能完全不包括目标对象。本申请实施例中,第二预览图像中包括部分的目标对象,以及第二预览图像中完全不包括目标对象的情况,均记为第二预览图像中未包括目标对象。
本申请实施例中,第二预览图像中未包括目标对象,也就意味着缩略图中的目标对象未位于成像框内。
由于用户可以通过缩略图直观感受目标对象的位置,因此用户可以根据缩略图操作电子设备来改变电子设备的成像区域所成的像,或者用户可以直接通过操作缩略图来改变电子设备的成像区域所成的像。
针对电子设备中存储的图像的放大场景,第二输入可以为用户根据该目标对象在该缩略图中的位置,移动图像预览界面显示的第二预览图像的输入。
针对拍摄预览界面显示的图像的放大场景,在电子设备的摄像头不能自动移动的情况下,第二输入可以为用户根据该目标对象在该缩略图中的位置关系(或缩略图中的该目标对象和该成像框的位置关系),移动电子设备的输入(用户根据缩略图,通过移动电子设备来改变电子设备的成像区域所成的像);在电子设备的摄像头可以自动移动的情况下,第二输入可以为用户将该成像框移动至该缩略图中的该目标对象所在的位置的输入,然后电子设备根据成像框的移动,控制摄像头移动,或者,第二输入也可以为用户在缩略图中对目标对象的输入(即在缩略图中选择目标对象的输入,该输入可以为用户对缩略图中的目标对象的点击输入、滑动输入等)(用户通过操作缩略图来改变电子设备的成像区域所成的像),然后电子设备根据成像框和缩略图中目标对象的位置关系,控制摄像头移动。
示例性的,对上述点击输入和滑动输入的描述可以参考上述步骤201中对第一输入的描述中对点击输入和滑动输入的相关描述,此处不再赘述。
本申请实施例中,第二预览图像中未包括目标对象,而用户需要拍摄包括目标对象的图像(或者放大存储的图像中的目标对象,以下简称放大目标对象),因此用户可以通过第二输入,触发电子设备使成像区域显示的预览图像中包括目标对象。
步骤204、电子设备响应于第二输入,将第二预览图像更新为第三预览图像。
第三预览图像中包括该目标对象。
电子设备响应于第二输入,将第二预览图像更新为包括目标对象的第三预览图像,用户可以通过拍摄输入,触发电子设备根据第三预览图像拍摄得到包括目标对象的图像。
需要说明的是,本申请实施例中,在拍摄过程中(或者放大目标对象的过程中),缩略图作为用户调整成像区域所成的像的参考,可以一直悬浮显示在预览图像上;当拍摄(或者放大目标对象)完成后,电子设备无需保存缩略图,可自动删除该缩略图。
可选地,在第一预览图像为该电子设备中存储的图像的情况下,第二输入为用户根据该目标对象在该缩略图中的位置,移动第二预览图像的输入。第二输入也可以为用户按照第一标识指示的方向移动预览图像的输入,第一标识为该电子设备根据该目标对象在该缩略图中的位置生成的。针对第一标识的生成的描述可以参考下述对第一预览图像为拍摄预览界面显示的图像的情况下,对第一标识的生成的相关描述,此处不再赘述。
可选地,在第一预览图像为拍摄预览界面显示的图像的情况下,第二输入为以下任意一项:用户根据该目标对象在该缩略图中的位置,移动该电子设备的输入;用户按照第一标识指示的方向移动该电子设备的输入,第一标识为该电子设备根据该目标对象在该缩略图中的位置生成的。
一种情况,第二输入为用户根据该目标对象在该缩略图中的位置,移动该电子设备的输入。可以理解,用户通过根据该目标对象在该缩略图中的位置(也可以是缩略图中的该目标对象和该成像框的位置关系),移动电子设备的输入,触发电子设备将该成像框移动至该缩略图中的该目标对象所在的位置。
另一种情况,第二输入为用户按照第一标识指示的方向移动该电子设备的输入,可以理解,用户通过按照第一标识指示的方向移动该电子设备的输入,触发电子设备将该成像框移动至该缩略图中的该目标对象所在的位置,第一标识为该电子设备根据该目标对象在该缩略图中的位置(或缩略图中的该目标对象和该成像框的位置关系)生成的。
可选地,在第二输入为用户按照第一标识指示的方向移动该电子设备的输入的情况下,在接收用户的第二输入之前,电子设备需要先根据用户的输入生成第一标识。一种可能的情况,用户需要先通过输入触发电子设备在该缩略图中确定目标对象,然后根据缩略图中成像框和目标对象的位置关系生成第一标识;另一种可能的情况,用户需要先通过输入在第一预览图像中确定目标对象,然后在生成缩略图时,在缩略图中确定目标对象,最后根据缩略图中成像框和目标对象的位置关系生成第一标识。
示例性的,结合图4,如图5所示,在步骤203之前,本申请实施例提供的图像显示方法还可以包括下述的步骤205-步骤206。
步骤205、电子设备接收用户在该缩略图中对该目标对象的第三输入。
可选地,第三输入可以为用户在该缩略图中对该目标对象的点击输入,也可以为用户在该缩略图中对该目标对象的滑动输入,还可以为用户在该缩略图中对该目标对象的其他可行性输入,本申请实施例不作限定。
示例性的,对上述点击输入和滑动输入的相关描述可以参考上述步骤201中对第一输入的描述中对点击输入和滑动输入的相关描述,此处不再赘述。
步骤206、电子设备响应于第三输入,根据该目标对象在该缩略图中的位置,生成第一标识。
可以理解,电子设备响应于第三输入,获得缩略图中的目标对象,然后根据该目标对象在该缩略图中的位置(或缩略图中的该目标对象和该成像框的位置关系),生成第一标识。
可选地,电子设备可以根据缩略图中的目标对象(记为A)所在的位置(坐标),计算出实景中的目标对象超出成像区域所成的像(记为A')所在的位置(坐标)。具体可以通过下述公式得到:A'的坐标=A的坐标*T1*电子设备的变焦倍数。其中,对T1的描述可以参考上述步骤202中对T1的相关描述,此处不再赘述。电子设备可以根据A'的坐标生成第一标识。
可选地,电子设备可以根据A和该成像框的位置关系,计算得到A'和成像区域的位置关系,继而得到A'的坐标,从而电子设备可以根据A'的坐标生成第一标识。具体的本申请实施例不予赘述。
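上文由缩略图坐标计算A'坐标的公式,以及以成像区域中心点指向A'的连线作为第一标识的思路,可以用如下Python草图示意(thumb_to_scene、guide_direction等命名为本文假设,实际实现本申请未限定):

```python
def thumb_to_scene(ax, ay, t1, zoom):
    """按上文公式,由缩略图中目标对象A的坐标计算实景像A'的坐标:
    A'的坐标 = A的坐标 * T1 * 电子设备的变焦倍数。"""
    return ax * t1 * zoom, ay * t1 * zoom

def guide_direction(imaging_center, a_prime):
    """第一标识可取成像区域中心点指向A'的方向向量(对应图6(a)中的连线,示意)。"""
    return a_prime[0] - imaging_center[0], a_prime[1] - imaging_center[1]

a_prime = thumb_to_scene(10.0, 20.0, t1=4.0, zoom=7.0)
print(a_prime)                              # (280.0, 560.0)
print(guide_direction((540.0, 960.0), a_prime))
```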
示例性的,第一标识可以为成像区域的中心点到A'的连线,第一标识也可以是A到A'的连线,还可以是其他的连线,本申请实施例不作限定。第一标识可以为实线,也可以为虚线,可以是带箭头的射线,也可以是不带箭头的线段,还可以是其他的,本申请实施例不作限定。
示例性的,如图6(a)所示,第一标识为成像区域的中心点到A'的连线(图中用实射线标示)。或者,如图6(b)所示,第一标识为A到A'的连线(图中用实线段标示)。
本申请实施例中,用户通过对缩略图中的目标对象的第三输入,触发电子设备生成第一标识,在电子设备显示第一标识的情况下,用户根据第一标识可以更直观的感受如何移动电子设备,从而更精准的移动电子设备。
示例性的,上述步骤201具体可以通过下述步骤201a-步骤201c实现,上述步骤202具体可以通过下述步骤202a实现。
步骤201a、在显示第一预览图像的情况下,电子设备接收用户对第一预览图像中的该目标对象的第四输入。
第四输入为用户在第一预览图像中选择目标对象的输入。可选地,第四输入可以为用户在第一预览图像中对目标对象的点击输入,也可以为用户在第一预览图像中对目标对象的滑动输入,还可以为其他的可行性输入,本申请实施例不作限定。
示例性的,对上述点击输入和滑动输入的相关描述可以参考上述步骤201中对第一输入的描述中对点击输入和滑动输入的相关描述,此处不再赘述。
步骤201b、电子设备响应于第四输入,在第一预览图像中显示第一标记。
第一标记用于指示该目标对象在第一预览图像中的位置。
可以理解,电子设备响应于第四输入,在第一预览图像中的该目标对象所在的位置显示第一标记。第一标记可以根据实际使用需求确定,本申请实施例不作限定。示例性的,第一标记为A'。
步骤201c、电子设备接收用户增大变焦倍数的第一输入。
可选地,第一输入可以为用户在第一预览图像上的双指输入,也可以为用户在调节变焦倍数的控件上的点击输入或滑动输入,还可以为其他的可行性输入,本申请实施例不作限定。
示例性的,上述双指输入可以是双指点击输入、双指滑动输入等;上述点击输入可以为任意次数的点击输入,如单击输入、双击输入或三击输入等;上述滑动输入可以为向任意方向的滑动输入,如向上的滑动输入、向下的滑动输入、向左的滑动输入或向右的滑动输入等。
步骤202a、电子设备响应于第一输入,将第一预览图像更新为第二预览图像,在第二预览图像上显示该缩略图,以及在第二预览图像和该缩略图上显示第一标识。
其中,该缩略图中还包括第二标记,第二标记用于指示该目标对象在该缩略图中的位置(即该缩略图中的该目标图像所在的位置上显示有第二标记),第二标记是根据第一标记生成的,第一标识为根据第二标记在该缩略图中的位置(或缩略图中的该目标对象和该成像框的位置关系)生成的。
第二标记用于指示目标对象在缩略图中的位置。第二标记可以根据实际使用需求确定,本申请实施例不作限定。示例性的,第二标记为A。电子设备可以根据A'(第一标记)的坐标计算得到A(第二标记)的坐标,计算公式可以为:A的坐标=A'的坐标/电子设备的变焦倍数/T1。其中,对T1的描述可以参考上述步骤202中对T1的相关描述,此处不再赘述。
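上文由A'(第一标记)反算A(第二标记)坐标的公式同样可以示意如下(Python草图,scene_to_thumb命名为本文假设):

```python
def scene_to_thumb(apx, apy, t1, zoom):
    """按上文公式,由第一标记A'的坐标反算第二标记A在缩略图中的坐标:
    A的坐标 = A'的坐标 / 电子设备的变焦倍数 / T1。"""
    return apx / zoom / t1, apy / zoom / t1

# 与正向换算(A -> A')互为逆运算:280.0 / 7 / 4 = 10.0
print(scene_to_thumb(280.0, 560.0, t1=4.0, zoom=7.0))   # (10.0, 20.0)
```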
电子设备也可以根据A与成像框的位置关系得到A'与成像区域的位置关系。
对第一标识的描述可以参考上述步骤206中对第一标识的相关描述,此处不再赘述。
本申请实施例中,用户在放大变焦倍数之前,先通过第四输入触发电子设备在第一预览图像上确定目标对象,然后再通过增大变焦倍数的第一输入,触发电子设备更新预览图像(将第一预览图像更新为第二预览图像),并显示缩略图,并根据在第一预览图像中确定的目标对象,在缩略图中确定目标对象(显示第二标记),然后根据目标对象在缩略图中的位置生成第一标识。在电子设备显示第一标识的情况下,用户根据第一标识可以更直观地感受如何移动电子设备,从而更精准地移动电子设备。
可选地,在第一预览图像为拍摄预览界面显示的图像的情况下,若电子设备的摄像头可以自动移动,则第二输入可以为用户针对缩略图的输入。
示例性的,第二输入为用户将该成像框移动至该缩略图中的该目标对象所在的位置的输入,结合图4,如图7所示,在步骤204之前,本申请实施例提供的图像显示方法还可以包括下述的步骤207。
步骤207、电子设备响应于第二输入,根据该成像框的移动,移动该电子设备的摄像头。
可以理解,由于电子设备的摄像头可以根据电子设备的控制自动移动,因此,用户可以通过将成像框移动至缩略图中的目标对象所在的位置的输入,触发电子设备根据成像框的移动移动该摄像头,从而可以使摄像头获取到目标对象,并在成像区域得到包含目标对象的预览图像。
示例性的,第二输入为用户在该缩略图中对该目标对象的输入(第二输入为用户在缩略图中选择目标对象的输入);结合图4,如图8所示,在步骤204之前,本申请实施例提供的图像显示方法还可以包括下述的步骤208。
步骤208、电子设备响应于第二输入,根据该缩略图中的该目标对象和该成像框的位置关系,移动该电子设备的摄像头。
可以理解,由于电子设备的摄像头可以根据电子设备的控制自动移动,因此,用户可以通过在该缩略图中对该目标对象的输入,触发电子设备根据缩略图中的目标对象和成像框的位置关系,自动移动该摄像头,从而可以使摄像头获取到目标对象,并在成像区域得到包含目标对象的预览图像。
本申请实施例中,提供了多种触发电子设备将成像框移动至缩略图中的目标对象所在位置的方案,进而可以使摄像头快速获取到目标对象,并在成像区域得到包含目标对象的预览图像。
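以步骤208为例,"根据缩略图中的目标对象和成像框的位置关系移动摄像头"的一种可能计算方式可示意如下(Python草图;线性换算假设与pan_offset命名均为本文示例,实际的摄像头控制逻辑本申请未限定):

```python
def pan_offset(target_center, box_center, t1, zoom):
    """根据目标对象中心与成像框中心在缩略图中的偏差,
    按线性模型估算摄像头视野需要平移的实景偏移量(假设)。"""
    dx = (target_center[0] - box_center[0]) * t1 * zoom
    dy = (target_center[1] - box_center[1]) * t1 * zoom
    return dx, dy

# 目标对象位于成像框右下方时,得到正向偏移,提示摄像头向右下移动
print(pan_offset((30.0, 40.0), (10.0, 10.0), t1=2.0, zoom=3.0))   # (120.0, 180.0)
```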
可选地,本申请实施例中,在用户通过输入触发电子设备启动相机应用(或针对存储的图像启动图像预览功能)时,电子设备可以显示是否启用对比拍摄模式(或对比预览模式,即针对存储的图像的预览模式)(在显示预览图像的同时,显示上述的缩略图)的弹窗,用户可以通过选择启用对比拍摄模式(或对比预览模式)的输入,触发电子设备启用对比拍摄模式(或对比预览模式),并显示预览图像界面。若电子设备接收到用户增大变焦倍数(增大图像预览倍数)的输入,则电子设备更新预览图像,以及在预览图像上显示缩略图。
可选地,本申请实施例中,在电子设备显示图像预览界面时,图像预览界面上可以包括启用对比拍摄模式(或对比预览模式)的控件,用户可以通过对启用对比拍摄模式(或对比预览模式)的控件的输入触发电子设备启用对比拍摄模式(或对比预览模式)。若电子设备接收到用户增大变焦倍数的输入,则电子设备更新预览图像,以及在预览图像上显示缩略图。
可选地,本申请实施例中,在电子设备显示图像预览界面时,若电子设备接收到用户增大变焦倍数(增大图像预览倍数)的输入,则电子设备可以显示启用对比拍摄模式(或对比预览模式)的控件;若电子设备接收到用户在启用对比拍摄模式(或对比预览模式)的控件上的输入,则电子设备启动对比拍摄模式(或对比预览模式),并更新预览图像,以及在预览图像上显示缩略图。
可选地,一种情况下,一次增大变焦倍数的输入就可以达到用户放大拍摄的需求,另一种情况下,可能需要多次调节变焦倍数的输入才可以达到用户放大拍摄的需求。针对前一种情况,该一次增大变焦倍数的输入为用户在预览图像上的输入。针对后一种情况,在电子设备显示缩略图之前,调节变焦倍数的输入为用户在预览图像上的输入;在电子设备显示缩略图之后,调节变焦倍数的输入可以为用户在预览图像上的输入,也可以为用户在成像框上的输入,本申请实施例不作限定。
示例性的,上述步骤201-步骤202具体的可以通过下述的步骤201d-步骤202b实现。
步骤201d、在显示第一预览图像的情况下,电子设备接收用户增大变焦倍数的第一输入。
对第一输入的描述可以参考上述步骤201c中对第一输入的相关描述,此处不再赘述。
步骤202b、电子设备响应于第一输入,将第一预览图像更新为第二预览图像,并在第二预览图像上显示该缩略图。
本申请实施例中,在电子设备显示第一预览图像的情况下,若接收到用户增大变焦倍数的输入,则更新预览图像,并在预览图像上悬浮显示缩略图。
示例性的,第一输入为增大变焦倍数的输入,第一输入包括第一目标输入和第二目标输入,第二目标输入为用户对该成像框的输入或第二目标输入为用户对预览图像的输入;结合图4,如图9所示,上述步骤201-步骤202具体的可以通过下述的步骤201e-步骤202c-步骤201f-步骤202d实现。
步骤201e、电子设备在显示第一预览图像的情况下,接收用户的第一目标输入。
第一目标输入可以为用户在第一预览图像上的双指输入。
对上述双指输入的描述具体可以参考上述步骤201c中对第一输入的描述中对双指输入的相关描述,此处不再赘述。
步骤202c、电子设备响应于第一目标输入,将第一预览图像更新为第四预览图像,并在第四预览图像上显示该缩略图。
步骤201f、电子设备接收用户的第二目标输入。
第二目标输入可以为用户在成像框上的双指输入,第二目标输入也可以为用户在第四预览图像上的双指输入,本申请实施例不作限定。
示例性的,如图10所示,图中用标记“4”指示的箭头和标记“5”指示的箭头代表第二目标输入为用户在预览图像上的双指放大输入,图中用标记“6”指示的箭头和标记“7”指示的箭头代表第二目标输入也可以为用户在成像框上的双指放大输入。
步骤202d、电子设备响应于第二目标输入,将第四预览图像更新为第二预览图像,并在第二预览图像上显示该缩略图。
可以理解,本申请实施例中,电子设备响应于第二目标输入,更新预览图像,保持缩略图不变,但缩略图中的成像框的尺寸和位置改变。
需要说明的是,针对电子设备中存储的图像的放大场景,上述步骤201-步骤202具体的也可以通过上述的步骤201d-步骤202b实现,或者通过上述的步骤201e-步骤202c-步骤201f-步骤202d实现,具体描述可以参考上述描述(第一输入则为增大图像预览倍数的输入),此处不再赘述。
本申请实施例中,增加了调节变焦倍数的输入方式,可以使用户的操作更灵活。
本申请实施例中的各个附图均是结合单独的实施例的附图示例说明的,具体实现时,各个附图还可以结合其它任意可以结合的附图实现,本申请实施例不作限定。例如,结合图2、图7或图8所示,上述步骤201-步骤202具体的可以通过上述的步骤201e-步骤202c-步骤201f-步骤202d实现。
如图11所示,本申请实施例提供一种电子设备120,该电子设备120包括:接收模块121和显示模块122;该接收模块121,用于在显示第一预览图像的情况下,接收用户的第一输入;该显示模块122,用于响应于该接收模块121接收的第一输入,显示第二预览图像和与第一预览图像对应的缩略图,第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置。
可选地,该成像框与该缩略图之间的位置关系,等同于该局部区域与第一预览图像之间的位置关系。
可选地,该接收模块121,还用于在该显示模块122显示第二预览图像和与第一预览图像对应的缩略图之后,在第二预览图像中未包括目标对象的情况下,接收用户的第二输入,第二输入用于将该成像框移动至该缩略图中的目标对象所在的位置;该显示模块122,还用于响应于该接收模块121接收的第二输入,将第二预览图像更新为第三预览图像,第三预览图像中包括该目标对象。
可选地,在第一预览图像为该电子设备中存储的图像的情况下,第二输入为用户根据该目标对象在该缩略图中的位置,移动第二预览图像的输入;在第一预览图像为拍摄预览界面显示的图像的情况下,第二输入为以下任意一项:用户根据该目标对象在该缩略图中的位置,移动该电子设备的输入;用户按照第一标识指示的方向移动该电子设备的输入,第一标识为该电子设备根据该目标对象在该缩略图中的位置生成的。
可选地,在第二输入为用户按照第一标识指示的方向移动该电子设备的输入的情况下,该电子设备120还包括:生成模块123;该接收模块121,还用于在该接收用户的第二输入之前,接收用户在该缩略图中对该目标对象的第三输入;该生成模块123,用于响应于该接收模块121接收的第三输入,根据该目标对象在该缩略图中的位置,生成第一标识。
可选地,在第二输入为用户按照第一标识指示的方向移动该电子设备的输入的情况下,该接收模块121,具体用于在显示第一预览图像的情况下,接收用户对第一预览图像中的该目标对象的第四输入;响应于第四输入,在第一预览图像中显示第一标记,第一标记用于指示该目标对象在第一预览图像中的位置;接收用户增大变焦倍数的第一输入;该显示模块122,具体用于响应于该接收模块121接收的第一输入,将第一预览图像更新为第二预览图像,在第二预览图像上显示该缩略图,以及在第二预览图像和该缩略图上显示第一标识,其中,该缩略图中还包括第二标记,第二标记用于指示该目标对象在该缩略图中的位置,第二标记是根据第一标记生成的,第一标识为根据第二标记在该缩略图中的位置生成的。
可选地,在第一预览图像为拍摄预览界面显示的图像的情况下,该电子设备120还包括:移动模块124;第二输入为用户将该成像框移动至该缩略图中的该目标对象所在的位置的输入;该移动模块124,用于在该显示模块122将第二预览图像更新为第三预览图像之前,根据该成像框的移动,移动该电子设备的摄像头;或者,第二输入为用户在该缩略图中对该目标对象的输入;该移动模块124,用于在该显示模块122将第二预览图像更新为第三预览图像之前,根据该缩略图中的该目标对象和该成像框的位置关系,移动该电子设备的摄像头。
可选地,第一输入为增大变焦倍数的输入,第一输入包括第一目标输入和第二目标输入,第二目标输入为用户对该成像框的输入;该显示模块122,具体用于响应于第一目标输入,将第一预览图像更新为第四预览图像,并在第四预览图像上显示该缩略图;响应于第二目标输入,将第四预览图像更新为第二预览图像,并在第二预览图像上显示该缩略图。
需要说明的是,如图11所示,电子设备120中一定包括的模块用实线框示意,如接收模块121和显示模块122;电子设备120中可以包括也可以不包括的模块用虚线框示意,如生成模块123和移动模块124。
本申请实施例提供的电子设备能够实现上述方法实施例中图2至图10任意之一所示的各个过程,为避免重复,此处不再赘述。
本申请实施例提供了一种电子设备,电子设备在显示第一预览图像的情况下,接收用户的第一输入;响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图(即第一预览图像的图像内容与缩略图的图像内容相同,而第一预览图像的尺寸与缩略图的尺寸成比例),第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置(即成像框在缩略图中包围的图像内容与第二预览图像的图像内容相同,也就是说,可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置)。通过该方案,用户可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置,而且用户可以根据用户需要放大的目标对象在缩略图中的位置,确定目标对象在第一预览图像中的位置,从而用户可以根据成像框和目标对象在缩略图中的位置关系确定如何输入,可以快速准确地获得包含目标对象的预览图像(即放大目标对象的像),进而可以提高放大预览图像中的目标对象的操作效率。
图12为实现本申请各个实施例的一种电子设备的硬件结构示意图。如图12所示,该电子设备100包括但不限于:射频单元101、网络模块102、音频输出单元103、输入单元104、传感器105、显示单元106、用户输入单元107、接口单元108、存储器109、处理器110、以及电源111等部件。本领域技术人员可以理解,图12中示出的电子设备结构并不构成对电子设备的限定,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本申请实施例中,电子设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载电子设备、可穿戴设备、以及计步器等。
其中,用户输入单元107,用于在显示第一预览图像的情况下,接收用户的第一输入;显示单元106,用于响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图,第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置。
本申请实施例提供的电子设备,在显示第一预览图像的情况下,接收用户的第一输入;响应于第一输入,显示第二预览图像和与第一预览图像对应的缩略图(即第一预览图像的图像内容与缩略图的图像内容相同,而第一预览图像的尺寸与缩略图的尺寸成比例),第二预览图像为对第一预览图像的局部区域进行放大得到的图像,该缩略图包括成像框,该成像框用于指示该局部区域在第一预览图像中的位置(即成像框在缩略图中包围的图像内容与第二预览图像的图像内容相同,也就是说,可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置)。通过该方案,用户可以根据成像框在缩略图中的位置,确定第二预览图像在第一预览图像中的位置,而且用户可以根据用户需要放大的目标对象在缩略图中的位置,确定目标对象在第一预览图像中的位置,从而用户可以根据成像框和目标对象在缩略图中的位置关系确定如何输入,可以快速准确地获得包含目标对象的预览图像(即放大目标对象的像),进而可以提高放大预览图像中的目标对象的操作效率。
应理解的是,本申请实施例中,射频单元101可用于收发信息或通话过程中,信号的接收和发送,具体的,将来自基站的下行数据接收后,给处理器110处理;另外,将上行的数据发送给基站。通常,射频单元101包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元101还可以通过无线通信系统与网络和其他设备通信。
电子设备通过网络模块102为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元103可以将射频单元101或网络模块102接收的或者在存储器109中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元103还可以提供与电子设备100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元103包括扬声器、蜂鸣器以及受话器等。
输入单元104用于接收音频或视频信号。输入单元104可以包括图形处理器(Graphics Processing Unit,GPU)1041和麦克风1042,图形处理器1041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元106上。经图形处理器1041处理后的图像帧可以存储在存储器109(或其它存储介质)中或者经由射频单元101或网络模块102进行发送。麦克风1042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元101发送到移动通信基站的格式输出。
电子设备100还包括至少一种传感器105,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板1061的亮度,接近传感器可在电子设备100移动到耳边时,关闭显示面板1061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别电子设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器105还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元106用于显示由用户输入的信息或提供给用户的信息。显示单元106可包括显示面板1061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板1061。
用户输入单元107可用于接收输入的数字或字符信息,以及产生与电子设备的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元107包括触控面板1071以及其他输入设备1072。触控面板1071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板1071上或在触控面板1071附近的操作)。触控面板1071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器110,接收处理器110发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1071。除了触控面板1071,用户输入单元107还可以包括其他输入设备1072。具体地,其他输入设备1072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板1071可覆盖在显示面板1061上,当触控面板1071检测到在其上或附近的触摸操作后,传送给处理器110以确定触摸事件的类型,随后处理器110根据触摸事件的类型在显示面板1061上提供相应的视觉输出。虽然在图12中,触控面板1071与显示面板1061是作为两个独立的部件来实现电子设备的输入和输出功能,但是在某些实施例中,可以将触控面板1071与显示面板1061集成而实现电子设备的输入和输出功能,具体此处不做限定。
接口单元108为外部装置与电子设备100连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元108可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到电子设备100内的一个或多个元件或者可以用于在电子设备100和外部装置之间传输数据。
存储器109可用于存储软件程序以及各种数据。存储器109可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器109可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器110是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器109内的软件程序和/或模块,以及调用存储在存储器109内的数据,执行电子设备的各种功能和处理数据,从而对电子设备进行整体监控。处理器110可包括一个或多个处理单元;可选地,处理器110可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器110中。
电子设备100还可以包括给各个部件供电的电源111(比如电池),可选地,电源111可以通过电源管理系统与处理器110逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,电子设备100包括一些未示出的功能模块,在此不再赘述。
可选地,本申请实施例还提供一种电子设备,可以包括上述如图12所示的处理器110、存储器109,以及存储在存储器109上并可在该处理器110上运行的计算机程序,该计算机程序被处理器110执行时实现上述方法实施例中图2至图10任意之一所示的图像显示方法的各个过程,且能达到相同的技术效果,为避免重复,此处不再赘述。
本申请实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述方法实施例中图2至图10任意之一所示的图像显示方法的各个过程,且能达到相同的技术效果,为避免重复,此处不再赘述。其中,所述的计算机可读存储介质,如只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台电子设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (18)

  1. 一种图像显示方法,所述方法包括:
    电子设备在显示第一预览图像的情况下,接收用户的第一输入;
    所述电子设备响应于所述第一输入,显示第二预览图像和与所述第一预览图像对应的缩略图,所述第二预览图像为对所述第一预览图像的局部区域进行放大得到的图像,所述缩略图包括成像框,所述成像框用于指示所述局部区域在所述第一预览图像中的位置。
  2. 根据权利要求1所述的方法,其中,所述成像框与所述缩略图之间的位置关系,等同于所述局部区域与所述第一预览图像之间的位置关系。
  3. 根据权利要求1所述的方法,其中,所述显示第二预览图像和与所述第一预览图像对应的缩略图之后,还包括:
    所述电子设备在所述第二预览图像中未包括目标对象的情况下,接收用户的第二输入,所述第二输入用于将所述成像框移动至所述缩略图中的目标对象所在的位置;
    所述电子设备响应于所述第二输入,将所述第二预览图像更新为第三预览图像,所述第三预览图像中包括所述目标对象。
  4. 根据权利要求3所述的方法,其中,在所述第一预览图像为所述电子设备中存储的图像的情况下,所述第二输入为用户根据所述目标对象在所述缩略图中的位置,移动所述第二预览图像的输入;
    在所述第一预览图像为拍摄预览界面显示的图像的情况下,所述第二输入为以下任意一项:用户根据所述目标对象在所述缩略图中的位置,移动所述电子设备的输入;用户按照第一标识指示的方向移动所述电子设备的输入,所述第一标识为所述电子设备根据所述目标对象在所述缩略图中的位置生成的。
  5. 根据权利要求4所述的方法,其中,在所述第二输入为用户按照第一标识指示的方向移动所述电子设备的输入的情况下,所述接收用户的第二输入之前,还包括:
    所述电子设备接收用户在所述缩略图中对所述目标对象的第三输入;
    所述电子设备响应于所述第三输入,根据所述目标对象在所述缩略图中的位置,生成所述第一标识。
  6. 根据权利要求4所述的方法,其中,在所述第二输入为用户按照第一标识指示的方向移动所述电子设备的输入的情况下,所述电子设备在显示第一预览图像的情况下,接收第一输入,包括:
    所述电子设备在显示第一预览图像的情况下,接收用户对所述第一预览图像中的所述目标对象的第四输入;
    所述电子设备响应于所述第四输入,在所述第一预览图像中显示第一标记,所述第一标记用于指示所述目标对象在所述第一预览图像中的位置;
    所述电子设备接收用户增大变焦倍数的所述第一输入;
    所述电子设备响应于所述第一输入,显示第二预览图像和与所述第一预览图像对应的缩略图,包括:
    所述电子设备响应于所述第一输入,将所述第一预览图像更新为所述第二预览图像,在所述第二预览图像上显示所述缩略图,以及在所述第二预览图像和所述缩略图上显示所述第一标识,其中,所述缩略图中还包括第二标记,所述第二标记用于指示所述目标对象在所述缩略图中的位置,所述第二标记是根据所述第一标记生成的,所述第一标识为根据所述第二标记在所述缩略图中的位置生成的。
  7. 根据权利要求3所述的方法,其中,在所述第一预览图像为拍摄预览界面显示的图像的情况下,
    所述第二输入为用户将所述成像框移动至所述缩略图中的所述目标对象所在的位置的输入;所述将所述第二预览图像更新为第三预览图像之前,还包括:
    所述电子设备根据所述成像框的移动,移动所述电子设备的摄像头;
    或者,
    所述第二输入为用户在所述缩略图中对所述目标对象的输入;所述将所述第二预览图像更新为第三预览图像之前,还包括:
    所述电子设备根据所述缩略图中的所述目标对象和所述成像框的位置关系,移动所述电子设备的摄像头。
  8. 根据权利要求3至5和7中任一项所述的方法,其中,所述第一输入为增大变焦倍数的输入,所述第一输入包括第一目标输入和第二目标输入,所述第二目标输入为用户对所述成像框的输入;
    所述电子设备响应于所述第一输入,显示第二预览图像和与所述第一预览图像对应的缩略图,包括:
    所述电子设备响应于所述第一目标输入,将所述第一预览图像更新为第四预览图像,并在所述第四预览图像上显示所述缩略图;
    所述电子设备响应于所述第二目标输入,将所述第四预览图像更新为所述第二预览图像,并在所述第二预览图像上显示所述缩略图。
  9. 一种电子设备,所述电子设备包括:接收模块和显示模块;
    所述接收模块,用于在显示第一预览图像的情况下,接收用户的第一输入;
    所述显示模块,用于响应于所述接收模块接收的所述第一输入,显示第二预览图像和与所述第一预览图像对应的缩略图,所述第二预览图像为对所述第一预览图像的局部区域进行放大得到的图像,所述缩略图包括成像框,所述成像框用于指示所述局部区域在所述第一预览图像中的位置。
  10. 根据权利要求9所述的电子设备,其中,所述成像框与所述缩略图之间的位置关系,等同于所述局部区域与所述第一预览图像之间的位置关系。
  11. 根据权利要求9所述的电子设备,其中,所述接收模块,还用于在所述显示模块显示第二预览图像和与所述第一预览图像对应的缩略图之后,在所述第二预览图像中未包括目标对象的情况下,接收用户的第二输入,所述第二输入用于将所述成像框移动至所述缩略图中的目标对象所在的位置;
    所述显示模块,还用于响应于所述接收模块接收的所述第二输入,将所述第二预览图像更新为第三预览图像,所述第三预览图像中包括所述目标对象。
  12. 根据权利要求11所述的电子设备,其中,在所述第一预览图像为所述电子设备中存储的图像的情况下,所述第二输入为用户根据所述目标对象在所述缩略图中的位置,移动所述第二预览图像的输入;
    在所述第一预览图像为拍摄预览界面显示的图像的情况下,所述第二输入为以下任意一项:用户根据所述目标对象在所述缩略图中的位置,移动所述电子设备的输入;用户按照第一标识指示的方向移动所述电子设备的输入,所述第一标识为所述电子设备根据所述目标对象在所述缩略图中的位置生成的。
  13. 根据权利要求12所述的电子设备,其中,在所述第二输入为用户按照第一标识指示的方向移动所述电子设备的输入的情况下,所述电子设备还包括:生成模块;
    所述接收模块,还用于在所述接收用户的第二输入之前,接收用户在所述缩略图中对所述目标对象的第三输入;
    所述生成模块,用于响应于所述接收模块接收的所述第三输入,根据所述目标对象在所述缩略图中的位置,生成所述第一标识。
  14. 根据权利要求12所述的电子设备,其中,在所述第二输入为用户按照第一标识指示的方向移动所述电子设备的输入的情况下,所述接收模块,具体用于在显示第一预览图像的情况下,接收用户对所述第一预览图像中的所述目标对象的第四输入;响应于所述第四输入,在所述第一预览图像中显示第一标记,所述第一标记用于指示所述目标对象在所述第一预览图像中的位置;接收用户增大变焦倍数的所述第一输入;
    所述显示模块,具体用于响应于所述接收模块接收的所述第一输入,将所述第一预览图像更新为所述第二预览图像,在所述第二预览图像上显示所述缩略图,以及在所述第二预览图像和所述缩略图上显示所述第一标识,其中,所述缩略图中还包括第二标记,所述第二标记用于指示所述目标对象在所述缩略图中的位置,所述第二标记是根据所述第一标记生成的,所述第一标识为根据所述第二标记在所述缩略图中的位置生成的。
  15. 根据权利要求11所述的电子设备,其中,在所述第一预览图像为拍摄预览界面显示的图像的情况下,所述电子设备还包括:移动模块;
    所述第二输入为用户将所述成像框移动至所述缩略图中的所述目标对象所在的位置的输入;所述移动模块,用于在所述显示模块将所述第二预览图像更新为第三预览图像之前,根据所述成像框的移动,移动所述电子设备的摄像头;
    或者,
    所述第二输入为用户在所述缩略图中对所述目标对象的输入;所述移动模块,用于在所述显示模块将所述第二预览图像更新为第三预览图像之前,根据所述缩略图中的所述目标对象和所述成像框的位置关系,移动所述电子设备的摄像头。
  16. 根据权利要求11至13和15中任一项所述的电子设备,其中,所述第一输入为增大变焦倍数的输入,所述第一输入包括第一目标输入和第二目标输入,所述第二目标输入为用户对所述成像框的输入;
    所述显示模块,具体用于响应于所述第一目标输入,将所述第一预览图像更新为第四预览图像,并在所述第四预览图像上显示所述缩略图;响应于所述第二目标输入,将所述第四预览图像更新为所述第二预览图像,并在所述第二预览图像上显示所述缩略图。
  17. 一种电子设备,包括处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时实现如权利要求1至8中任一项所述的图像显示方法的步骤。
  18. 一种计算机可读存储介质,所述计算机可读存储介质上存储计算机程序,所述计算机程序被处理器执行时实现如权利要求1至8中任一项所述的图像显示方法的步骤。
PCT/CN2020/112693 2019-10-30 2020-08-31 图像显示方法及电子设备 WO2021082711A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911046260.1 2019-10-30
CN201911046260.1A CN110908558B (zh) 2019-10-30 2019-10-30 一种图像显示方法及电子设备

Publications (1)

Publication Number Publication Date
WO2021082711A1 true WO2021082711A1 (zh) 2021-05-06

Family

ID=69815087

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112693 WO2021082711A1 (zh) 2019-10-30 2020-08-31 图像显示方法及电子设备

Country Status (2)

Country Link
CN (1) CN110908558B (zh)
WO (1) WO2021082711A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222064A (zh) * 2021-12-16 2022-03-22 Oppo广东移动通信有限公司 辅助拍照方法、装置、终端设备及存储介质

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
CN110908558B (zh) * 2019-10-30 2022-10-18 维沃移动通信(杭州)有限公司 一种图像显示方法及电子设备
CN111010506A (zh) 2019-11-15 2020-04-14 华为技术有限公司 一种拍摄方法及电子设备
CN113497888B (zh) * 2020-04-07 2023-05-02 华为技术有限公司 照片预览方法、电子设备和存储介质
CN113542574A (zh) * 2020-04-15 2021-10-22 华为技术有限公司 变焦下的拍摄预览方法、终端、存储介质及电子设备
CN112181548B (zh) * 2020-08-25 2024-04-30 北京中联合超高清协同技术中心有限公司 一种显示器和图像显示方法
CN113014798A (zh) * 2021-01-27 2021-06-22 维沃移动通信有限公司 图像显示方法、装置及电子设备
CN112954220A (zh) * 2021-03-03 2021-06-11 北京蜂巢世纪科技有限公司 图像预览方法及装置、电子设备、存储介质
CN113067982A (zh) * 2021-03-29 2021-07-02 联想(北京)有限公司 一种采集图像显示方法及电子设备
WO2023220957A1 (zh) * 2022-05-18 2023-11-23 北京小米移动软件有限公司 图像处理方法、装置、移动终端及存储介质

Citations (8)

Publication number Priority date Publication date Assignee Title
CN102348059A (zh) * 2010-07-27 2012-02-08 三洋电机株式会社 摄像装置
CN103327298A (zh) * 2013-05-29 2013-09-25 山西绿色光电产业科学技术研究院(有限公司) 全景高速球一体机的联动浏览方法
WO2014108147A1 (de) * 2013-01-08 2014-07-17 Audi Ag Zoomen und verschieben eines bildinhalts einer anzeigeeinrichtung
US20160035065A1 (en) * 2011-07-12 2016-02-04 Apple Inc. Multifunctional environment for image cropping
CN105872349A (zh) * 2015-01-23 2016-08-17 中兴通讯股份有限公司 拍摄方法、拍摄装置及移动终端
CN106055247A (zh) * 2016-05-25 2016-10-26 努比亚技术有限公司 一种图片显示装置、方法和移动终端
CN107765964A (zh) * 2017-09-07 2018-03-06 深圳岚锋创视网络科技有限公司 一种浏览全景影像文件的局部区域的方法、装置及便携式终端
CN110908558A (zh) * 2019-10-30 2020-03-24 维沃移动通信(杭州)有限公司 一种图像显示方法及电子设备

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US20100134641A1 (en) * 2008-12-01 2010-06-03 Samsung Electronics Co., Ltd. Image capturing device for high-resolution images and extended field-of-view images
JP5436975B2 (ja) * 2009-08-21 2014-03-05 オリンパスイメージング株式会社 カメラ、カメラの表示制御方法、表示装置、及び表示方法
JP5754119B2 (ja) * 2010-12-07 2015-07-29 ソニー株式会社 情報処理装置、情報処理方法及びプログラム
WO2013089190A1 (ja) * 2011-12-16 2013-06-20 オリンパスイメージング株式会社 撮像装置及びその撮像方法、コンピュータにより処理可能な追尾プログラムを記憶する記憶媒体
KR20160029536A (ko) * 2014-09-05 2016-03-15 엘지전자 주식회사 이동 단말기 및 그것의 제어방법
KR102292985B1 (ko) * 2015-08-10 2021-08-24 엘지전자 주식회사 이동 단말기 및 그 제어방법
CN106131394A (zh) * 2016-06-15 2016-11-16 青岛海信移动通信技术股份有限公司 一种拍照的方法及装置
CN106534675A (zh) * 2016-10-17 2017-03-22 努比亚技术有限公司 一种微距拍摄背景虚化的方法及终端
CN108307111A (zh) * 2018-01-22 2018-07-20 努比亚技术有限公司 一种变焦拍照方法、移动终端及存储介质
CN108881723A (zh) * 2018-07-11 2018-11-23 维沃移动通信有限公司 一种图像预览方法及终端
CN109743498B (zh) * 2018-12-24 2021-01-08 维沃移动通信有限公司 一种拍摄参数调整方法及终端设备
CN110209325A (zh) * 2019-05-07 2019-09-06 高新兴科技集团股份有限公司 一种3d场景显示控制方法、系统及设备

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114222064A (zh) * 2021-12-16 2022-03-22 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Assisted photographing method and apparatus, terminal device, and storage medium

Also Published As

Publication number Publication date
CN110908558B (zh) 2022-10-18
CN110908558A (zh) 2020-03-24

Similar Documents

Publication Publication Date Title
WO2021082711A1 (zh) Image display method and electronic device
WO2021083052A1 (zh) Object sharing method and electronic device
WO2021104365A1 (zh) Object sharing method and electronic device
WO2020156466A1 (zh) Photographing method and terminal device
WO2021104195A1 (zh) Image display method and electronic device
WO2021218902A1 (zh) Display control method and apparatus, and electronic device
WO2020063091A1 (zh) Picture processing method and terminal device
WO2021057337A1 (zh) Operation method and electronic device
WO2020258929A1 (zh) Folder interface switching method and terminal device
WO2021115479A1 (zh) Display control method and electronic device
WO2021083087A1 (zh) Screenshot method and terminal device
WO2021098603A1 (zh) Preview screen display method and electronic device
WO2020151460A1 (zh) Object processing method and terminal device
WO2021098697A1 (zh) Screen display control method and electronic device
WO2020151525A1 (zh) Message sending method and terminal device
WO2021129536A1 (zh) Icon moving method and electronic device
WO2021121398A1 (zh) Video recording method and electronic device
WO2021057290A1 (zh) Information control method and electronic device
WO2021082744A1 (zh) Video viewing method and electronic device
WO2021129537A1 (zh) Electronic device control method and electronic device
WO2021115279A1 (zh) Picture display method and electronic device
CN111596990B (zh) Picture display method and apparatus
WO2020155980A1 (zh) Control method and terminal device
WO2020215982A1 (zh) Desktop icon management method and terminal device
WO2021175143A1 (zh) Picture acquisition method and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20883156

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20883156

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 28/02/2023)
