CN114554098A - Display method, display device, electronic apparatus, and readable storage medium - Google Patents
- Publication number
- CN114554098A (application CN202210206093.8A)
- Authority
- CN
- China
- Prior art keywords
- target area
- display
- image
- target
- window
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof; H04N23/60—Control of cameras or camera modules
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/62—Control of parameters via user interfaces
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
The application discloses a display method, a display device, an electronic device, and a readable storage medium, belonging to the technical field of display. The display method comprises the following steps: collecting and displaying a preview image; receiving a first input at a target position in the preview image; and, in response to the first input, determining a target area according to the target position and displaying an enlarged image of the target area, wherein the enlarged image of the target area does not overlap the target area.
Description
Technical Field
The application belongs to the technical field of display, and particularly relates to a display method, a display device, an electronic device and a readable storage medium.
Background
At present, with the improvement of the imaging capability of mobile terminals, a user can preview and display images with the camera anytime and anywhere, for example to tidy his or her appearance. However, the conventional image preview function generally only displays the whole picture and is limited by the display size of the mobile terminal, so the details of the picture are not clear enough and use is inconvenient.
Disclosure of Invention
Embodiments of the present application aim to provide a display method, a display device, an electronic device, and a readable storage medium that can solve the following problems of the existing image preview display function: it generally only displays the whole picture, is limited by the display size of the mobile terminal, does not show picture details clearly enough, and is inconvenient to use.
In a first aspect, an embodiment of the present application provides a display method, where the method includes:
collecting and displaying a preview image;
receiving a first input at a target position in the preview image;
and, in response to the first input, determining a target area according to the target position and displaying an enlarged image of the target area, wherein the enlarged image of the target area does not overlap the target area.
In a second aspect, an embodiment of the present application provides a display device, including:
a first display module for collecting and displaying a preview image;
a first receiving module for receiving a first input at a target position in the preview image;
and a second display module for, in response to the first input, determining a target area according to the target position and displaying an enlarged image of the target area, wherein the enlarged image of the target area does not overlap the target area.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the application, a preview image is collected and displayed, the target area to be magnified is selected through a first input at a target position in the preview image, and the image of the target area is then displayed in an enlarged manner. The user can thus conveniently view a magnified image of any area in the preview image, local details are displayed clearly, the functions of a cosmetic mirror are enriched, and the user experience is improved. Moreover, the enlarged image of the target area does not overlap the target area in the preview image, that is, the enlarged image is placed outside the target area, so the target area is not blocked and the user can intuitively compare the target area before and after magnification.
Drawings
Fig. 1 is a schematic flowchart of a display method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a preview image provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a first input provided by an embodiment of the present application;
FIG. 4 is a schematic view of a target area and a window provided in an embodiment of the present application;
fig. 5 is a schematic diagram illustrating a target position corresponding to a first input as a position of a nose according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a target region and a window determined according to a feature according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a filling process provided in an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating the use of two windows to simultaneously display a target area according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a display device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The display method, the display apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a schematic flow chart of a display method according to an embodiment of the present disclosure. As shown in fig. 1, an embodiment of an aspect of the present application provides a display method applied to an electronic device, where the display method includes the following steps:
step 101: and collecting and displaying the preview image.
Optionally, before step 101, the display method further includes: receiving an opening input from a user; that is, step 101 is performed in response to the opening input. The opening input may be an input for opening a cosmetic mirror function of the electronic device, for example a click input, a long-press input, or a drag input on a cosmetic mirror control in the camera interface, or it may be an input for opening a cosmetic mirror application. In response to the opening input, the electronic device opens the cosmetic mirror function or the cosmetic mirror application, collects a preview image through a default camera, which is usually the front camera of the electronic device, and displays the preview image, dynamically updated in real time, on its screen.
Referring to fig. 2, fig. 2 is a schematic view of a preview image according to an embodiment of the present disclosure. As shown in fig. 2, the electronic device displays a preview image captured by the camera on its screen in full screen.
Step 102: receiving a first input at a target position in the preview image.
Step 103: in response to the first input, determining a target area according to the target position and displaying an enlarged image of the target area, wherein the enlarged image of the target area does not overlap the target area.
In step 102, optionally, the first input may be a click or long-press input at the target position in the preview image displayed on the screen of the electronic device, that is, an input on the screen itself. The first input may also be a gesture input, for example a gesture in which a user's finger points at or touches the user's actual face; this gesture also appears as a corresponding finger image in the preview image, in a mirror-image relationship. Thus, through the first input, the user can select any area of the face to be magnified for viewing, so as to observe the details of the face.
Referring to fig. 3, fig. 3 is a schematic diagram of a first input provided in an embodiment of the present application. As shown in fig. 3, when the user points the finger 301 at a certain position in the preview image, the electronic device may detect the finger 301 by an object recognition or gesture recognition technique and determine the position pointed at or touched by the finger 301 as the target position.
In step 103, if the electronic device receives the first input, it determines the target area according to the target position in response to the first input. For example, if the first input is an input on the screen of the electronic device, the target area in the preview image may be determined according to the target position of the first input on the screen; or, if the first input is a gesture input, the target position in the preview image pointed at by the user's finger may be determined by a gesture recognition technique, and the target area in the preview image may then be determined according to the target position.
After the target area is determined, the image of the target area can be displayed in an enlarged manner on the display screen of the electronic device. That is, by magnifying any target area selected by the user, the user can conveniently and clearly view details at any position of the face, which improves the user experience in scenes such as applying makeup. In addition, the enlarged image of the target area does not overlap the target area selected by the user, that is, the enlarged image is placed outside the target area, so the target area is not blocked and the user can intuitively compare the target area before and after magnification.
Alternatively, when the image of the target area is displayed in an enlarged manner, a window may pop up and the image of the target area may be displayed in the window in an enlarged manner.
Referring to fig. 4, fig. 4 is a schematic view of a target area and a window provided in the present application. As shown in fig. 4, in some embodiments of the application, the target area 302 may be determined according to the target position pointed at or touched by the user's finger 301, and a window 303 is then displayed in another area outside the target area 302; the window 303 is used to display an enlarged image of the target area 302. The target area 302 may be circled, and the target area 302 and the corresponding window 303 may be connected by a line to indicate the correspondence between them. Since the target area 302 and the window 303 do not overlap, the target area is not blocked, and the user can intuitively compare the target area before and after magnification.
In some embodiments of the present application, the window may be displayed on top of the other screen content of the electronic device, that is, the layer containing the window lies above the layer containing the preview image. The window may be located in the display area of the preview image outside the target area, or outside the face display area of the preview image, so that the user can view the complete face.
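As an illustrative sketch of the non-overlapping window placement described above — the axis-aligned rectangles, the four candidate corner positions, and the 10-pixel margin are all assumptions for illustration, not taken from the application:

```python
def rects_overlap(a, b):
    # rect = (x, y, w, h); True if the two rectangles intersect
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_window(screen_w, screen_h, target, win_w, win_h, face=None):
    """Pick a top-left corner for the zoom window that never covers the
    target area and, when possible, also avoids the face box."""
    candidates = [(10, 10), (screen_w - win_w - 10, 10),
                  (10, screen_h - win_h - 10),
                  (screen_w - win_w - 10, screen_h - win_h - 10)]
    best = None
    for (x, y) in candidates:
        win = (x, y, win_w, win_h)
        if rects_overlap(win, target):
            continue                      # never cover the target area
        if best is None:
            best = (x, y)
        if face is None or not rects_overlap(win, face):
            return (x, y)                 # also clear of the face: done
    return best                           # fall back to any non-overlapping spot
```

A real implementation would also account for screen orientation and the live face-detection box, but the constraint itself — the window rectangle must not intersect the target rectangle — is the one the embodiment states.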
In other embodiments of the present application, the magnification of the image of the target area may be a preset magnification, which may be preset by a user, or may be a default value.
Therefore, in the embodiments of the application, a preview image is collected and displayed, the target area to be magnified is selected through a first input at a target position in the preview image, and the image of the target area is displayed in an enlarged manner. The user can conveniently view a magnified image of any area in the preview image, local details are displayed clearly, the functions of a cosmetic mirror are enriched, and the user experience is improved. Since the enlarged image of the target area does not overlap the target area in the preview image, that is, the enlarged image is placed outside the target area, the target area is not blocked and the user can intuitively compare the target area before and after magnification.
In some embodiments of the present application, when the target area is determined according to the target position, an area within a preset range centered on the target position may be determined as the target area. The preset range may be a circular area, a rectangular area, or the like centered on the target position, and parameters such as its size may be set according to actual requirements.
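A minimal sketch of this preset-range selection — the square shape, the 100-pixel side, and the clamp-to-image policy are illustrative assumptions, not values from the application:

```python
def target_region(tap_x, tap_y, img_w, img_h, half=50):
    """Square region of side 2*half centered on the tap position,
    clamped so the whole region stays inside the preview image."""
    x0 = max(0, min(tap_x - half, img_w - 2 * half))
    y0 = max(0, min(tap_y - half, img_h - 2 * half))
    return (x0, y0, 2 * half, 2 * half)   # (x, y, w, h)
```

Clamping keeps the region valid even when the user taps near an edge of the preview, which a naive "center on the tap" rule would not.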
In still other embodiments of the present application, determining the target area according to the target position includes:
detecting whether a feature part exists within a preset area range centered on the target position;
and, when a feature part exists within the preset area range, determining the target area according to the outline of the feature part.
If the target area is framed with a fixed circular or rectangular shape, part of an area the user does not want to pay attention to is usually selected and then enlarged, so the part the user actually wants to magnify and view cannot be effectively highlighted. Therefore, in embodiments of the present application, when a target area is determined according to a target position, it may first be detected whether a feature part exists within a preset area range centered on the target position. The size of the preset area range may be preset, and the feature part may be detected and identified by facial-feature recognition, object recognition, and similar technologies. The feature part may be one of the facial features such as the eyes, nose, or mouth, or some part of the face having a contour, such as a cheekbone, a freckle, or an acne mark.
If a feature part is detected within the preset area range centered on the target position, the target area may be determined based on the outline of the feature part; for example, the area enclosed by the outline boundary of the feature part, or the area enclosed by the smallest circle or smallest rectangle that can contain the outline, may be used as the target area. In this way, the size of the target area is determined in combination with the outline of the specific feature part the user wants to magnify, which avoids magnifying parts the user is not focused on and displays the feature part more prominently.
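The smallest enclosing rectangle mentioned above can be sketched as follows; representing the feature contour as a list of (x, y) points is an assumption for illustration:

```python
def bounding_rect(contour):
    """Smallest axis-aligned rectangle enclosing a feature contour,
    given as a non-empty list of (x, y) points."""
    xs = [p[0] for p in contour]
    ys = [p[1] for p in contour]
    x0, y0 = min(xs), min(ys)
    return (x0, y0, max(xs) - x0, max(ys) - y0)   # (x, y, w, h)
```

In practice the contour would come from a facial-feature or object detector; the rectangle then serves as both the target area and the aspect ratio of the zoom window, as fig. 6 illustrates.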
Referring to fig. 5 and 6, fig. 5 is a schematic diagram in which the target position of the first input is the position of the nose, according to an embodiment of the present application, and fig. 6 is a schematic diagram of a target area and a window determined according to a feature part, according to an embodiment of the present application. As shown in fig. 5 and 6, suppose the first input of the user is pointing the finger 301 at the target position, and the target position is the position of the nose. In response to the first input, the electronic device detects whether a feature part exists within a preset area range centered on the target position; in fig. 5 the feature part is the nose. When determining the target area 302, the shape and size of the target area 302 may be determined according to the outline of the nose: in fig. 5, the smallest rectangle completely enclosing the outline of the nose is taken as the target area 302. The window 303 for displaying the enlarged image of the target area 302 has the same shape as the target area 302, except that its size is larger.
In still other embodiments of the present application, after receiving the first input at the target position in the preview image, the method further includes:
performing first display processing on the display areas of the preview interface other than the target area and the display area in which the enlarged image of the target area is shown, wherein the first display processing comprises any one of hiding processing, blurring processing, and filling processing.
In this embodiment, in order to let the user view the details of the target area better, make those details more prominent, and avoid interference from parts of the picture the user does not want to attend to, first display processing may be performed on the display areas of the preview interface other than the target area and the area in which the enlarged image of the target area is shown. The first display processing may use techniques such as Gaussian blur, and may specifically be any one of hiding processing, blurring processing, and filling processing. Hiding processing removes the image in the corresponding area from display, for example by cropping it out; blurring processing fades or softens the image in the corresponding area, for example by lightening its brightness and color; filling processing replaces the image in the corresponding area with a solid background or other image content, for example a black, white, or gray background, or a sticker.
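The filling variant of this first display processing can be illustrated on a toy 2D grayscale image; the nested-list image representation and the single fill value are assumptions for illustration:

```python
def fill_outside(image, keep_rects, fill=0):
    """Return a copy of a 2D grayscale image (list of rows) in which every
    pixel outside the kept rectangles is replaced by a solid fill value."""
    h, w = len(image), len(image[0])
    out = [[fill] * w for _ in range(h)]            # start fully filled
    for (rx, ry, rw, rh) in keep_rects:
        for y in range(max(0, ry), min(h, ry + rh)):
            for x in range(max(0, rx), min(w, rx + rw)):
                out[y][x] = image[y][x]             # restore kept regions
    return out
```

Here `keep_rects` would be the target area and the zoom window; hiding or blurring would differ only in what is written outside those rectangles.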
Referring to fig. 7, fig. 7 is a schematic diagram of filling processing according to an embodiment of the present disclosure. As shown in fig. 6 and 7, in this embodiment, filling processing is performed on the display areas other than the target area 302 and the window 303 on the screen of the electronic device (the filled portions in fig. 6 and 7), so that the target area 302 and the window 303 are displayed more prominently and are convenient for the user to view.
In other embodiments of the present application, the displaying the image of the target area in an enlarged manner includes:
displaying the image of the target area in a first window and a second window, respectively, according to a preset magnification;
wherein the picture in the first window is fixed as the image of the target area in a target frame and does not change, the picture in the second window is updated in real time, neither the first window nor the second window overlaps the target area, and the target frame is determined according to the time at which the first input is received.
In this embodiment, optionally, after the user selects a certain target area, two windows may be used for comparative display, so that the user can compare the state of the target area before and after makeup. When the image of the target area is displayed in the first window at the preset magnification, the picture in the first window is fixed as the image of the target area in the target frame; the target frame may be determined according to the time at which the electronic device receives the first input, for example as the frame of the preview image corresponding to the moment the first input is received. When the image of the target area is displayed in the second window at the preset magnification, the picture in the second window is dynamically updated in real time: the camera acquires the image of the target area in real time and displays it, enlarged, in the second window, so if the user applies makeup in the target area, the corresponding makeup effect can be seen in the second window. The user can thus conveniently compare the state of the target area before and after makeup to determine whether the makeup effect is satisfactory, improving the experience of the cosmetic mirror function.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating that two windows are adopted to simultaneously display a target area according to an embodiment of the present application. As shown in fig. 8, in some embodiments, the window for enlarging the display target region 302 includes a first window 3031 and a second window 3032, wherein the image in the first window 3031 is a stop motion picture, and the image in the second window 3032 is a real-time dynamic image.
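The frozen/live pairing of the two windows can be sketched as a small state holder; the class name and the opaque frame-crop values are illustrative assumptions:

```python
class CompareWindows:
    """Before/after comparison: the first window freezes the crop taken
    from the frame in which the selection was made (the target frame);
    the second window always shows the crop from the latest frame."""

    def __init__(self, target_frame_crop):
        self.frozen = target_frame_crop   # fixed "before" image
        self.live = target_frame_crop     # replaced on every new frame

    def update(self, new_frame_crop):
        self.live = new_frame_crop        # only the live window changes
```

Each camera frame would call `update()` with the freshly cropped, magnified target area, leaving `frozen` untouched as the reference picture.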
In still other embodiments of the present application, after displaying the target area in the window according to the preset magnification, the method further includes:
and recording and storing the enlarged and displayed image of the target area.
For example, when the image of the target area is displayed enlarged in the form of a window, the display picture in the window may be recorded; the window recorded and saved is the one whose picture is dynamically updated in real time. By recording and saving the display picture in the window, a makeup video file of the target area can be obtained, which preserves the user's makeup steps for that area, so that the user can conveniently share it, for example by sending it to other users or uploading it to a target social platform.
In some embodiments of the present application, after the magnifying and displaying the image of the target area, the method further includes:
determining an image of a target area corresponding to a gesture when the gesture of a user finger is detected to meet a first preset gesture condition;
and adjusting the magnification of the image of the target area corresponding to the gesture to a target magnification, wherein the target magnification is determined according to the gesture.
In this embodiment, input on the screen of the electronic device is not required; the magnification of the image displayed in the window can be conveniently adjusted through gestures, so that the user can better view the details of the corresponding area. For example, the user may preset a first preset gesture condition for adjusting the magnification: the gesture for raising the magnification may be that the included angle formed by two of the user's fingers is greater than a preset angle threshold, and the gesture for lowering it that the angle is smaller than the threshold; alternatively, the gesture for raising the magnification may be a single finger pointing in a first preset direction and the gesture for lowering it a single finger pointing in a second preset direction, and so on. Then, when the gesture of the user's fingers detected in front of the face meets the first preset gesture condition, the image of the target area corresponding to the gesture is determined. If the electronic device currently displays the image of only one target area, that image is the image of the target area corresponding to the gesture; if the electronic device currently displays the images of at least two target areas, the image of the target area that contains the user's finger may be determined as the one corresponding to the gesture. Optionally, if the images of at least two target areas each contain a finger of the user, the image containing the most fingers may be determined as the one corresponding to the gesture, which avoids the problem of being unable to accurately identify which image the user intends to adjust.
It can be seen that, when the image of the target area is displayed enlarged in a window and there are multiple windows, the content displayed in different windows corresponds to the images of different target areas; similarly, the target window corresponding to the gesture, that is, the object whose magnification is to be adjusted, can be determined in the manner described above.
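The most-fingers rule for resolving which target area (or window) a gesture addresses can be sketched as follows; the fingertip-point and rectangle representations are assumptions for illustration:

```python
def pick_region(regions, finger_points):
    """Choose the region (x, y, w, h) containing the most detected
    fingertip points; ties resolve to the earlier region in the list."""
    def count(r):
        rx, ry, rw, rh = r
        return sum(rx <= x < rx + rw and ry <= y < ry + rh
                   for (x, y) in finger_points)
    return max(regions, key=count)
```

With one region and one fingertip this degenerates to the single-target case the embodiment describes; with several regions, the region enclosing the most fingertips wins.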
After the image of the target area corresponding to the gesture is determined, its magnification may be adjusted to a target magnification, where the target magnification is determined according to the gesture. For example, when the gesture for lowering the magnification is that the included angle formed by two of the user's fingers is smaller than a preset angle threshold, the included angle and the target magnification may be set to be positively correlated: the smaller the included angle formed by the two fingers, the smaller the target magnification, and the larger the included angle, the larger the target magnification.
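The positive correlation between finger angle and magnification might be modeled linearly, for illustration; the 0–90° input range and the 1×–8× magnification bounds are assumptions, not values from the application:

```python
def magnification_from_angle(angle_deg, min_mag=1.0, max_mag=8.0):
    """Map the angle between two fingers (clamped to 0-90 degrees) to a
    magnification, positively correlated: wider angle, larger zoom."""
    angle = max(0.0, min(90.0, angle_deg))
    return min_mag + (max_mag - min_mag) * angle / 90.0
```

Any monotonically increasing mapping would satisfy the "positively correlated" requirement; a linear ramp is simply the easiest to reason about and to clamp.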
In other embodiments of the present application, after the magnifying and displaying the image of the target area, the method further includes:
determining an image of a target area corresponding to a gesture when the gesture of a user finger is detected to meet a second preset gesture condition;
adjusting a display size of an image of a target area corresponding to the gesture to a target size, wherein the target size is determined according to the gesture.
In this embodiment, no input on the screen of the electronic device is required: the display size of the image of the target area can be conveniently adjusted through a gesture, so that the user can better view details of the corresponding area. For example, the user may preset a second preset gesture condition for adjusting the display size. The gesture for increasing the display size may be that the included angle formed by two non-adjacent fingers among three of the user's fingers is greater than a preset angle threshold, and the gesture for decreasing it may be that this included angle is smaller than that threshold; alternatively, a single finger pointing in a first preset orientation may increase the display size and a single finger pointing in a second preset orientation may decrease it, and so on. When a finger gesture detected in front of the user's face meets the second preset gesture condition, the image of the target area corresponding to the gesture is determined. If the electronic device currently displays the image of only one target area, that image is the one corresponding to the gesture; if it displays images of at least two target areas, the image that contains the user's finger may be determined as the one corresponding to the gesture.
Alternatively, if the images of at least two target areas each contain a finger, the image containing the most fingers may be determined as the one corresponding to the gesture, which avoids misidentifying the image whose size the user intends to adjust. It can be seen that, when the image of a target area is magnified in a window (for example, when there are multiple windows, each displaying the image of a different target area), the target window corresponding to the gesture, that is, the object whose display size is to be adjusted, can be determined in the same manner.
After the image of the target area corresponding to the user's finger gesture is determined, its display size may be adjusted to a target size determined according to the gesture. For example, when the size-increasing gesture is that the included angle formed by two non-adjacent fingers among three of the user's fingers is greater than a preset angle threshold, that angle may be set to be positively correlated with the target size: the smaller the included angle, the smaller the target size, and the larger the included angle, the larger the target size.
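The first and second preset gesture conditions can be tied together in a single dispatcher: a two-finger angle gesture adjusts magnification, a three-finger (non-adjacent pair) angle gesture adjusts display size. The encoding of a gesture as a finger count plus a measured angle is an assumed simplification of the detected hand pose.

```python
# Assumed gesture classifier: finger count selects which property is adjusted,
# and the included angle relative to a threshold selects the direction.

def classify_gesture(finger_count, included_angle_deg, threshold_deg=45.0):
    """Returns (property, direction) or None for an unrecognized gesture."""
    if finger_count == 2:        # first preset gesture condition
        prop = "magnification"
    elif finger_count == 3:      # second preset gesture condition
        prop = "display_size"
    else:
        return None
    direction = "up" if included_angle_deg > threshold_deg else "down"
    return (prop, direction)
```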
In some embodiments of the present application, after the magnifying and displaying the image of the target area, the method further includes:
receiving a third input of the user to the image of the target area displayed in an enlarged mode;
in response to the third input, unmagnifying display of the image of the target area.
That is, the user can cancel the magnified display of the image of any target area. For example, when images of target areas are magnified in windows, the user may close any window at will. If a third input to a window is received from the user (for example, clicking the window's close control, long-pressing the window, or dragging the window to a preset position, or a gesture input that does not touch the screen of the electronic device and may be preset), the electronic device cancels the display of the window in response to the third input. When cancelling the display, all windows may be closed, or only one window may be closed, for example only the window determined according to the third input.
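The third-input handling above can be sketched as a small policy function: a close input either dismisses every magnification window or only the one the input identifies. The list-of-windows model and the policy flags are illustrative assumptions.

```python
# Hypothetical sketch of third-input handling: close all magnification
# windows, or only the window the input resolved to.

def handle_close_input(windows, target_index=None, close_all=False):
    """windows: list of open magnification windows (any objects).

    Returns the windows that remain open after the third input is handled.
    """
    if close_all or target_index is None:
        return []                       # cancel display of all windows
    return [w for i, w in enumerate(windows) if i != target_index]
```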
In summary, in the embodiments of the present application, a preview image is collected and displayed, the target area to be magnified is selected through a first input on a target position in the preview image, and the image of the target area is then magnified and displayed. Any area of the preview image can thus be conveniently viewed at a larger scale with local details shown clearly, which enriches the functions of a cosmetic mirror and improves the user experience. Moreover, the magnified image of the target area does not overlap the target area in the preview image, that is, it is arranged outside the target area, which avoids occluding the target area and ensures that the user can intuitively compare the target area before and after magnification.
In the display method provided by the embodiments of the present application, the execution body may be a display device. In the embodiments of the present application, a display device executing the display method is taken as an example to describe the display device provided herein.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a display device according to an embodiment of the present application. As shown in fig. 9, another embodiment of the present application further provides a display device, where the display device 900 includes:
a first display module 901, configured to collect and display a preview image;
a first receiving module 902, configured to receive a first input of a target location in the preview image;
a second display module 903, configured to determine, in response to the first input, a target area according to the target position, and enlarge and display an image of the target area, where the enlarged and displayed image of the target area does not overlap with the target area.
Optionally, the second display module comprises:
a detection unit for detecting whether there is a characteristic portion in a preset area range centered on the target position;
and the determining unit is used for determining the target area according to the contour of the characteristic part under the condition that the characteristic part exists in the preset area range.
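The detection and determining units can be sketched together: search a preset range centered on the target position for a characteristic portion (here modelled as pre-extracted contours), and derive the target area from that portion's contour bounding box. The contour representation, search radius, and margin are assumptions for illustration.

```python
# Illustrative sketch of the detection/determining units: find a feature whose
# contour intersects a preset square range around the target position, then
# take its bounding box (with a margin) as the target area.

def determine_target_area(target_pos, features, search_radius=50, pad=10):
    """target_pos: (x, y); features: list of contours, each a list of (x, y).

    Returns a (x, y, w, h) target area for the first feature whose contour
    intersects the search range, or None if no feature is found.
    """
    tx, ty = target_pos
    for contour in features:
        if any(abs(px - tx) <= search_radius and abs(py - ty) <= search_radius
               for (px, py) in contour):
            xs = [p[0] for p in contour]
            ys = [p[1] for p in contour]
            # Target area = contour bounding box plus a small margin.
            return (min(xs) - pad, min(ys) - pad,
                    (max(xs) - min(xs)) + 2 * pad,
                    (max(ys) - min(ys)) + 2 * pad)
    return None
```

In a real implementation the contours would come from a face-landmark or contour detector over the preview frame; returning None corresponds to the case where no characteristic portion exists in the preset range.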
Optionally, the display device further comprises:
and the third display module is used for performing first display processing on other display areas except the display area for displaying the image of the target area in an enlarged manner and the target area in the preview interface, wherein the first display processing comprises any one of hiding processing, blurring processing and filling processing.
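The "first display processing" can be sketched as follows: every region of the preview interface other than the target area and the enlarged-display area receives one of the hide, blur, or fill treatments. Regions are modelled here as labelled tiles, and the processing is reduced to tagging, since the actual pixel operation depends on the graphics stack; both choices are assumptions.

```python
# Assumed tile-based sketch of the first display processing: regions outside
# the target area and the enlarged-display area are hidden, blurred, or filled.

def apply_first_display_processing(tiles, target_tiles, enlarged_tiles,
                                   mode="blur"):
    """tiles: iterable of tile ids; returns {tile_id: treatment}."""
    assert mode in ("hide", "blur", "fill")
    keep = set(target_tiles) | set(enlarged_tiles)
    return {t: ("keep" if t in keep else mode) for t in tiles}
```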
Optionally, the second display module comprises:
the display unit is used for displaying the images of the target area in the first window and the second window according to the preset magnification;
the display frame in the first window is fixed to be an image of a target area in a target frame and is unchanged, the display frame in the second window is updated in real time, the first window and the second window are not overlapped with the target area, and the target frame is determined according to the time when the first input is received.
Optionally, the display device further comprises:
and the recording module is used for recording and storing the amplified and displayed image of the target area.
In the embodiments of the present application, a preview image is collected and displayed, the target area to be magnified is selected through a first input on a target position in the preview image, and the image of the target area is then magnified and displayed. Any area of the preview image can thus be conveniently viewed at a larger scale with local details shown clearly, which enriches the functions of a cosmetic mirror and improves the user experience. Moreover, the magnified image of the target area does not overlap the target area in the preview image, that is, it is arranged outside the target area, which avoids occluding the target area and ensures that the user can intuitively compare the target area before and after magnification.
The display device in the embodiments of the present application may be an electronic device, or may be a component of an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal: for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile Internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and it may also be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not particularly limited in this respect.
The display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 8, and is not described here again to avoid repetition.
Optionally, as shown in fig. 10, an embodiment of the present application further provides an electronic device 1000, which includes a processor 1001 and a memory 1002, where the memory 1002 stores a program or instructions executable on the processor 1001. When executed by the processor 1001, the program or instructions implement the steps of the display method embodiments above and achieve the same technical effects; details are not repeated here to avoid repetition.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: a radio frequency unit 1101, a network module 1102, an audio output unit 1103, an input unit 1104, a sensor 1105, a display unit 1106, a user input unit 1107, an interface unit 1108, a memory 1109, a processor 1110, and the like.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1110 via a power management system, so that charging, discharging, and power consumption are managed through the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange components differently; details are not repeated here.
The sensor 1105 is used for acquiring a preview image;
a display unit 1106 for displaying a preview image;
a user input unit 1107, configured to receive a first input by a user on a target position in the preview image;
a processor 1110, configured to determine, in response to the first input, a target area according to the target position;
the display unit 1106 is further configured to display the image of the target area in an enlarged manner, where the enlarged image of the target area does not overlap with the target area.
In the embodiment of the application, the preview image is collected and displayed, the target area needing to be amplified is selected through the first input of the target position in the preview image, the image of the target area is then amplified and displayed, the image obtained after amplifying any area in the preview image can be conveniently checked, the local detail is displayed clearly, the function of a cosmetic mirror is enriched, the user experience is improved, the image of the target area amplified and displayed is not overlapped with the target area in the preview image, namely, the image of the target area amplified and displayed is arranged outside the target area, the shielding of the target area can be avoided, and the situation that a user can intuitively compare the target area before and after amplification is ensured.
Optionally, the processor 1110 is further configured to detect whether a characteristic portion exists in a preset area range centered on the target position, and to determine the target area according to the contour of the characteristic portion when the characteristic portion exists in the preset area range.
Optionally, the display unit 1106 is further configured to perform a first display process on a display area other than the display area for displaying the image of the target area in an enlarged manner and the target area in the preview interface, where the first display process includes any one of a hiding process, a blurring process, and a filling process.
Optionally, the display unit 1106 is further configured to display the image of the target area in a first window and a second window respectively at a preset magnification, where the frame displayed in the first window is fixed to the image of the target area in a target frame, the frame displayed in the second window is updated in real time, neither the first window nor the second window overlaps the target area, and the target frame is determined according to the time at which the first input is received.
Optionally, the processor 1110 is further configured to record and save the image of the target area displayed in an enlarged manner.
It should be understood that, in the embodiments of the present application, the input unit 1104 may include a graphics processing unit (GPU) 11041 and a microphone 11042; the graphics processor 11041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1106 may include a display panel 11061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. The touch panel 11071, also called a touch screen, may include a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1109 may be used to store software programs and various data. It may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system and the programs or instructions required for at least one function (such as a sound playing function or an image playing function). Further, the memory 1109 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 1109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read-only memory, a random access memory, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing display method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. A display method, comprising:
collecting and displaying a preview image;
receiving a first input of a target location in the preview image;
and responding to the first input, determining a target area according to the target position, and displaying an image of the target area in an enlarged mode, wherein the image of the target area displayed in the enlarged mode is not overlapped with the target area.
2. The display method according to claim 1, wherein the determining a target area according to the target position comprises:
detecting whether a characteristic part exists in a preset area range taking the target position as a center;
and under the condition that the characteristic part exists in the preset area range, determining the target area according to the outline of the characteristic part.
3. The display method according to claim 1, wherein after receiving the first input of the target position in the preview image, further comprising:
and performing first display processing on other display areas except the display area for enlarging and displaying the image of the target area and the target area in a preview interface, wherein the first display processing comprises any one of hiding processing, blurring processing and filling processing.
4. The display method according to claim 1, wherein the displaying the image of the target area in an enlarged manner includes:
respectively displaying the images of the target area in the first window and the second window according to a preset magnification;
the display frame in the first window is fixed to be an image of a target area in a target frame and is unchanged, the display frame in the second window is updated in real time, the first window and the second window are not overlapped with the target area, and the target frame is determined according to the time when the first input is received.
5. The display method according to claim 1, wherein after the magnifying and displaying the image of the target area, further comprising:
and recording and storing the enlarged and displayed image of the target area.
6. A display device, comprising:
the first display module is used for acquiring and displaying the preview image;
a first receiving module for receiving a first input of a target position in the preview image;
and the second display module is used for responding to the first input, determining a target area according to the target position and displaying the image of the target area in an enlarged mode, wherein the image of the target area displayed in the enlarged mode is not overlapped with the target area.
7. The display device according to claim 6, wherein the second display module comprises:
a detection unit for detecting whether a characteristic part exists in a preset area range with the target position as a center;
and the determining unit is used for determining the target area according to the contour of the characteristic part under the condition that the characteristic part exists in the preset area range.
8. The display device according to claim 6, further comprising:
and the third display module is used for performing first display processing on other display areas except the display area for displaying the image of the target area in an enlarged manner and the target area in the preview interface, wherein the first display processing comprises any one of hiding processing, blurring processing and filling processing.
9. The display device according to claim 6, wherein the second display module comprises:
the display unit is used for displaying the images of the target area in the first window and the second window according to the preset magnification;
the display frame in the first window is fixed to be an image of a target area in a target frame and is unchanged, the display frame in the second window is updated in real time, the first window and the second window are not overlapped with the target area, and the target frame is determined according to the time when the first input is received.
10. The display device according to claim 6, further comprising:
and the recording module is used for recording and storing the amplified and displayed image of the target area.
11. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the steps of the display method according to any one of claims 1-5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the display method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210206093.8A CN114554098A (en) | 2022-02-28 | 2022-02-28 | Display method, display device, electronic apparatus, and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114554098A true CN114554098A (en) | 2022-05-27 |
Family
ID=81660726
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210206093.8A Pending CN114554098A (en) | 2022-02-28 | 2022-02-28 | Display method, display device, electronic apparatus, and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114554098A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113038008A (en) * | 2021-03-08 | 2021-06-25 | 维沃移动通信有限公司 | Imaging method, imaging device, electronic equipment and storage medium |
US20220007816A1 (en) * | 2020-07-07 | 2022-01-13 | Perfect Mobile Corp. | System and method for navigating user interfaces using a hybrid touchless control mechanism |
Legal Events

Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 