CN114063845A - Display method, display device and electronic equipment

Info

Publication number: CN114063845A
Authority: CN (China)
Prior art keywords: target, display, image, displaying, user
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111407015.6A
Other languages: Chinese (zh)
Inventor: Zhang Bo (张波)
Current Assignee: Vivo Mobile Communication Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority application: CN202111407015.6A

Classifications

    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 2203/04805: Virtual magnifying lens, i.e. window or frame movable on top of displayed information to enlarge it for better reading or selection
    • G06F 2203/04806: Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

(All within G: Physics; G06: Computing; G06F: Electric digital data processing; G06F 3/048: Interaction techniques based on graphical user interfaces.)

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a display method, a display device, and an electronic device, and belongs to the field of image processing. The display method includes: displaying a target image on a display interface, where the target image is captured by a camera of the electronic device; receiving a first input from a user at a target position on the target image; and, in response to the first input, displaying an enlarged image including an object in a case where a target region of the target image includes an outline of the object, the target region being determined based on the target position.

Description

Display method, display device and electronic equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to a display method, a display device and electronic equipment.
Background
As electronic devices gain more and more functions, more and more of them offer a magnifier function on the display interface. In the related art, when a user clicks a position, the area at that position can be locally enlarged. In actual use, however, the touch operation often deviates from the intended position, so the enlarged position does not match the position the user expected to enlarge, which degrades the user experience.
Disclosure of Invention
The embodiments of the present application aim to provide a display method, a display device, and an electronic device that can solve the problems of an inaccurate enlargement position and a poor user experience.
In a first aspect, an embodiment of the present application provides a display method, where the method includes:
displaying a target image on a display interface, where the target image is captured by a camera of an electronic device;
receiving a first input from a user at a target position on the target image;
in response to the first input, displaying an enlarged image including an object in a case where a target region of the target image includes an outline of the object, the target region being determined based on the target position.
In a second aspect, an embodiment of the present application provides a display device, including:
a first display module, configured to display a target image on a display interface, where the target image is captured by a camera of the electronic device;
a first receiving module, configured to receive a first input from a user at a target position on the target image;
a second display module, configured to display, in response to the first input, an enlarged image including an object in a case where a target area of the target image includes an outline of the object, the target area being determined based on the target position.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, a target area is generated based on the target position of the user's first input, and an object whose outline lies in the target area is enlarged and displayed. The object the user wants to enlarge can thus be judged intelligently from the user's operation, which effectively avoids the situation where the actual enlargement position does not match the expected one because the user's click position was inaccurate. This provides the user with a more accurate magnification function and further improves the user experience.
Drawings
Fig. 1 is a first schematic flowchart of a display method provided in an embodiment of the present application;
Fig. 2 is a second schematic flowchart of a display method provided in an embodiment of the present application;
Fig. 3 is a first schematic interface diagram of a display method provided in an embodiment of the present application;
Fig. 4 is a second schematic interface diagram of a display method provided in an embodiment of the present application;
Fig. 5 is a third schematic interface diagram of a display method provided in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a display device provided in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 8 is a hardware schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects, not necessarily to describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are usually of one kind, and their number is not limited; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The display method, the display apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
The display method may be applied to a terminal, and may specifically be executed by hardware or software in the terminal. The execution subject of the display method may be the terminal, a control device of the terminal, or the like.
The terminal includes, but is not limited to, a portable communication device, such as a mobile phone or tablet computer, that has a display function and a photographing function. It should also be understood that in some embodiments the terminal may not be a portable communication device but, for example, a desktop computer having display and camera functions.
In the following embodiments, a terminal with display and photographing functions is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices, such as a touch panel, a physical keyboard, a mouse, and a joystick.
The embodiments of the present application provide a display method whose execution subject may be a terminal, including but not limited to a mobile terminal such as a mobile phone, a tablet, or a camera, a non-mobile terminal, or a control device of the terminal.
As shown in fig. 1, the display method includes: step 110, step 120 and step 130.
Step 110: displaying a target image on a display interface, where the target image is captured by a camera of the electronic device.
In this step, the display interface may be a camera preview interface, for example a front-camera preview interface or a rear-camera preview interface. In the case of the front-camera preview interface, the display interface may also be a display interface simulating a mirror.
It is understood that, in the case of a display interface simulating a mirror, when the electronic device is operating normally the user may click simulated-mirror software installed on the device to open the simulated mirror's display interface.
The display interface includes the target image, which is an image captured by a camera of the electronic device.
It is understood that the target image may be a dynamically changing image whose dynamic changes follow those of the actual object captured by the camera.
In actual implementation, the camera is generally a front camera disposed on the electronic device, and the target image is an image captured by that front camera.
For example, if a user clicks an application program on the mobile phone that provides a mirror-simulation function, the front camera on the phone automatically captures external image information and displays the captured target image, such as the user's selfie, on the phone screen.
Fig. 3 shows such a display interface, which includes a picture of the user's own head captured by the front camera. When the user facing the phone changes expression, the target image displayed on the screen changes correspondingly; for example, when the user blinks, the person in the target image blinks as well.
Step 120: receiving a first input from a user at a target position on the target image.
In this step, the first input is used to determine the target position.
The first input may take at least one of the following forms:
First, the first input may be a touch input, including but not limited to a click input, a slide input, a press input, and the like.
In this embodiment, receiving the first input from the user may mean receiving a touch operation by the user on the display area of the terminal's display screen.
For example, the first input may be realized by touching the target position on the target image while the simulated mirror's display interface is shown; or the first input may be set as multiple consecutive clicks on the target position of the display area within a target time interval.
Second, the first input may be represented as a physical key input.
In this embodiment, a physical key is provided on the body of the terminal, and receiving the first input from the user may mean receiving the user's operation of moving or pressing the physical key; the first input may also be a combined operation of pressing a plurality of physical keys simultaneously.
Third, the first input may be represented as a voice input.
In this embodiment, upon receiving speech, the terminal may perform semantic recognition on it and determine the target position based on the recognition result.
Of course, in other embodiments, the first input may also be expressed in other forms, including but not limited to a somatosensory gesture input, and the like, which may be determined according to actual needs, and the embodiments of the present application do not limit this.
In this step, the target position is the actual position of the user's input, such as the position the user actually clicks on the screen.
It can be understood that, in general, when a user wants to locally enlarge some content on the target image, the user can click or double-click that position to enlarge it; that position is the target position.
However, in actual execution, the position the user actually clicks may not coincide with the position the user expects to enlarge.
For example, when the user wants to enlarge a pimple on the right cheek of the person in the target image shown in fig. 3, the user may click the position of the pimple on the screen to realize the first input; in this case, the target position is the position where the pimple is located.
Alternatively, the user may click a position near the pimple on the screen, such as a position at its lower right, as shown in fig. 3; in that case, the target position is the position at the lower right of the pimple, that is, the position on the screen the user actually clicked.
Step 130: in response to the first input, displaying an enlarged image including an object in a case where a target area of the target image includes an outline of the object, the target area being determined based on the target position.
Here, the target area is an area generated with the target position as its center.
For example, an area with a fixed length and width may be generated centered on the target position, as shown by the dashed box in fig. 4; or a circular area with a fixed radius may be generated with the target position as its center; or the target position may be the center of an area of another shape.
It should be noted that the size of the area may be a system default or user-defined, for example a circular area 1 cm in diameter centered on the target position; the present application does not limit this.
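To make the geometry concrete, the construction of such a target area can be sketched in a few lines. This is purely an illustrative sketch, not part of the patent: the function name, the pixel units, and the default half-sizes are assumptions.

```python
def make_target_region(tap_x, tap_y, img_w, img_h, half_w=60, half_h=60):
    """Build a fixed-size rectangular target region centered on the clicked
    position and clamped to the image bounds (pixel sizes are assumed)."""
    left = max(0, tap_x - half_w)
    top = max(0, tap_y - half_h)
    right = min(img_w, tap_x + half_w)
    bottom = min(img_h, tap_y + half_h)
    return left, top, right, bottom
```

A circular target area would simply replace the clamped rectangle with a center-and-radius membership test.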
The outline of an object is the contour of some object in the target image, where the object can be any visible content in the target image, such as the person's hair, facial features, facial pimples, or clothes in the target image shown in fig. 3.
In actual implementation, the first input may be realized when the user clicks the target position on the target image shown in fig. 3.
In response to the first input, the terminal generates an area with a fixed length and width, that is, the target area, centered on the position the user clicked, as shown in fig. 4.
The terminal performs image recognition on the image within the target area to detect whether an object outline exists there.
If an object outline exists in the target area, for example if the terminal detects the outline of a pimple at the upper left of the position the user clicked, the terminal displays an enlarged image including the pimple.
Here, the enlarged image includes the enlarged content of the object.
In some embodiments, the enlarged image may also include enlarged content of other objects around the object's outline.
Of course, in other embodiments, the enlarged image may also include blank areas.
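The patent does not prescribe a particular recognition algorithm for the detection step above. As one hedged possibility, a classical edge-plus-contour pass (sketched here with OpenCV) could serve as the check for object outlines inside the target area; the function name, Canny thresholds, and minimum-area filter are assumptions.

```python
import cv2

def find_outlines_in_region(image_bgr, region):
    """Detect candidate object outlines inside the target region.
    Returned contours are in region-local coordinates; thresholds
    and the minimum-area filter are illustrative."""
    left, top, right, bottom = region
    roi = image_bgr[top:bottom, left:right]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Drop tiny specks that are unlikely to be a real object outline.
    return [c for c in contours if cv2.contourArea(c) > 20.0]
```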
For example, when the user uses the mobile phone as a mirror, the user clicks the position of the right eye corner of the person in the target image to realize the first input. In response to the first input, the terminal determines a target area centered on the right eye corner; after judging that the outline of the right pupil lies in the target area, it enlarges the outline of the right pupil and displays an enlarged image of the whole right eye.
It should be noted that displaying the enlarged image including the object in step 130 may mean displaying it on the whole screen, in a partial area of the screen, or in a floating or split-screen manner.
The following describes an implementation of this step, taking a floating display as an example.
In some embodiments, displaying the enlarged image including the object in step 130 includes: displaying, floating on the target image, the enlarged image including the object.
In this embodiment, in response to the first input, the terminal displays the enlarged image floating over a region near the target position.
As shown in fig. 5, while the terminal displays the full-size image of the person on the whole screen, the enlarged pimple is also displayed floating above the pimple, as in the circular area in fig. 5.
The image in the circular area is the enlarged image, which includes the enlarged content of the pimple and the enlarged skin texture around it.
In this embodiment, by simultaneously displaying the normal image containing the object's outline in the target region and the enlarged image of that outline, the user is given a before-and-after comparison, which helps improve the user experience.
The inventor found that, in the related art, after the user clicks the target position, the terminal directly enlarges and displays the image at that position. In actual execution, because the user's click is inaccurate or the recognized touch position deviates, the enlarged image is often not the one the user expected, which harms the user experience.
In the present application, after the user performs the first input at the target position, the image at that position is not simply enlarged. Instead, a target area is generated based on the target position and the object within the target area is enlarged and displayed, so that the object the user wants to enlarge is judged intelligently from the user's operation, providing a more accurate magnification function.
In the display method provided by the embodiments of the present application, a target area is generated based on the target position of the user's first input, and an object within the target area is enlarged and displayed. The object the user wants to enlarge can thus be judged intelligently from the user's operation, which effectively avoids the situation where the actual enlargement position does not match the expected one because the user's click position was inaccurate, provides the user with a more accurate magnification function, and improves the user experience.
In some embodiments, step 130 further includes: enlarging and displaying the object with the center of the object as the center point, where the outline of the object is within the enlarged image.
In this embodiment, the center point is the center of magnification during enlargement.
Taking the center of the object as the center point means taking the object's center as the magnification center and enlarging the area where the object is located.
The enlarged image may include the enlarged object and enlarged portions of other image content around the object's outline.
It should be noted that, in this embodiment, enlarging and displaying the object may mean enlarging it according to a target magnification, or displaying the enlarged image at a target size; the specific implementations are described in the embodiments below and are not repeated here.
For example, with continued reference to fig. 5, after the user clicks a position near the pimple to realize the first input, the terminal, in response, generates a target region centered on the clicked position. If the outline of the pimple lies in the target region, the terminal enlarges the image of the area where the pimple is located, with the center of the pimple as the center point, generates the enlarged image, and displays it. The enlarged image includes the whole enlarged pimple and the enlarged local skin image around it.
In the display method provided by the embodiments of the present application, the object is enlarged around its own center and the enlarged image containing the full outline of the object is displayed. This shows the user the complete view of the enlarged object, makes the enlarged picture more complete, avoids incomplete display content interfering with normal use, and further improves the user experience.
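A minimal sketch of the center-point magnification just described, assuming OpenCV conventions. The margin kept around the object's bounding box is an assumption, chosen so that the full outline stays inside the enlarged image.

```python
import cv2

def magnify_about_center(image_bgr, contour, magnification=2.5):
    """Enlarge the area where the object is located, using the center of
    the object's bounding box as the magnification center (sketch only)."""
    x, y, w, h = cv2.boundingRect(contour)
    cx, cy = x + w // 2, y + h // 2
    half = max(1, int(max(w, h) * 0.75))   # keep surrounding context visible
    img_h, img_w = image_bgr.shape[:2]
    crop = image_bgr[max(0, cy - half):min(img_h, cy + half),
                     max(0, cx - half):min(img_w, cx + half)]
    return cv2.resize(crop, None, fx=magnification, fy=magnification,
                      interpolation=cv2.INTER_LINEAR)
```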
It will be appreciated that, in actual implementation, the target area selected by the user may contain the outline of one object or the outlines of several objects.
The implementation of step 130 is described below from these two perspectives.
First, the target area contains the outline of a single object.
As shown in fig. 2, in some embodiments, step 130 includes: in a case where the target region includes the outline of a single object, enlarging and displaying the single object with the center of that object as the center point.
In this embodiment, it can be understood that after the terminal generates the target area in response to the first input and determines that an object outline exists there, it needs to further identify the outlines to determine their number.
For example, when the terminal recognizes that the target area includes only the outline of a single object, such as a single pimple, the terminal enlarges and displays the image of the area where the pimple is located, with the center of the pimple as the center point.
It should be noted that, in actual implementation, the image recognition may be implemented with a neural network model, so that the object an outline belongs to is judged from the outline itself, for example judging the outlines under consideration to be, respectively, the outline of a pimple, the outline of an eye corner, the outline of an eyebrow, and so on.
The neural network model can be trained with sample object outlines as samples and the sample objects corresponding to those outlines as labels.
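The passage above only states that a trained neural network performs this judgment; the patent does not disclose an architecture. The following is an assumed example of such an outline classifier (PyTorch), with illustrative layer sizes and labels.

```python
import torch
import torch.nn as nn

class OutlineClassifier(nn.Module):
    """Assumed example: classify a crop around a detected outline as,
    e.g., pimple / eye corner / eyebrow. Trained with sample outlines
    as inputs and their corresponding objects as labels."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 crops

    def forward(self, x):  # x: (batch, 3, 64, 64)
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    logits = OutlineClassifier()(torch.randn(1, 3, 64, 64))  # shape (1, 3)
```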
Second, the target area contains the outlines of several objects.
With continued reference to fig. 2, in some embodiments, step 130 includes:
in a case where the target area includes the outlines of a plurality of objects, displaying an object-outline selection interface that includes those outlines;
receiving a second input from the user on the outline of a target object among the outlines of the plurality of objects;
and, in response to the second input, enlarging and displaying the target object with the center of the target object as the center point.
In this embodiment, the second input is used to determine the outline of the target object.
Like the first input, the second input may be a touch input, a physical-key input, a voice input, a somatosensory gesture input, or the like; details are not repeated here.
Here, the target object is the object the user wants to enlarge and display.
The object-outline selection interface should include the outlines of all objects within the target area.
The object-outline selection interface is used to let the user select the outline of the target object to be enlarged and displayed.
It is understood that the number of target-object outlines may be one or more.
In actual execution, after the terminal determines that object outlines exist in the target area, it identifies and distinguishes them to obtain their number. The specific implementation is as described above and is not repeated here.
When the terminal determines that the number of object outlines in the target area is at least two, it displays the object-outline selection interface, which includes the outlines of all identified objects.
Of course, in other embodiments, the object-outline selection interface may further include a selection control corresponding to each object's outline.
Under the object-outline selection interface, the user can realize the second input by clicking the outline of the object to be enlarged or the selection control corresponding to that outline.
In response to the second input, the terminal determines the outline selected by the user as the outline of the target object and enlarges and displays the area where the target object is located, with the center of the target object as the center point.
For example, when the user draws eyeliner in front of the display interface, the display interface shows a picture of the user's whole face.
When the user wants to enlarge the eye area, the user clicks a position near the eyes on the target image to realize the first input. The terminal, in response, generates the target area centered on the clicked position and recognizes the object outlines within it.
The terminal recognizes several object outlines in the target area, including the eye, the eye bag, the eyebrow, and a pimple, and displays an object-outline selection interface containing those outlines.
The user clicks the outline of the eye to realize the second input.
In response to the second input, the terminal determines the eye's outline as the outline of the target object and enlarges and displays the area near the eye, with the center of the eye as the center point.
Of course, in other embodiments, when the terminal does not recognize any object outline in the target area, or when no object outline exists there, the area is directly enlarged and displayed with the target position as its center.
In the display method provided by the embodiments of the present application, the number of object outlines in the target area is judged and different displays are realized according to the result. When the outlines of several objects exist, the outline of the target object is determined through the user's second input and the target object is enlarged and displayed. This gives the method high flexibility and strong human-computer interaction, and helps further improve the user experience.
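Putting the branches of this subsection together, the control flow might look like the sketch below. All callables (`zoom_at_tap`, `magnify`, `show_selection_ui`) are assumed names standing in for the display and interface steps described above.

```python
def dispatch_on_outlines(outlines, show_selection_ui, magnify, zoom_at_tap):
    """Branching described above: no outline -> plain zoom about the clicked
    position; one outline -> magnify it directly; several outlines ->
    let the user pick one via a second input (all callables are assumed)."""
    if not outlines:
        return zoom_at_tap()
    if len(outlines) == 1:
        return magnify(outlines[0])
    chosen = show_selection_ui(outlines)   # second input: user selects one
    return magnify(chosen)
```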
The inventor also found during research and development that, in the related art, the magnifier of a simulated mirror has a fixed size and a fixed magnification. During use, the user not only has to manually adjust the magnifier's size based on the selected area, but also cannot adapt the magnification to the actual situation, which affects the user experience.
In the present application, in actual implementation, the user may set the magnification mode in advance, for example choosing whether to magnify based on a target magnification or based on a target size.
According to some embodiments of the application, the method further comprises:
receiving a third input from the user;
in response to the third input, entering a target magnification mode.
In this embodiment, the third input is used to determine the target magnification mode, where the magnification modes include a mode based on a target magnification and a mode based on a target size.
Like the first input, the third input may be a touch input, a physical-key input, a voice input, a somatosensory gesture input, or the like; details are not repeated here.
When the user selects magnification based on the target-magnification mode, the terminal automatically enlarges the object according to the target magnification.
When the user selects magnification based on the target-size mode, the terminal automatically enlarges the object to the target size.
The following describes the magnification modes of the embodiments of the present application from two implementation perspectives.
First, enlarging according to a target magnification.
With continued reference to fig. 2, in some embodiments, displaying the enlarged image including the object in step 130 includes: enlarging and displaying the object according to a target magnification, where the target magnification is preset.
In this embodiment, the target magnification is the magnification the user desires, which may be 2 times, 3 times, or some other value.
Here, the magnification is the ratio between the size of the enlarged object and the size of the actual object.
The target magnification may be a system default or user-defined.
Based on the target magnification and the actual size of the object, the size of the enlarged image can then be determined.
The size of the enlarged image is the size of the magnifier: for example, when the enlarged image is displayed full-screen, it is the size of the whole display screen; when the enlarged image is displayed floating as in fig. 5, it is the size of the circular floating region in fig. 5.
In general, the size of the enlarged image should be greater than or equal to the size of the object after enlargement by the target magnification, so that the full appearance of the enlarged object can be shown.
It can be understood that, in this embodiment, the magnification is fixed, though it can be adjusted to the user's needs, while the size of the enlarged image need not be fixed and can vary with the object.
For example, if the user sets the target magnification to 2.5 times, then when performing step 130 the terminal can compute, based on the actual size a of the pimple, its enlarged size 2.5a, and display the pimple enlarged 2.5 times.
It should be noted that, during display, some margin may be reserved in the enlarged image to show the pimple's full appearance; for example, the size of the enlarged image may be determined as (4/3) × 2.5a based on the enlarged size 2.5a.
In this embodiment, the terminal can adjust the enlarged image to the most suitable size for display based on the target magnification the user wants.
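In this target-magnification mode, the arithmetic of the 2.5x example reduces to a single line. The 4/3 margin factor comes from the example above; the function name and units are assumptions.

```python
def magnifier_window_size(object_size, target_magnification, margin=4/3):
    """Fixed magnification, variable window: e.g. a pimple of size a at
    2.5x needs a window of about (4/3) * 2.5 * a."""
    return margin * target_magnification * object_size
```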
Second, enlarging according to a target size.
With continued reference to fig. 2, in some embodiments, displaying the enlarged image including the object in step 130 includes: displaying the enlarged image at a target size, where the magnification is determined based on the target size and the size of the object.
In this embodiment, the target size is the size at which the user wants the enlarged image displayed, that is, the size of the magnifier the user wants.
The target size may be a system default or user-defined, such as 1/3 or 1/5 of the screen display area.
Based on the target size and the size of the object, the magnification of the object can be determined.
It will be appreciated that, in this embodiment, the size of the enlarged image is fixed, though it can be set to the user's needs, while the magnification need not be fixed and can vary with the object.
For example, if the user sets the target size to b, the terminal computes the ratio of b to the actual size a of the pimple and determines that the magnification should not exceed b/a.
In practical implementation, the magnification can be set to a value slightly smaller than b/a, so as to reserve some space in the enlarged image to show the pimple's full appearance.
In this embodiment, the terminal can automatically adjust the magnification of the object's outline to the optimum based on the target size at which the user wants the enlarged image displayed.
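The target-size mode inverts that calculation: the window size b is fixed and the magnification adapts to the object size a, capped slightly below b/a so the whole object fits. The fill ratio is an assumption.

```python
def magnification_for_window(object_size, target_size, fill_ratio=0.9):
    """Fixed window, variable magnification: returns a value slightly
    below the exact ratio b/a so some margin remains around the object."""
    return fill_ratio * (target_size / object_size)
```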
In the display method provided by the embodiments of the present application, by offering several magnification modes, such as magnification based on a target magnification and magnification based on a target size, the user can select the most suitable mode for actual needs, and the terminal can automatically adjust the magnification, or the size of the enlarged image, to the optimum based on the user's selection. The method is highly flexible and widely applicable, and markedly improves the user experience.
In the display method provided by the embodiments of the present application, the execution subject may be a display device, or a control module in the display device for executing the display method. In the embodiments of the present application, a display device executing the display method for a simulated mirror is taken as an example to describe the display device provided herein.
The embodiment of the application also provides a display device.
As shown in fig. 6, the display device includes: a first display module 610, a first receiving module 620, and a second display module 630.
The first display module 610 is configured to display a target image on a display interface, where the target image is acquired by a camera of the electronic device;
a first receiving module 620, configured to receive a first input from a user at a target position on the target image;
a second display module 630, configured to display, in response to the first input, an enlarged image including the object in a case where a target area of the target image includes an outline of the object, the target area being determined based on the target position.
In the display device provided by the embodiments of the present application, a target area is generated based on the target position of the user's first input, and an object within the target area is enlarged and displayed. The object the user wants to enlarge can thus be judged intelligently from the user's operation, which effectively avoids the situation where the actual enlargement position does not match the expected one because the user's click position was inaccurate, provides the user with a more accurate magnification function, and further improves the user experience.
In some embodiments, the second display module is further configured to:
magnify and display the object with the center of the object as a central point, where the outline of the object is within the magnified image.
In some embodiments, in a case where the target region includes the outline of a single object, the second display module 630 is further configured to display the single object in an enlarged manner with the center of the single object as the center point.
In some embodiments, the apparatus further includes a second receiving module and a third display module. In a case where the target area includes the outlines of a plurality of objects,
the third display module is configured to display an object contour selection interface, where the object contour selection interface includes the contours of the plurality of objects;
the second receiving module is configured to receive a second input from the user on the contour of the target object among the contours of the plurality of objects;
and the second display module 630 is further configured to, in response to the second input, magnify and display the target object with the center of the target object as the center point.
In some embodiments, the second display module 630 is further configured to:
display, floating on the target image, a magnified image including the object.
In some embodiments, the second display module 630 is further configured to: magnify and display the object according to a target magnification, where the target magnification is preset.
In some embodiments, the second display module 630 is further configured to: display the magnified image in a target size, where the magnification is determined based on the target size and the size of the object.
In the display device provided by the embodiments of the present application, by offering several magnification modes, such as magnification based on a target magnification and magnification based on a target size, the user can select the most suitable mode for actual needs, and the terminal can automatically adjust the magnification, or the size of the enlarged image, to the optimum based on the user's selection. The device is highly flexible and widely applicable, and markedly improves the user experience.
The display device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an IOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 5, and is not described here again to avoid repetition.
Optionally, as shown in fig. 7, an electronic device 700 is further provided in this embodiment of the present application, and includes a processor 701, a memory 702, and a program or an instruction stored in the memory 702 and executable on the processor 701, where the program or the instruction is executed by the processor 701 to implement each process of the display method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 810.
Those skilled in the art will appreciate that the electronic device 800 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 810 via a power management system, so as to manage charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, and this is not described further here.
The display unit 806 is configured to display a target image on the display interface, where the target image is acquired by a camera of the electronic device;
a user input unit 807, configured to receive a first input from a user at a target position on the target image;
the display unit 806 is further configured to display, in response to the first input, an enlarged image including the object in a case where a target region of the target image includes an outline of the object, the target region being determined based on the target position.
In the electronic device provided by the embodiments of the present application, a target area is generated based on the target position of the user's first input, and an object within the target area is enlarged and displayed. The object the user wants to enlarge can thus be judged intelligently from the user's operation, which effectively avoids the situation where the actual enlargement position does not match the expected one because the user's click position was inaccurate, provides the user with a more accurate magnification function, and further improves the user experience.
Optionally, the display unit 806 is further configured to: magnify and display the object with the center of the object as the center point, where the outline of the object is within the magnified image.
Optionally, the display unit 806 is further configured to: in a case where the target region includes the outline of a single object, magnify and display the single object with the center of the single object as the center point.
Alternatively, in a case where the target region includes the contours of a plurality of objects,
the display unit 806 is further configured to display an object contour selection interface, where the object contour selection interface includes the contours of the plurality of objects;
the user input unit 807 is further configured to receive a second input from the user on the contour of the target object among the contours of the plurality of objects;
the display unit 806 is further configured to: in response to the second input, magnify and display the target object with the center of the target object as a central point.
Optionally, the display unit 806 is further configured to: display, floating on the target image, a magnified image including the object.
Optionally, the display unit 806 is further configured to: magnify and display the object according to a target magnification, where the target magnification is preset.
Optionally, the display unit 806 is further configured to: display the magnified image in a target size, where the magnification is determined based on the target size and the size of the object.
In the electronic device provided by the embodiments of the present application, by offering several magnification modes, such as magnification based on a target magnification and magnification based on a target size, the user can select the most suitable mode for actual needs, and the terminal can automatically adjust the magnification, or the size of the enlarged image, to the optimum based on the user's selection. The device is highly flexible and widely applicable, and markedly improves the user experience.
It should be understood that, in the embodiments of the present application, the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042; the graphics processing unit 8041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The display unit 806 may include a display panel 8061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes a touch panel 8071, also referred to as a touch screen, and other input devices 8072. The touch panel 8071 may include two parts: a touch detection device and a touch controller. The other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 809 may be used to store software programs and various data, including but not limited to application programs and an operating system. The processor 810 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It can be appreciated that the modem processor may not be integrated into the processor 810.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the display method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing display method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A display method, comprising:
displaying a target image on a display interface, wherein the target image is captured by a camera of an electronic device;
receiving a first input from a user at a target position on the target image;
in response to the first input, displaying an enlarged image including an object in a case where a target region of the target image includes an outline of the object, the target region being determined based on the target position.
2. The display method according to claim 1, wherein the displaying an enlarged image including the object in a case where the target region of the target image includes an outline of the object comprises: magnifying and displaying the object by taking the center of the object as a central point, wherein the outline of the object is within the enlarged image.
3. The display method according to claim 2, wherein the enlarging and displaying the object with the center of the object as a center point comprises:
in a case where the target region includes the outline of a single object, enlarging and displaying the single object with the center of the single object as the center point;
in a case where the target region includes outlines of a plurality of objects, displaying an object outline selection interface including the outlines of the plurality of objects;
receiving a second input from the user on the outline of a target object among the outlines of the plurality of objects; and
in response to the second input, enlarging and displaying the target object with the center of the target object as the center point.
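Again for illustration only: a sketch of the branches of claims 2 and 3, where a single outline is enlarged directly about its center and multiple outlines go through a selection step. The `choose` callback stands in for the user's second input; all names are hypothetical.

    import cv2

    def center_of(outline):
        # Center of an outline via its bounding rectangle.
        x, y, w, h = cv2.boundingRect(outline)
        return x + w // 2, y + h // 2

    def enlarge_about_center(image, cx, cy, half, scale=2.0):
        # Enlarge a window of the image centered on (cx, cy); near the
        # image border the crop is clipped, so the center is approximate.
        img_h, img_w = image.shape[:2]
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        x1, y1 = min(cx + half, img_w), min(cy + half, img_h)
        return cv2.resize(image[y0:y1, x0:x1], None, fx=scale, fy=scale)

    def zoom_on_objects(image, outlines, choose):
        if len(outlines) == 1:
            target = outlines[0]       # single object: no selection needed
        else:
            target = choose(outlines)  # "second input": user picks an outline
        cx, cy = center_of(target)
        x, y, w, h = cv2.boundingRect(target)
        half = max(w, h) // 2 + 10     # keep the whole outline in the crop
        return enlarge_about_center(image, cx, cy, half)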
4. The display method according to claim 1, wherein the displaying the enlarged image including the object comprises:
displaying the enlarged image including the object floating over the target image.
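A sketch of the floating display of claim 4, assuming NumPy-style image arrays: the enlarged image is pasted over a copy of the target image rather than replacing it. The overlay position and the clipping behavior are assumptions.

    def float_over(base, enlarged, top=20, left=20):
        """Overlay the enlarged image onto a copy of the base image,
        clipping it if it would run past the base image's edges."""
        out = base.copy()
        h = min(enlarged.shape[0], out.shape[0] - top)
        w = min(enlarged.shape[1], out.shape[1] - left)
        out[top:top + h, left:left + w] = enlarged[:h, :w]
        return out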
5. The display method according to any one of claims 1 to 4, wherein the displaying the enlarged image including the object comprises:
enlarging and displaying the object at a target magnification, wherein the target magnification is preset; or
displaying the enlarged image at a target size, wherein the magnification is determined based on the target size and a size of the object.
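The two rules of claim 5 amount to simple arithmetic: either a preconfigured ratio is used directly, or the ratio is derived from the desired display size. A hypothetical helper:

    def magnification_for(object_size, target_size=None, preset=2.0):
        """First branch: fall back to the preset ratio.
        Second branch: ratio = target display size / object size."""
        if target_size is None:
            return preset
        return target_size / object_size

    # e.g. a 300 px display slot for a 75 px-wide object gives 4.0x
    assert magnification_for(75, target_size=300) == 4.0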
6. A display device, comprising:
a first display module configured to display a target image on a display interface, wherein the target image is captured by a camera of an electronic device;
a first receiving module configured to receive a first input from a user at a target position of the target image; and
a second display module configured to display, in response to the first input, an enlarged image including an object in a case where a target region of the target image includes an outline of the object, wherein the target region is determined based on the target position.
7. The display device according to claim 6, wherein the second display module is further configured to:
enlarge and display the object with the center of the object as a center point, wherein the outline of the object is within the enlarged image.
8. The display device according to claim 7, further comprising a second receiving module and a third display module, wherein:
in a case where the target region includes the outline of a single object, the second display module is further configured to enlarge and display the single object with the center of the single object as the center point;
in a case where the target region includes outlines of a plurality of objects, the third display module is configured to display an object outline selection interface including the outlines of the plurality of objects;
the second receiving module is configured to receive a second input from the user on the outline of a target object among the outlines of the plurality of objects; and
the second display module is further configured to, in response to the second input, enlarge and display the target object with the center of the target object as the center point.
9. The display device according to claim 6, wherein the second display module is further configured to:
display the enlarged image including the object floating over the target image.
10. The display device according to any one of claims 6 to 9, wherein the second display module is further configured to:
enlarge and display the object at a target magnification, wherein the target magnification is preset; or
display the enlarged image at a target size, wherein the magnification is determined based on the target size and a size of the object.
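For illustration only: one possible decomposition of the claimed display device into the modules named in claims 6 to 8, sketched as Python classes. The method names and the division of labor are assumptions, not the application's implementation.

    class FirstDisplayModule:
        """Shows the camera image on the display interface (claim 6)."""
        def show_target_image(self, image): ...

    class FirstReceivingModule:
        """Captures the user's first input at the target position (claim 6)."""
        def on_first_input(self, x, y): ...

    class SecondDisplayModule:
        """Shows the enlarged image when the target region contains an
        outline, centered on the object (claims 6, 7, 9 and 10)."""
        def show_enlarged(self, image, region): ...

    class SecondReceivingModule:
        """Receives the second input selecting one outline (claim 8)."""
        def on_second_input(self, outline): ...

    class ThirdDisplayModule:
        """Shows the outline selection interface when the target region
        contains several outlines (claim 8)."""
        def show_outline_selector(self, outlines): ...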
11. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the display method according to any one of claims 1 to 5.
12. A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the display method according to any one of claims 1 to 5.
CN202111407015.6A 2021-11-24 2021-11-24 Display method, display device and electronic equipment Pending CN114063845A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111407015.6A CN114063845A (en) 2021-11-24 2021-11-24 Display method, display device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111407015.6A CN114063845A (en) 2021-11-24 2021-11-24 Display method, display device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114063845A (en) 2022-02-18

Family

ID=80275853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111407015.6A Pending CN114063845A (en) 2021-11-24 2021-11-24 Display method, display device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114063845A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114500855A (en) * 2022-02-28 2022-05-13 Vivo Mobile Communication Co Ltd Display method, display device, electronic apparatus, and readable storage medium
CN114546576A (en) * 2022-02-28 2022-05-27 Vivo Mobile Communication Co Ltd Display method, display device, electronic apparatus, and readable storage medium

Similar Documents

Publication Title
CN109087239B (en) Face image processing method and device and storage medium
CN114063845A (en) Display method, display device and electronic equipment
CN114546212B (en) Method, device and equipment for adjusting interface display state and storage medium
CN112162685B (en) Attribute adjusting method and device and electronic equipment
CN112099684A (en) Search display method and device and electronic equipment
JP2024501558A (en) Display control methods, devices, electronic devices and media
CN113655929A (en) Interface display adaptation processing method and device and electronic equipment
CN113570609A (en) Image display method and device and electronic equipment
CN112905134A (en) Method and device for refreshing display and electronic equipment
WO2022068725A1 (en) Navigation gesture setting method and apparatus, and electronic device
CN115357158A (en) Message processing method and device, electronic equipment and storage medium
CN111857507A (en) Desktop image processing method and device and electronic equipment
WO2023030238A1 (en) Secure input method and apparatus
CN112765946B (en) Chart display method and device and electronic equipment
CN112286430B (en) Image processing method, apparatus, device and medium
CN112162689B (en) Input method and device and electronic equipment
CN114879872A (en) Display method, display device, electronic equipment and storage medium
CN114242023A (en) Display screen brightness adjusting method, display screen brightness adjusting device and electronic equipment
CN113986428A (en) Picture correction method and device and electronic equipment
CN109254712B (en) Information processing method and electronic equipment
CN113296661A (en) Image processing method and device, electronic equipment and readable storage medium
CN113157184A (en) Content display method and device, electronic equipment and readable storage medium
CN104750803B (en) Search method and device for an intelligent terminal
CN113076010B (en) Input method, input device, electronic equipment and medium
CN114143454B (en) Shooting method, shooting device, electronic equipment and readable storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination