CN114115617A - Display method applied to electronic equipment and electronic equipment


Info

Publication number
CN114115617A
Authority
CN
China
Prior art keywords: image, user, makeup, electronic device, area
Prior art date
Legal status
Granted
Application number
CN202010880184.0A
Other languages
Chinese (zh)
Other versions
CN114115617B (en)
Inventor
高凌云
罗红磊
刘海波
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010880184.0A
Priority to PCT/CN2021/108283 (WO2022042163A1)
Publication of CN114115617A
Application granted
Publication of CN114115617B
Status: Active

Classifications

    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0486: Drag-and-drop
    • G06F 3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gestures or text
    • G06Q 30/06: Buying, selling or leasing transactions
    • G06T 11/80: Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard

Abstract

The embodiment of the application discloses a display method applied to an electronic device, and the electronic device itself, relating to the field of terminal technologies. The method includes the following steps: a first area of a display screen of the electronic device displays a user face image captured by a camera; a second area of the display screen displays a user face simulated makeup image generated from the user face image; a first input is received, and in response to the first input, an enlarged image of at least part of the user face image is displayed in the first area and an enlarged image of at least part of the user face simulated makeup image is displayed in the second area. The user can thus apply makeup with reference to the simulated makeup image, and both images can be displayed enlarged, making them easier to inspect and helping the user apply a refined makeup look.

Description

Display method applied to electronic equipment and electronic equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a display method applied to an electronic device and an electronic device.
Background
As mobile phones gain functionality, they are used in more and more ways. For example, a mobile phone can serve as a mirror. As another example, a mobile phone can serve as a makeup kit, displaying the user's made-up face in real time and guiding the user through makeup application. When a user uses a mobile phone to assist with makeup, improving the user experience, providing all-around auxiliary guidance, and meeting the user's diverse needs are the directions this field works toward.
Disclosure of Invention
The application provides a display method applied to an electronic device, and the electronic device, which can assist a user with makeup in an all-around manner and help the user apply a refined makeup look.
To achieve the above purpose, the following technical solutions are adopted:
in a first aspect, the present application provides a display method applied to an electronic device, where the method may include: a first area of a display screen of the electronic equipment displays a user face image acquired by a camera; a second area of the display screen of the electronic equipment displays a user face simulation makeup image generated according to the user face image; receiving a first input; in response to a first input, an enlarged image of at least a portion of the user's facial image is displayed in the first region.
In this approach, only the user's facial image may be enlarged to facilitate the user's viewing of facial details. The user face image and the user face simulation makeup image can be simultaneously amplified, so that the user can conveniently check the details of the face and the details of the simulation makeup image, and the user can conveniently compare the user face image with the user face simulation makeup image.
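As a concrete illustration only (the claims do not prescribe an implementation), the following Kotlin sketch shows how such a two-region display with enlargement could be wired up on Android; the two ImageView regions and the applyMakeup() helper are assumptions introduced here, not part of the patent.

```kotlin
import android.graphics.Bitmap
import android.graphics.Matrix
import android.widget.ImageView

// Minimal sketch: the "first area" shows the camera face image, the
// "second area" shows the simulated makeup image, and one input
// magnifies both.
class DualRegionDisplay(
    private val faceRegion: ImageView,      // "first area"
    private val makeupRegion: ImageView     // "second area"
) {
    init {
        faceRegion.scaleType = ImageView.ScaleType.MATRIX
        makeupRegion.scaleType = ImageView.ScaleType.MATRIX
    }

    // Called per camera frame: show the raw face and its simulated makeup.
    fun onFrame(faceImage: Bitmap, applyMakeup: (Bitmap) -> Bitmap) {
        faceRegion.setImageBitmap(faceImage)
        makeupRegion.setImageBitmap(applyMakeup(faceImage))
    }

    // "First input": magnify both regions about the point (px, py).
    fun onZoomInput(scale: Float, px: Float, py: Float) {
        val m = Matrix().apply { setScale(scale, scale, px, py) }
        faceRegion.imageMatrix = m
        makeupRegion.imageMatrix = m   // same matrix gives a synchronized effect
    }
}
```

Applying the same transformation matrix to both views is one simple way to obtain the synchronized enlargement described in the designs that follow.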
With reference to the first aspect, in one possible design, receiving the first input includes: receiving a first input acting on a first area; or receiving a first input acting on the second area.
With reference to the first aspect, in one possible design, the method further includes: in response to the first input, displaying an enlarged image of at least part of the user face simulated makeup image in the second area. In this method, the user face image and the user face simulated makeup image are enlarged simultaneously in response to the first input. In one possible design, the enlargement of the user face image displayed in the first area and the enlargement of the user face simulated makeup image displayed in the second area are synchronized.
With reference to the first aspect, in one possible design, displaying an enlarged image of at least a part of the user's face image in the first area includes: displaying an enlarged image of at least a part of the user face image in the first area with a center point of the user face image as a center; displaying a magnified image of at least a portion of the user's face simulated cosmetic image in the second area includes: and displaying the enlarged image of at least part of the user face simulation makeup image in the second area by taking the central point of the user face simulation makeup image as the center.
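A minimal sketch of this center-point enlargement, assuming it is realized as a center crop followed by upscaling (one of several plausible realizations, not the patented algorithm):

```kotlin
import android.graphics.Bitmap

// Crop a window around the image center and scale it back to the
// original size, so the center point stays fixed while zooming.
fun centerZoom(src: Bitmap, zoom: Float): Bitmap {
    require(zoom >= 1f)
    val w = (src.width / zoom).toInt()
    val h = (src.height / zoom).toInt()
    val x = (src.width - w) / 2
    val y = (src.height - h) / 2
    val crop = Bitmap.createBitmap(src, x, y, w, h)
    return Bitmap.createScaledBitmap(crop, src.width, src.height, true)
}
```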
With reference to the first aspect, in one possible design, if the first input is applied to a first position of a display screen of the electronic device, displaying a first image in a first area, and displaying a simulated cosmetic image of the first image in a second area; the first image is an enlarged image of a part of the user's face image; if the first input acts on a second position of the display screen of the electronic equipment, displaying a second image in the first area, and displaying a simulated makeup image of the second image in the second area; the second image is an enlarged image of a part of the user's face image; wherein the first position is different from the second position, and the first image is different from the second image. That is, the partial user face image and the partial user face simulated cosmetic image are determined based on a position at which the first input is applied to the display screen of the electronic device. And determining an image area which needs to be enlarged and displayed by the user according to the first input, and enlarging and displaying the image by taking the area as a center. In one possible design, as the user's face moves, the center of the partial user's face image and the partial user's face simulated cosmetic image do not move.
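The position-dependent variant can be sketched the same way: the input position selects the crop window, so different touch positions yield the different "first image" and "second image" above. Passing the same (tx, ty) to both the face image and the simulated makeup image keeps the two enlarged parts corresponding, as the next design notes. The helper below is illustrative only.

```kotlin
import android.graphics.Bitmap

// Enlarge the region around the touch position (tx, ty) in image pixels.
fun zoomAt(src: Bitmap, zoom: Float, tx: Int, ty: Int): Bitmap {
    require(zoom >= 1f)
    val w = (src.width / zoom).toInt()
    val h = (src.height / zoom).toInt()
    // Clamp so the crop window stays inside the bitmap.
    val x = (tx - w / 2).coerceIn(0, src.width - w)
    val y = (ty - h / 2).coerceIn(0, src.height - h)
    val crop = Bitmap.createBitmap(src, x, y, w, h)
    return Bitmap.createScaledBitmap(crop, src.width, src.height, true)
}
```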
With reference to the first aspect, in one possible design, the partial user face image corresponds to a partial user face simulated cosmetic image. That is, the enlarged user face image and the enlarged user face simulated cosmetic image are the same part of the user face. Therefore, the user can compare the user face image with the user face simulation makeup image more conveniently.
With reference to the first aspect, in one possible design, the user face simulated makeup image carries first indication information, the first indication information indicating the part of the face to be made up and the shape of the makeup; the first indication information is enlarged as the user face simulated makeup image is enlarged. For example, the first indication information is a dashed box. This provides more comprehensive makeup guidance, so the user can conveniently apply makeup by following the indication.
With reference to the first aspect, in one possible design, if an image of an object occluding part of the user's face is superimposed on the user face image, the occluded part of the user face simulated makeup image does not display the simulated makeup. This can bring an immersive experience to the user.
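A hedged compositing sketch of this behavior, assuming an occlusion mask is available from some segmentation or depth source (which this sketch does not implement): the makeup layer is erased wherever the mask marks an occluding object, so the raw camera pixels show through there.

```kotlin
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.PorterDuff
import android.graphics.PorterDuffXfermode

// occlusionMask: opaque where an object covers the face, transparent elsewhere.
fun composite(face: Bitmap, makeupLayer: Bitmap, occlusionMask: Bitmap): Bitmap {
    val out = face.copy(Bitmap.Config.ARGB_8888, true)
    // DST_OUT keeps makeup only where the mask is transparent,
    // i.e. erases it under the occluding object.
    val maskedMakeup = makeupLayer.copy(Bitmap.Config.ARGB_8888, true)
    val eraser = Paint().apply { xfermode = PorterDuffXfermode(PorterDuff.Mode.DST_OUT) }
    Canvas(maskedMakeup).drawBitmap(occlusionMask, 0f, 0f, eraser)
    Canvas(out).drawBitmap(maskedMakeup, 0f, 0f, null)
    return out
}
```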
With reference to the first aspect, in one possible design, displaying, in the first region of the display screen of the electronic device, the user face image captured by the camera includes: displaying, in the first area, a first still image obtained from the user face image captured by the camera; and displaying, in the second area of the display screen, the user face simulated makeup image generated from the user face image includes: displaying, in the second area, a second still image formed by simulating makeup on the user face image. In this method, the electronic device can enlarge the frozen user face image and the frozen user face simulated makeup image, so the user can examine them carefully.
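A small illustrative helper for this frozen display, under the assumption that the live stream simply stops being pushed to the views; liveFrame and applyMakeup are hypothetical names carried over from the earlier sketch.

```kotlin
import android.graphics.Bitmap
import android.widget.ImageView

// Freeze the live preview into the "first still image" and the
// "second still image" of this design.
fun freezeDisplay(
    faceRegion: ImageView,
    makeupRegion: ImageView,
    liveFrame: Bitmap,
    applyMakeup: (Bitmap) -> Bitmap
) {
    val firstStill = liveFrame.copy(Bitmap.Config.ARGB_8888, false)
    val secondStill = applyMakeup(firstStill)
    faceRegion.setImageBitmap(firstStill)    // replaces the live stream
    makeupRegion.setImageBitmap(secondStill)
}
```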
With reference to the first aspect, in one possible design, a third area of the display screen of the electronic device displays the first information; the electronic equipment receives input operation of a user on first information; and changing the simulated makeup image of the face of the user according to the first information. That is, the electronic device may provide a plurality of sets of makeup parameters, and the user may select different makeup parameters to form different user face simulation makeup images.
In one possible design, the first information is generated from features of an image of a face of the user. In one possible design, the first information is generated from a picture. In one possible design, the first information is generated from a makeup of an image of the face of the user. In one possible design, receiving modification operation of a user on a face simulation makeup image of the user; and generating first information according to the modified user face simulation makeup image.
In this method, the makeup parameters may be preset by the electronic device, generated according to the user's own features, or modified by the user.
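One way the "first information" could be modeled, purely as an illustration (the parameter fields and the rerender callback are invented here, not taken from the patent):

```kotlin
// Illustrative model of selectable makeup parameter sets.
data class MakeupParams(val lipColor: Int, val blushStrength: Float, val eyeshadowColor: Int)

class MakeupPalette(
    private val presets: MutableList<MakeupParams>,
    private val rerender: (MakeupParams) -> Unit   // regenerates the simulated image
) {
    var current: MakeupParams = presets.first()
        private set

    // Input operation on the first information: pick a preset.
    fun select(index: Int) {
        current = presets[index]
        rerender(current)
    }

    // A user modification of the simulated image yields new first information.
    fun saveModified(params: MakeupParams) {
        presets.add(params)
        current = params
        rerender(current)
    }
}
```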
In a second aspect, the present application provides a display method applied to an electronic device, which may include: starting a first application; displaying, in a first area of a display screen of the electronic device, a user face image captured by the electronic device; displaying, in a second area of the display screen, a user face simulated makeup image generated from the user face image; receiving a first operation; stopping displaying the user face image in response to the first operation; and displaying a first object in the first area, the first object being a still image or a short video obtained from the user face image. In this method, the dynamically displayed user face image can be frozen into a still picture or a short video, which makes it convenient for the user to examine the makeup on the face.
With reference to the second aspect, in one possible design, displaying of the user face simulated makeup image is stopped in response to the first operation, and a second object is displayed in the second area, the second object being a still image or a short video obtained from the user face simulated makeup image. In this method, the dynamically displayed simulated makeup image can be frozen into a still picture or a short video, which makes it convenient for the user to check the makeup effect.
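A sketch of how the "first object"/"second object" could be produced, assuming a small ring buffer of recent frames; the buffer length is an arbitrary choice.

```kotlin
import android.graphics.Bitmap
import java.util.ArrayDeque

// Keep the most recent frames so the frozen display can show either a
// still image or a short replayable clip.
class ShortClipBuffer(private val maxFrames: Int = 90) {   // ~3 s at 30 fps
    private val frames = ArrayDeque<Bitmap>()

    fun push(frame: Bitmap) {
        if (frames.size == maxFrames) frames.removeFirst()
        frames.addLast(frame)
    }

    fun stillImage(): Bitmap? = frames.peekLast()   // "static image"
    fun shortVideo(): List<Bitmap> = frames.toList() // "short video"
}
```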
In a third aspect, the present application provides an electronic device comprising: display screen, camera, input device and processor. The camera is used for collecting a face image of a user; the first area of the display screen is used for displaying a user face image acquired by the camera; the second area of the display screen is used for displaying the user face simulation makeup image generated according to the user face image; the input device is used for receiving a first input; the processor is configured to control a first area of the display screen to display an enlarged image of at least a portion of the user's facial image in response to a first input.
With reference to the third aspect, in one possible design, the input device receiving the first input includes: the input device receives a first input acting on the first area; or the input device receives a first input acting on the second area.
In one possible design, in combination with the third aspect, the processor is further configured to control the second area of the display screen to display a magnified image of at least a portion of the simulated cosmetic image of the user's face in response to the first input.
With reference to the third aspect, in one possible design, the processor is specifically configured to: controlling a first area of a display screen to display an enlarged image of at least a part of the user face image with a center point of the user face image as a center; and controlling a second area of the display screen to display an enlarged image of at least part of the user face simulation makeup image by taking the central point of the user face simulation makeup image as a center.
With reference to the third aspect, in one possible design, the processor is further configured to control the first area of the display screen to display a first image if it is determined that the first input is applied to a first position of the display screen; and to control the first area to display a second image if it is determined that the first input is applied to a second position of the display screen of the electronic device; wherein the first position is different from the second position, and the first image is different from the second image.
In one possible design, the partial user face image corresponds to a partial user face simulated cosmetic image.
With reference to the third aspect, in one possible design, the user face simulated makeup image carries first indication information, the first indication information indicating the part of the face to be made up and the shape of the makeup; the first indication information is enlarged as the user face simulated makeup image is enlarged.
With reference to the third aspect, in one possible design, the processor is further configured to: acquiring a first static image according to a user face image acquired by a camera; simulating the facial image of the user with makeup to form a second static image; the display screen is also used for: displaying a first still image in a first area; displaying the second still image in a second area.
With reference to the third aspect, in one possible design, a third area of the display screen is used for displaying the first information; the input device is also used for receiving the input operation of the user on the first information; the processor is further configured to change the user's face simulation cosmetic image according to the first information.
With reference to the third aspect, in one possible design, the processor is further configured to generate the first information according to a feature of a face image of the user.
With reference to the third aspect, in one possible design, the input device is further configured to receive a modification operation of the user on the user face simulation makeup image; the processor is further configured to generate first information from the modified user face simulation cosmetic image.
In a fourth aspect, the present application provides a method for displaying a graphical user interface, the method comprising: the electronic device displays a first Graphical User Interface (GUI); the first region of the first GUI includes a user's facial image captured by a camera; the second region of the first GUI includes a user face simulation cosmetic image generated from the user face image; in response to receiving the first input, the electronic device displays a second GUI; the first region of the second GUI comprises a magnified image of at least a portion of the user's facial image; the first area of the second GUI and the first area of the first GUI are the same display area on the display screen.
With reference to the fourth aspect, in one possible design, the second area of the second GUI includes a magnified image of at least a portion of the user's face simulating a cosmetic image; and the second area of the second GUI and the second area of the first GUI are the same display area on the display screen.
With reference to the fourth aspect, in one possible design, a center point of the user face image coincides with a center point of a part of the user face image; the center point of the user face simulation makeup image coincides with the center point of part of the user face simulation makeup image.
With reference to the fourth aspect, in one possible design, if the first input is applied to a first position of a display screen of the electronic device, the first area of the second GUI displays a first image, and the second area of the second GUI displays a simulated cosmetic image of the first image; the first image is an enlarged image of a portion of the user's facial image; if the first input acts on a second position of the display screen of the electronic equipment, displaying a second image in a first area of a second GUI, and displaying a simulated makeup image of the second image in a second area of the second GUI; the second image is an enlarged image of a part of the user's face image; wherein the first position is different from the second position, and the first image is different from the second image.
In combination with the fourth aspect, in one possible design, the partial user face image corresponds to a partial user face simulated cosmetic image.
With reference to the fourth aspect, in one possible design, the second area of the first GUI further includes first indication information for indicating the part of the face to be made up and the shape of the makeup; the second area of the second GUI also includes the first indication information, displayed enlarged as the user face simulated makeup image is enlarged.
With reference to the fourth aspect, in one possible design, if the first area of the first GUI includes an image of an object occluding part of the user's face, the occluded part in the second area of the first GUI does not display the simulated makeup image.
With reference to the fourth aspect, in one possible design, the first area of the first GUI includes a first still image obtained from an image of a face of the user captured by a camera; the second region of the first GUI includes a second still image formed by simulating makeup for the face image of the user.
With reference to the fourth aspect, in one possible design, the third area of the first GUI includes the first information, and the method further includes: receiving input operation of a user on first information; responding to the input operation of the user on the first information, and displaying a third GUI by the electronic equipment; the second area of the third GUI includes a user face simulation makeup image formed according to the first information; and the second area of the third GUI and the second area of the first GUI are the same display area on the display screen.
In a fifth aspect, the present application provides an electronic device, which may implement the display method applied to the electronic device described in any one of the first aspect and the second aspect and possible design manners thereof, and may implement the method by software, hardware, or by executing corresponding software by hardware. In one possible design, the electronic device may include a processor and a memory. The processor is configured to enable the electronic device to perform the corresponding functions of any one of the first and second aspects and possible designs thereof. The memory is for coupling with the processor and holds the necessary program instructions and data for the electronic device.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, which includes computer instructions, when the computer instructions are executed on an electronic device, the electronic device is caused to perform the display method applied to the electronic device as described in any one of the first aspect and the second aspect and possible design manners thereof.
In a seventh aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to execute the display method applied to the electronic device as described in any one of the first and second aspects and possible design manners thereof.
For technical effects brought by the electronic device of the third aspect, the GUI display method of the fourth aspect, the electronic device of the fifth aspect, the computer-readable storage medium of the sixth aspect, and the computer program product of the seventh aspect, reference may be made to technical effects brought by the above corresponding methods, which are not described herein again.
Drawings
Fig. 1 is a schematic view of an example of a scene of a display method applied to an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic view of an example of a scene of a display method applied to an electronic device according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 6 is a schematic view of an example of a scene of a display method applied to an electronic device according to an embodiment of the present application;
fig. 7A to fig. 7D are schematic diagrams illustrating example scenes of a display method applied to an electronic device according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a display method applied to an electronic device according to an embodiment of the present disclosure;
fig. 9 is a schematic view of an example of a scene of a display method applied to an electronic device according to an embodiment of the present application;
fig. 10A is a flowchart illustrating a display method applied to an electronic device according to an embodiment of the present disclosure;
figs. 10B-10D are schematic diagrams illustrating example scenes of a display method applied to an electronic device according to an embodiment of the present disclosure;
fig. 11A is a schematic flowchart of a display method applied to an electronic device according to an embodiment of the present disclosure;
figs. 11B-11D are schematic diagrams illustrating example scenes of a display method applied to an electronic device according to an embodiment of the present disclosure;
figs. 12-18 are schematic diagrams illustrating examples of scenes of a display method applied to an electronic device according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the following examples is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more. The term "and/or" is used to describe an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless otherwise noted.
The embodiment of the application provides a display method applied to an electronic device, which can display the user's makeup process, guide the user in applying makeup, and provide makeup auxiliary functions that help the user complete a makeup look. The electronic device may include a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable device, a virtual reality device, and the like, which is not limited in this embodiment.
The mobile phone 100 is taken as an example of the electronic device. Referring to fig. 1, the display interface of the mobile phone 100 may include first display content 11 and second display content 12. The first display content 11 displays the user face image captured by the phone's camera; the second display content 12 displays the user face makeup image, that is, the face image captured by the camera with a simulated makeup effect superimposed on it. The user can apply makeup with reference to the second display content 12, and can also correct the makeup on the face in real time by comparing the first display content 11 (the user face image) with the second display content 12 (the user face makeup image). As shown in fig. 1, the embodiment of the present application does not limit the positions where the first display content 11 and the second display content 12 are displayed on the display screen (also referred to as a screen in the present application) of the mobile phone 100. In some embodiments, the display interface of the mobile phone 100 also includes third display content 13. The third display content 13 displays makeup auxiliary information. In some examples, the makeup auxiliary information may include a makeup palette 131, the makeup palette 131 including makeup parameters. The user may select different makeup parameters so that the second display content 12 (the user face makeup image) presents a corresponding effect. The makeup auxiliary information may further include makeup step guidance information 132, which indicates the makeup steps. In the embodiment of the present application, the display area of the first display content 11 on the display screen is referred to as a first display frame, and the display area of the second display content 12 on the display screen is referred to as a second display frame.
The electronic device may be a mobile phone 200 having a foldable screen. The cell phone 200 may be folded along one or more folding axes for use. As shown in fig. 2, the screen (folding screen) of the mobile phone 200 is divided into a first display area 21 and a second display area 22 along a folding axis, and an included angle α is formed between the first display area 21 and the second display area 22, where α is located in an interval of 0 ° to 180 °. It will be appreciated that the first display area 21 and the second display area 22 are different areas belonging to the same folding screen. In some embodiments, the first display content 11 and the second display content 12 are displayed in the first display area 21 and the second display area 22, respectively; in other embodiments, both the first display content 11 and the second display content 12 are displayed in the first display area 21. In some embodiments, the first display area 21 displays the first display content 11 and the second display content 12; the second display area 22 displays the third display content 13.
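As an implementation aside, the fold angle α can be read on recent Android foldables through the platform hinge-angle sensor (available from API 30); how the application then assigns content to the two display areas is a design choice outside this sketch.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Reports the hinge angle (in degrees, 0..180) as the device folds.
class FoldAngleWatcher(context: Context, private val onAngle: (Float) -> Unit)
    : SensorEventListener {
    private val sm = context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val hinge = sm.getDefaultSensor(Sensor.TYPE_HINGE_ANGLE)

    fun start() { hinge?.let { sm.registerListener(this, it, SensorManager.SENSOR_DELAY_UI) } }
    fun stop() = sm.unregisterListener(this)

    override fun onSensorChanged(e: SensorEvent) = onAngle(e.values[0])
    override fun onAccuracyChanged(s: Sensor?, accuracy: Int) {}
}
```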
According to the display method applied to the electronic device provided in the embodiment of the present application, the display screen of the electronic device can display the first display content 11 and/or the second display content 12. The user can view the current makeup progress in real time through the first display content 11, see the complete makeup effect through the second display content 12, and compare the first display content 11 (the user face image) with the second display content 12 (the makeup image). The user can also apply makeup following the makeup step guidance information of the third display content 13, and can select makeup parameters to modify the image effect of the second display content 12. In this way, the method brings all-around makeup assistance to the user and helps the user apply a refined makeup look.
Referring to fig. 3, a schematic structural diagram of an electronic device 300 is shown.
The electronic device 300 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 300. In other embodiments of the present application, electronic device 300 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 300. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180L, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180L via an I2C interface, such that the processor 110 and the touch sensor 180L communicate via an I2C bus interface to implement the touch functionality of the electronic device 300.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 300. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 300, and may also be used to transmit data between the electronic device 300 and peripheral devices. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 300. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 300. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 300, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 300 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 300 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 300 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 300 may include 1 or N display screens 194, where N is a positive integer greater than 1. In the embodiment of the present application, the display screen 194 may include a display and a touch panel (TP). The display is used to output display content to the user, and the touch device is used to receive touch events input by the user on the display screen 194. In the embodiment of the present application, the display screen 194 may be used to display the first display content 11, the second display content 12, the third display content 13, and the like.
The electronic device 300 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, electronic device 300 may include 1 or N cameras 193, N being a positive integer greater than 1. In one example, cameras 193 can include wide-angle cameras, photographic cameras, 3D depth-sensing cameras (e.g., structured light cameras, time-of-flight (ToF) cameras), tele-cameras, and the like. In some embodiments, camera 193 may include a front camera and a rear camera. In the embodiment of the present application, the camera 193 (e.g., a front camera) may be used to capture an image of the face of the user.
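For illustration, a front-camera frame stream of the kind this method consumes could be obtained with CameraX as sketched below; CameraX is an example framework, not something the patent specifies, and onFrame is a hypothetical hand-off to the makeup renderer.

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat

// Bind the front camera and deliver each frame to an analyzer callback.
fun bindFrontCamera(activity: AppCompatActivity, onFrame: (ImageProxy) -> Unit) {
    val future = ProcessCameraProvider.getInstance(activity)
    future.addListener({
        val provider = future.get()
        val analysis = ImageAnalysis.Builder().build().apply {
            setAnalyzer(ContextCompat.getMainExecutor(activity)) { image ->
                onFrame(image)   // e.g. hand off to the makeup renderer
                image.close()
            }
        }
        provider.bindToLifecycle(activity, CameraSelector.DEFAULT_FRONT_CAMERA, analysis)
    }, ContextCompat.getMainExecutor(activity))
}
```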
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 300 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs. In this way, the electronic device 300 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent recognition of the electronic device 300, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
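The patent does not name a face-analysis component, but as one hedged example of the on-device face recognition an NPU typically accelerates, a library such as ML Kit can locate the facial landmarks a makeup renderer would need:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions
import com.google.mlkit.vision.face.FaceLandmark

// Detect a face in one frame and report a landmark position; the callback
// name and the choice of landmark are illustrative.
fun detectFace(frame: Bitmap, onMouth: (Float, Float) -> Unit) {
    val options = FaceDetectorOptions.Builder()
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .build()
    FaceDetection.getClient(options)
        .process(InputImage.fromBitmap(frame, 0))
        .addOnSuccessListener { faces ->
            faces.firstOrNull()?.getLandmark(FaceLandmark.MOUTH_BOTTOM)?.let {
                onMouth(it.position.x, it.position.y)  // e.g. anchor lip makeup here
            }
        }
}
```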
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 300. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The data storage area may store data (e.g., audio data, phone book, etc.) created during use of the electronic device 300, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 300 may implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 300 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic device 300 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking the user's mouth near the microphone 170C. The electronic device 300 may be provided with at least one microphone 170C. In other embodiments, the electronic device 300 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 300 may further include three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and the like.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. A capacitive pressure sensor may include at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 300 determines the pressure intensity from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 300 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 300 may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
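For illustration, the following is a minimal Java sketch of this threshold-based dispatch; the threshold value, class name, and method names are assumptions introduced for this example and are not specified by the disclosure.

```java
public class PressureDispatcher {
    // Assumed normalized pressure reading in [0, 1]; the value is illustrative.
    private static final float FIRST_PRESSURE_THRESHOLD = 0.5f;

    enum Instruction { VIEW_MESSAGE, NEW_MESSAGE }

    // Maps a touch on the short message icon to an instruction by intensity:
    // below the threshold -> view the message, at or above -> create a new one.
    static Instruction onMessageIconTouch(float pressure) {
        return (pressure < FIRST_PRESSURE_THRESHOLD)
                ? Instruction.VIEW_MESSAGE
                : Instruction.NEW_MESSAGE;
    }

    public static void main(String[] args) {
        System.out.println(onMessageIconTouch(0.3f)); // VIEW_MESSAGE
        System.out.println(onMessageIconTouch(0.8f)); // NEW_MESSAGE
    }
}
```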
The air pressure sensor 180B is used to measure air pressure. In some embodiments, the electronic device 300 calculates the altitude from the air pressure value measured by the air pressure sensor 180B, to assist in positioning and navigation.
The gyro sensor 180C may be used to determine the motion attitude of the electronic device 300. In some embodiments, the angular velocity of the electronic device 300 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 180C. The gyro sensor 180C may be used for image stabilization during photographing. For example, when the shutter is pressed, the gyro sensor 180C detects the shake angle of the electronic device 300, calculates the distance that the lens module needs to compensate according to the shake angle, and lets the lens counteract the shake of the electronic device 300 through reverse movement, thereby achieving anti-shake. The gyro sensor 180C may also be used in navigation and motion-sensing gaming scenarios.
The acceleration sensor 180D may detect the magnitude of acceleration of the electronic device 300 in various directions (typically three axes). The magnitude and direction of the gravitational acceleration may be detected when the electronic device 300 is stationary.
The magnetic sensor 180E includes a Hall sensor, a magnetometer, and the like. The Hall sensor can detect the direction of a magnetic field; the magnetometer is used to measure the magnitude and direction of a magnetic field. For example, the magnetometer can measure the ambient magnetic field strength so as to obtain azimuth information of the device carrying it.
The touch device 180F may be used to detect a touch position of the user. In some embodiments, a touch point of the user on the electronic device 300 may be detected through the touch device 180F, and then a grip posture of the user is determined using a preset grip algorithm according to the touch position.
The distance sensor 180G is used to measure distance. The electronic device 300 may measure distance by infrared or laser. In some embodiments, in a photographing scenario, the electronic device 300 may use the distance sensor 180G to measure distance so as to achieve fast focusing.
The proximity light sensor 180H may include, for example, a light emitting diode (LED) and a light detector such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 300 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 300; when insufficient reflected light is detected, the electronic device 300 may determine that there is no object nearby. The electronic device 300 can use the proximity light sensor 180H to detect that the user is holding the electronic device 300 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 180H may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180M is used to sense the ambient light level. The electronic device 300 may adaptively adjust the brightness of the display screen 194 based on the perceived ambient light level. The ambient light sensor 180M may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180M may also cooperate with the proximity light sensor 180H to detect whether the electronic device 300 is in a pocket to prevent accidental touches.
The fingerprint sensor 180J is used to collect a fingerprint. The electronic device 300 may utilize the collected fingerprint characteristics to implement fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 180K is used to detect temperature. In some embodiments, the electronic device 300 implements a temperature processing strategy using the temperature detected by the temperature sensor 180K. For example, when the temperature reported by the temperature sensor 180K exceeds a threshold, the electronic device 300 reduces the performance of a processor located near the temperature sensor 180K, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 300 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is below a further threshold, the electronic device 300 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
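A minimal sketch of such a temperature processing strategy is given below; the concrete threshold values and action messages are illustrative assumptions, since the disclosure does not specify them.

```java
public class ThermalPolicy {
    private static final float HOT_C = 45f;        // assumed upper threshold
    private static final float COLD_C = 0f;        // assumed "another threshold"
    private static final float VERY_COLD_C = -10f; // assumed "further threshold"

    static void onTemperatureReported(float celsius) {
        if (celsius > HOT_C) {
            // Thermal protection: shed heat by lowering processor performance.
            System.out.println("reduce performance of nearby processor");
        } else if (celsius < VERY_COLD_C) {
            // Below the further threshold: boost battery output voltage.
            System.out.println("boost battery output voltage");
        } else if (celsius < COLD_C) {
            // Below the other threshold: heat the battery.
            System.out.println("heat the battery");
        }
    }

    public static void main(String[] args) {
        onTemperatureReported(50f);
        onTemperatureReported(-5f);
        onTemperatureReported(-20f);
    }
}
```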
The touch sensor 180L is also referred to as a "touch panel". The touch sensor 180L may be disposed on the display screen 194; the touch sensor 180L and the display screen 194 form a touchscreen. The touch sensor 180L is used to detect a touch operation acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180L may also be disposed on the surface of the electronic device 300 at a position different from that of the display screen 194.
The bone conduction sensor 180Q may acquire a vibration signal. In some embodiments, the bone conduction sensor 180Q may acquire the vibration signal of the bone mass vibrated by the human vocal part. The bone conduction sensor 180Q may also contact the human pulse to receive the blood pressure beating signal. In some embodiments, the bone conduction sensor 180Q may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the bone mass vibrated by the vocal part acquired by the bone conduction sensor 180Q, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180Q, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 300 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 300.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues as well as touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. Touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into contact with or separated from the electronic device 300 by being inserted into or pulled out of the SIM card interface 195. The electronic device 300 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and may also be compatible with external memory cards. The electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 300 employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the electronic device 300 and cannot be separated from the electronic device 300.
The software system of the electronic device 300 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes a layered architecture as an example, and exemplifies a software structure of the electronic device 300. Fig. 4 is a block diagram of a software structure of an electronic device 300 according to an embodiment of the present disclosure.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the electronic device 300 may include an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer.
The application layer may include a series of applications.
As shown in fig. 4, the application layer may include applications such as an intelligent makeup box, a makeup area layout display, a makeup pan layout display, a makeup color mode switching control, a curing and magnification control, a makeup mirror light supplement control, and a makeup extraction control. The intelligent makeup box is used to provide an auxiliary makeup function. The makeup pan layout display is used to control the display of the makeup auxiliary information of the intelligent makeup box on the display screen. The makeup extraction control is used to analyze the collected facial image of the user to obtain makeup parameters. The makeup area layout display is used to perform simulated makeup processing on the facial image of the user to form a simulated makeup image of the user's face. The makeup color mode switching control is used to control the electronic device to switch the display mode. The curing and magnification control module is used to perform image curing display of the dynamic display content of the makeup application display interface, and is also used to control the electronic device to magnify and display the display content of the makeup application display interface. The makeup mirror light supplement control is used to control the fill-light effect displayed in the peripheral area of the display screen of the electronic device.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include an event distribution management service, a power management service (power manager service, PMS), a timing management service (alarm manager service, AMS), a window management service (window manager service, WMS), a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like, which is not limited in this embodiment.
The event distribution management service is used to distribute events to the applications. The power management service may be used to control the lighting or blanking of the screen of the electronic device 300. The timing management service is used to manage timers, alarm clocks, and the like. The window management service is used to manage window programs. The window management service may obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like. The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a short message notification icon may include a view for displaying text and a view for displaying pictures. The phone manager is used to provide communication functions of the electronic device 300, for example management of call states (including connected, hung up, etc.). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a brief stay without user interaction, for example to notify of download completion or to give message alerts. The notification manager may also present notifications in the form of a chart or scroll bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system.

The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a sensor service, a surface manager (surface manager), media libraries (Media Libraries), a gesture recognition engine, a face recognition engine, a graphics processing engine, an image tracking engine, a natural language recognition engine, and the like.
The sensor service is used to store and process sensor related data, for example providing the output data of each sensor and performing fusion processing on the output data of multiple sensors. The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications. The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The gesture recognition engine is used to process gesture recognition related processes. The face recognition engine is used to process face recognition related data and processes. The graphics processing engine is a drawing engine for graphics, image processing, and the like. The image tracking engine is used to process image tracking related processes. The natural language recognition engine is used to support processes such as voice recognition and semantic recognition.
The kernel layer is a layer between hardware and software. The kernel layer comprises at least a display driver, a camera driver, an audio driver, and a sensor driver.
In an embodiment of the application, the audio driver is configured to upload speech received by a microphone of the electronic device to the natural language recognition engine. The natural language recognition engine performs semantic recognition to determine the semantic meaning of the collected voice, generates a corresponding event according to the semantic meaning of the voice, and reports the event to the event distribution management service. For example, if the natural language recognition engine determines that the semantic meaning of the voice is "switch makeup color mode", a mode switching event is generated; if the semantic meaning of the voice is determined to be "fixed", a curing event is generated; if the semantic meaning of the voice is determined to be "magnify", a center magnification event is generated; if the semantic meaning of the voice is determined to be "magnify mouth", a tracking magnification event is generated.
The camera driver is used to upload a user facial image collected by a camera of the electronic device to the face recognition engine. The face recognition engine recognizes the expression in the facial image of the user, generates a corresponding event according to the expression, and reports the event to the event distribution management service. For example, if the face recognition engine determines that the collected expression is a "smiling face", a curing event is generated.
The camera driver can also be used to upload a hand image of the user collected by a camera of the electronic device to the gesture recognition engine. The gesture recognition engine recognizes the gesture corresponding to the hand image of the user, generates a corresponding event according to the gesture, and reports the event to the event distribution management service. For example, if the gesture recognition engine determines that the collected gesture is a "bixin" (finger-heart) gesture, a curing event is generated; if the collected gesture is determined to be a zoom gesture, a center magnification event is generated.
A sensor of the electronic device detects the user's operation on the screen and notifies the sensor service through the sensor driver. The sensor service determines the user's operation gesture on the screen, generates a corresponding event according to the current process, and reports the event to the event distribution management service. For example, if the sensor service determines that a rightward slide by the user on the screen is detected, a mode switching event is generated; if a tap on the screen by the user is detected, a center magnification event is generated; if a click on the magnify button by the user is detected, a center magnification event is generated; if a two-finger spread operation by the user is detected, a tracking magnification event is generated.
The event distribution management service distributes each event to the corresponding module of the application layer. For example, a mode switching event is distributed to the makeup color mode switching control module; a curing event, a center magnification event, a tracking magnification event, and the like are distributed to the curing and magnification control module.
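For illustration, the following is a minimal Java sketch of this event generation and distribution flow, in which recognition results are mapped to events and forwarded to registered application modules; all class, method, and handler names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class EventDistributionService {
    enum Event { MODE_SWITCH, CURING, CENTER_MAGNIFY, TRACKING_MAGNIFY }

    // Application-layer modules register for the event types they handle.
    private final Map<Event, Consumer<Event>> handlers = new HashMap<>();

    void register(Event type, Consumer<Event> module) {
        handlers.put(type, module);
    }

    // Called by the recognition engines or the sensor service when an event
    // is generated; the service forwards it to the registered module.
    void report(Event e) {
        Consumer<Event> module = handlers.get(e);
        if (module != null) module.accept(e);
    }

    // Example mapping from a recognized voice semantic to an event.
    static Event fromVoiceSemantic(String semantic) {
        switch (semantic) {
            case "switch makeup color mode": return Event.MODE_SWITCH;
            case "fixed":                    return Event.CURING;
            case "magnify":                  return Event.CENTER_MAGNIFY;
            case "magnify mouth":            return Event.TRACKING_MAGNIFY;
            default: throw new IllegalArgumentException(semantic);
        }
    }

    public static void main(String[] args) {
        EventDistributionService svc = new EventDistributionService();
        svc.register(Event.MODE_SWITCH,
                e -> System.out.println("mode switching control module: " + e));
        svc.register(Event.CURING,
                e -> System.out.println("curing and magnification module: " + e));
        svc.report(fromVoiceSemantic("fixed")); // curing and magnification module: CURING
    }
}
```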
A display method applied to an electronic device according to an embodiment of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that, the embodiment of the present application is described by taking an electronic device with a foldable screen as an example, and it can be understood that the display method applied to the electronic device provided by the present application is also applicable to an electronic device whose screen cannot be folded.
An electronic device having a folding screen may be folded along one or more folding axes. Referring to fig. 5 (a), the electronic device is in an unfolded state. The user can fold the electronic device shown in (a) of fig. 5 along a folding axis of the electronic device. As shown in fig. 5 (b), after the user folds the electronic apparatus along the folding axis AB, the screen (folding screen) of the electronic apparatus is divided into two display areas, i.e., a first display area 21 and a second display area 22, along the folding axis AB. In the embodiment of the present application, the first display region 21 and the second display region 22 formed after being folded may be displayed as two separate regions. For example, the display interface of the first display area 21 may be referred to as a first display interface, and the display interface of the second display area 22 may be referred to as a second display interface; namely, the display area of the folding screen where the first display interface is located is a first display area 21; the display area of the folding screen where the second display interface is located is a second display area 22. The first display interface and the second display interface are respectively positioned on two sides of the folding shaft. The areas of the first display region 21 and the second display region 22 may be the same or different. The first display area 21 and the second display area 22 may form an included angle α, as shown in fig. 5 (b), and the electronic device is in a bent state. The user may continue to fold the electronic apparatus along the folding axis AB, as shown in fig. 5 (c), with the electronic apparatus in a folded state.
It is understood that the first display area 21 and the second display area 22 may have other names, for example, the first display area 21 is referred to as an a screen of the electronic device, and the second display area 22 is referred to as a B screen of the electronic device; alternatively, the first display area 21 is referred to as an upper screen of the electronic device, and the second display area 22 is referred to as a lower screen of the electronic device; this is not limited in the examples of the present application. In the embodiment of the present application, the first display area 21 of the folding screen is referred to as a first screen, and the second display area 22 of the folding screen is referred to as a second screen; it will be appreciated that the first and second screens are different regions belonging to the same folded screen.
It will be appreciated that the included angle α between the first screen 21 and the second screen 22 may lie in the interval from 0° to 180°. In some embodiments, when the angle α between the first screen 21 and the second screen 22 is less than a first threshold and greater than a second threshold, the electronic device can stand on a plane by itself, without a stand or the user's hand. This frees both hands of the user and makes it convenient for the user to operate the electronic device. In one example, the first threshold may be 150° and the second threshold may be 60°; in another example, the first threshold may be 120° and the second threshold may be 60°; in another example, the first threshold may be 120° and the second threshold may be 30°. This is not limited in the embodiments of the present application.
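A minimal sketch of this free-standing check follows, using one of the example threshold pairs above (first threshold 120°, second threshold 60°); the class and method names are assumptions.

```java
public class HoverCheck {
    private static final double FIRST_THRESHOLD = 120.0;  // degrees, example value
    private static final double SECOND_THRESHOLD = 60.0;  // degrees, example value

    // True if the included angle allows the device to stand unsupported.
    static boolean canStandUnsupported(double alphaDegrees) {
        return alphaDegrees < FIRST_THRESHOLD && alphaDegrees > SECOND_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(canStandUnsupported(90.0));  // true
        System.out.println(canStandUnsupported(170.0)); // false
    }
}
```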
In some examples, the user unfolds the electronic device so that the value of α increases to a first angle (the first angle being less than the first threshold and greater than the second threshold), and the electronic device is kept at the first angle for a first duration (such as 1 s); or the user bends the electronic device so that the value of α decreases to the first angle, and the electronic device is kept at the first angle for the first duration; the display screen of the electronic device then displays a shortcut menu. In one example, the shortcut menu includes a shortcut icon for a makeup application (such as a smart cosmetic box). The user may open the makeup application by clicking the shortcut icon of the makeup application. Illustratively, as shown in fig. 6, a shortcut menu 602 is displayed on a desktop 601 of the electronic device, and the shortcut menu 602 includes a "smart cosmetic box" icon 603. The user may open the makeup application by clicking the "smart cosmetic box" icon 603. Optionally, after the desktop 601 pops up the shortcut menu 602, the other icons originally displayed on the desktop 601 are displayed blurred. Illustratively, the electronic device receives an operation of the user clicking the "smart cosmetic box" icon 603, and in response to this operation, the electronic device displays a display interface of the makeup application.
In some embodiments, the display interface of the cosmetic application may include at least one of the first display content 11 and the second display content 12.
In one example, as shown in fig. 7A (a), the display interface of the makeup application includes the first display content 11, and the first display content 11 is displayed on the first screen 21 of the display screen. In one implementation, a front-facing camera of the electronic device captures images of the user's face and displays the captured images within the first screen 21 of the display screen. In the embodiments of the present application, this display mode is referred to as the mirror mode of the makeup application.
In another example, as shown in (b) of fig. 7A, the display interface of the makeup application includes the second display content 12, and the second display content 12 is displayed on the first screen 21 of the display screen. In one implementation, referring to fig. 4, the front-facing camera of the electronic device collects a facial image of the user, and the facial image is transmitted to the graphics processing engine through the camera driver. The graphics processing engine performs simulated makeup processing on the user facial image collected by the camera using an augmented reality algorithm, to form a simulated makeup image of the user's face. The intelligent makeup box module controls the display of the simulated makeup image of the user's face in the first screen 21 of the display screen. In the embodiments of the present application, this display mode is referred to as the makeup color mode of the makeup application.
In another example, as shown in (c) of fig. 7A, the display interface of the makeup application includes the first display content 11 and the second display content 12, both displayed on the first screen 21 of the display screen. In one implementation, the intelligent makeup box module controls the user facial image collected by the camera and the simulated makeup image of the user's face formed by the graphics processing engine to be displayed synchronously, in split screen, within the first screen 21 of the display screen. In the embodiments of the present application, this display mode is referred to as the mixed mode of the makeup application.
In some examples, the mirror mode, the makeup color mode, and the mixed mode may be switched between one another. When the electronic device receives a mode switching operation, it switches the display mode. Illustratively, the mode switching operation includes a finger sliding operation on the display screen: each time the electronic device detects the finger sliding operation (such as a single-finger slide to the right), the display mode is switched in the order mirror mode, makeup color mode, mixed mode. Illustratively, the mode switching operation includes a first voice. For example, when the electronic device detects the voice "switch to makeup color mode", the electronic device switches to the makeup color mode for display. It can be understood that the mode switching operation may also take other forms, such as clicking a switch button or a mode switching gesture, which is not limited in this embodiment.
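For illustration, the following is a minimal Java sketch of cycling the display mode in the stated order on each mode switching operation; the class and method names are hypothetical.

```java
public class DisplayModeController {
    enum Mode { MIRROR, MAKEUP_COLOR, MIXED }

    private Mode current = Mode.MIRROR;

    // Each detected mode switching operation (e.g. a single-finger right slide)
    // advances the mode in the order mirror -> makeup color -> mixed -> mirror.
    Mode onModeSwitchOperation() {
        current = Mode.values()[(current.ordinal() + 1) % Mode.values().length];
        return current;
    }

    // A voice command such as "switch to makeup color mode" jumps directly.
    void switchTo(Mode target) {
        current = target;
    }

    public static void main(String[] args) {
        DisplayModeController c = new DisplayModeController();
        System.out.println(c.onModeSwitchOperation()); // MAKEUP_COLOR
        System.out.println(c.onModeSwitchOperation()); // MIXED
        System.out.println(c.onModeSwitchOperation()); // MIRROR
    }
}
```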
In some embodiments, the first display content 11 includes an image of the user's face and an object (e.g., a makeup tool, a human hand, etc.) occluding the user's face; that is, the image captured by the camera includes an image of the user's face and an object occluding the user's face. In one implementation, the electronic device (e.g., the makeup area layout display module in fig. 4) processes the image captured by the camera using an image recognition algorithm, and obtains the facial parts of the user (e.g., eyebrows, eyes, nose, mouth, face contour, etc.) and the occluding object (e.g., a human hand, a makeup tool, etc.). The electronic device performs simulated makeup processing, using an augmented reality algorithm, on the part of the image captured by the camera that is not occluded by the occluding object, to form the second display content 12. In the second display content 12, the part of the user's face that is not occluded by the occluding object displays the simulated makeup image, and the part occluded by the occluding object displays the occluding object (without the simulated makeup effect). In this way, the simulated makeup effect does not appear on the occluding object in the simulated makeup image displayed by the electronic device, bringing the user an immersive makeup display experience.
Illustratively, as shown in fig. 7B, the image of the user's face within the first display frame is partially occluded by a lipstick and a human hand. The electronic device performs simulated makeup processing on the parts of the user facial image that are not occluded by the lipstick and the hand, to form a simulated makeup image of the user's face. As shown in fig. 7B, in the mouth region of the simulated makeup image of the user's face within the second display frame, the simulated makeup effect is displayed only on the portion that is not occluded.
In some embodiments, a dashed box may be superimposed on the simulated makeup image of the user's face to indicate the makeup position and shape, guiding the user to apply makeup. In one example, the dashed box may move as the simulated makeup image of the user's face moves, so that the dashed box always stays at the indicated position. In one implementation, when the electronic device displays the makeup step guide information 132, the dashed box is displayed at the corresponding position on the simulated makeup image of the user's face according to the position indicated by the makeup step guide information 132. Illustratively, as shown in fig. 7C, the makeup part indicated by the makeup step guide information 132 is the mouth, and correspondingly a dashed box 701 is superimposed on the mouth region of the simulated makeup image of the user's face displayed in the second display frame. The dashed box 701 is used to indicate the position and shape for applying makeup to the mouth. In another implementation, the display interface of the makeup application includes a simulation button for turning the dashed-box display function on or off. For example, the user clicks the simulation button to put it in the pressed state, the dashed-box display function is turned on, and the electronic device displays the dashed box at the corresponding position on the simulated makeup image of the user's face according to the position indicated by the makeup step guide information 132. The user clicks the simulation button to put it in the released state, the dashed-box display function is turned off, and the dashed box is no longer displayed on the simulated makeup image of the user's face. Illustratively, as shown in fig. 7D, the makeup part indicated by the makeup step guide information 132 is the mouth. The display interface of the electronic device includes an "image guide" button 702; if the electronic device receives an operation of the user pressing the "image guide" button 702, a dashed box 701 is superimposed on the mouth region of the simulated makeup image of the user's face displayed in the second display frame.
The embodiment of the application provides a display method applied to an electronic device. When the electronic device displays the display interface of the makeup application, image curing display can be performed on the dynamic display content of the display interface, making it convenient for the user to view the display content. As shown in fig. 8, the method may include:
S801, the electronic device displays a first interface of a makeup application, the first interface including at least one of first display content 11 and second display content 12.
In one example, the display mode of the makeup application is the mirror mode; the first interface includes the first display content 11. Illustratively, as shown in fig. 7A (a), the first screen 21 of the electronic device displays an image of the user's face captured by the camera.
In another example, the display mode of the makeup application is the makeup color mode; the first interface includes the second display content 12. Illustratively, as shown in fig. 7A (b), the first screen 21 of the electronic device displays a simulated makeup image of the user's face.
In another example, the display mode of the makeup application is the mixed mode; the first interface includes the first display content 11 and the second display content 12. Illustratively, as shown in fig. 7A (c), the first screen 21 of the electronic device displays the user facial image and the simulated makeup image of the user's face in left and right split screens.
It is understood that in the above examples, the second screen 22 of the electronic device may not display any content (such as turning off the screen), or the second screen 22 of the electronic device may display the third display content 13 (makeup auxiliary information), or the second screen 22 of the electronic device may display other content, which is not limited in this embodiment.
In another example, the first screen 21 and the second screen 22 of the electronic device collectively display the first interface. For example, the first screen 21 of the electronic device displays the first display content 11, and the second screen 22 displays the second display content 12; or the first screen 21 displays the second display content 12, the second screen 22 displays the first display content 11, and so on.
In other examples, the second screen 22 of the electronic device displays the first interface of the makeup application, the first screen 21 of the electronic device may not display any content (such as turning off the screen), or the first screen 21 of the electronic device may display the third display content 13 (makeup auxiliary information), or the first screen 21 of the electronic device may display other content, which is not limited in this embodiment. And will not be described in detail herein.
S802, the electronic device receives a first operation.
The first operation may include: a voice (e.g., the voice "fixed"), a gesture (e.g., an "OK" gesture or a "bixin" (finger-heart) gesture), an expression (e.g., a smiling face), a button click operation, a screen tap operation, and the like.
S803, in response to receiving the first operation, the electronic device displays the first object in the first display frame and displays the second object in the second display frame.
In one implementation, the electronic device receives the first operation of the user, and acquires the first object and/or the second object after delaying for a first duration (e.g., 3 seconds). This avoids the gesture, expression, or the like corresponding to the first operation being included in the image-cured interface and interfering with the user's view of the makeup. In some examples, after receiving the first operation of the user and delaying for the first duration, the electronic device obtains the first object according to the first display content 11 and obtains the second object according to the second display content 12. The electronic device stops displaying the first display content in the first display frame and displays the first object in the first display frame; the electronic device stops displaying the second display content in the second display frame and displays the second object in the second display frame. The first display frame is the display area of the first display content 11 on the display screen, and the second display frame is the display area of the second display content 12 on the display screen.
The first object and the second object may be still images or short videos.
In one example, the first object and the second object are still images. At a first moment (after receiving the first operation of the user and delaying for the first duration), the electronic device captures the current frame of the user facial image collected by the camera as the first object, and captures the current frame of the simulated makeup image of the user's face as the second object. Further, the user may save the first object and the second object as pictures.
In another example, the first object and the second object are short videos. At the first moment (after receiving the first operation of the user and delaying for the first duration), the electronic device captures the current frame of the user facial image collected by the camera and the t (t > 0) frames after it as the first object, and captures the current frame of the simulated makeup image of the user's face and the t (t > 0) frames after it as the second object. Further, the user may save the first object and the second object as videos.
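A minimal sketch of the two capture variants (a still image, and the current frame plus the following t frames) follows; the frame source is abstracted behind a supplier, and all type and method names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class CuringCapture {
    interface Frame {} // placeholder for one camera frame or rendered frame

    // Still image: take the current frame at the first moment.
    static Frame captureStill(Supplier<Frame> frameStream) {
        return frameStream.get();
    }

    // Short video: take the current frame plus the t frames after it (t > 0).
    static List<Frame> captureShortVideo(Supplier<Frame> frameStream, int t) {
        List<Frame> clip = new ArrayList<>();
        for (int i = 0; i <= t; i++) {
            clip.add(frameStream.get());
        }
        return clip;
    }

    public static void main(String[] args) {
        Supplier<Frame> stream = () -> new Frame() {};
        System.out.println(captureShortVideo(stream, 30).size()); // 31 frames
    }
}
```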
In some embodiments, as in fig. 7A (c), the display mode of the makeup application is the mixed mode, and the first interface includes the first display content 11 and the second display content 12.
In one implementation, in response to receiving the first operation, the electronic device acquires the first object according to the first display content 11, acquires the second object according to the second display content 12, and displays the first object and the second object (the image-cured interface includes the first object and the second object); that is, image curing display is performed on both the user facial image and the simulated makeup image of the user's face. This makes it convenient for the user to compare the user facial image with the simulated makeup image and adjust the makeup steps. Further, the first object and the second object may be saved as one picture. Illustratively, as shown in fig. 9, the electronic device displays an interface 901. The interface 901 comprises a first object 902 and a second object 903. The interface 901 further comprises prompt information 904 for prompting the user to save the first object 902 and the second object 903. Optionally, the interface 901 may also include an "ok" button 905 and a "cancel" button 906; the "ok" button 905 is used to confirm saving the first object 902 and the second object 903 as a picture, and the "cancel" button 906 is used to confirm not saving the first object 902 and the second object 903.
In another implementation, in response to receiving the first operation, the electronic device acquires the first object according to the first display content 11, and displays the first object and the second display content 12; that is, image curing display is performed only on the user facial image. This makes it convenient for the user to carefully view the current facial makeup effect. Further, the first object may be saved as a picture.
In another implementation, in response to receiving the first operation, the electronic device acquires the second object according to the second display content 12, and displays the first display content 11 and the second object; that is, image curing display is performed only on the simulated makeup image of the user's face. This makes it convenient for the user to carefully view the simulated makeup effect. Further, the second object may be saved as a picture.
Optionally, the corresponding image curing display mode may be selected according to the user operation. For example, if a "bixin" (finger-heart) gesture of the user is received, the electronic device performs image curing display on both the user facial image and the simulated makeup image of the user's face; if a double-click operation by the user within the first display frame is received, the electronic device performs image curing display only on the user facial image; if a double-click operation by the user within the second display frame is received, the electronic device performs image curing display only on the simulated makeup image of the user's face.
Furthermore, the user can view and edit the saved pictures or videos. For example, the user may perform operations such as enlarging, rotating, cropping, adjusting colors, etc. on the saved picture. As another example, the user may save one or more frames of images in the saved video as pictures. In some examples, the user may also share saved pictures or videos into a social application.
The embodiment of the application provides a display method applied to an electronic device. When the electronic device displays the display interface of the makeup application, the display content of the display interface can be magnified and displayed, for easy viewing by the user.
In one example, the electronic device magnifies the display content in a center magnification manner. In this way, the center of the user's face can be kept at the center of the display. As shown in fig. 10A, the method includes:
s1001, the electronic equipment displays a first interface of the makeup application.
The electronic device displays a first interface of a makeup application, the first interface including at least one of first display content 11, a first object, second display content 12, and a second object. The first display content 11 or the first object is displayed in the first display frame, and the second display content 12 or the second object is displayed in the second display frame.
That is, the electronic device may magnify and display the dynamic user facial image or the dynamic simulated makeup image of the user's face. The display content of the image curing display (the first object or the second object) may also be magnified and displayed.
S1002, the electronic device receives a first input.
The first input may include: voice (e.g., voice "zoom in"), tapping the screen (e.g., single click, double click), clicking a button, etc.
S1003, in response to the first input, the electronic device magnifies and displays the display content of the first interface in a center magnification manner.
Optionally, the first interface includes first display content 11. In response to receiving the first input, the electronic device displays the enlarged first display content 11 within the first display frame centered at a center point of the first display content 11.
Optionally, the first interface includes the second display content 12. In response to receiving the first input, the electronic device displays the enlarged second display content 12 within the second display frame centered at a center point of the second display content 12.
Optionally, the first interface includes a first object. In response to receiving the first input, the electronic device displays the enlarged first object within the first display frame centered on a center point of the first object.
Optionally, the first interface includes a second object. In response to receiving the first input, the electronic device displays the enlarged second object within the second display frame centered on a center point of the second object.
For the case where the display mode of the makeup application is the mixed mode (i.e., the first interface includes the first display frame and the second display frame), in one implementation, in response to receiving the first input (which may act within the first display frame or the second display frame, or may act on a display area outside the first display frame and the second display frame), the electronic device simultaneously magnifies the display content within the first display frame and the second display frame. Illustratively, referring to fig. 10B, the first screen 21 of the display screen of the electronic device displays the first display content 11 (user facial image) and the second display content 12 (simulated makeup image of the user's face). The electronic device receives an operation of the user clicking and dragging the button 100a to the right, and in response to this operation, the electronic device magnifies and displays both the user facial image and the simulated makeup image of the user's face in the center magnification manner. In another implementation, in response to receiving the first input, the electronic device magnifies only the display content within the first display frame or the second display frame. Illustratively, as shown in fig. 10C, the first screen 21 of the display screen displays the first display content 11 (user facial image) and the second display content 12 (simulated makeup image of the user's face). The electronic device receives an operation of the user tapping the screen (the tapped area being within the first display frame), and in response, magnifies and displays the user facial image in the center magnification manner, while the display of the simulated makeup image of the user's face is unchanged. Illustratively, as shown in fig. 10D, the first screen 21 of the display screen displays the first display content 11 (user facial image) and the second display content 12 (simulated makeup image of the user's face). The electronic device receives an operation of the user tapping the screen (the tapped area being within the second display frame), and in response, magnifies and displays the simulated makeup image of the user's face in the center magnification manner, while the display of the user facial image is unchanged.
In some examples, displaying the magnified image within the first display frame is synchronized with displaying the magnified image within the second display frame. For example, the animation of displaying the magnified image in the first display frame is played in synchronization with the animation of displaying the magnified image in the second display frame.
In one implementation, every time the electronic device receives the first input, the display content of the current interface is displayed in an n-fold magnification mode (n is a default value, such as 2, 3, 5, 10, and the like).
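For illustration, the following is a minimal Java sketch of center magnification: the visible source rectangle shrinks n-fold around the center point of the display content, so the center of the content stays at the center of the display frame; the rectangle type and all names are assumptions.

```java
public class CenterZoom {
    static final class Rect {
        double x, y, w, h; // in source-image pixels
        Rect(double x, double y, double w, double h) {
            this.x = x; this.y = y; this.w = w; this.h = h;
        }
        public String toString() {
            return String.format("(%.0f,%.0f %.0fx%.0f)", x, y, w, h);
        }
    }

    // Returns the sub-rectangle of the content to show at magnification n;
    // the rectangle is centered on the content's center point.
    static Rect zoomAboutCenter(Rect content, double n) {
        double cx = content.x + content.w / 2, cy = content.y + content.h / 2;
        double w = content.w / n, h = content.h / n;
        return new Rect(cx - w / 2, cy - h / 2, w, h);
    }

    public static void main(String[] args) {
        // 2x zoom of a 1080x1440 frame keeps the center fixed.
        System.out.println(zoomAboutCenter(new Rect(0, 0, 1080, 1440), 2)); // (270,360 540x720)
    }
}
```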
In another example, the electronic device magnifies the display content in a tracking magnification manner. In this way, the tracking area can always be located at the center of the display. As shown in fig. 11A, the method includes:
S1101, the electronic device displays a first interface of the makeup application.

For the specific steps of S1101, refer to S1001; details are not repeated here.
S1102, the electronic device receives a second input.
Wherein the second input may be the same as the first input or different from the first input.
The second input may include: a voice (e.g., the voice "magnify eyes" or "magnify mouth"), a gesture operation by the user on the display screen (e.g., a two-finger spread operation), and the like.
S1103, the electronic device acquires a tracking area according to the second input.
In one example, the electronic device receives speech and determines the tracking area based on the semantics of the speech. For example, if the voice "magnify eyes" is received, the tracking area is determined to be the display area on the display screen where the eyes of the user are located.
In another example, the electronic device receives a gesture operation by the user on the display screen, and determines the tracking area according to the operation position of the gesture on the display screen. For example, when a two-finger spread operation is received, the midpoint of the line connecting the two contact points on the display screen at the moment the fingers leave the screen is determined as the tracking area. If the operation position of the user's gesture on the display screen differs, the determined tracking area differs.
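A minimal sketch of both ways of acquiring the tracking area (from a voice semantic, and from the two-finger contact points) follows; the method names are hypothetical, and locating the display area of a named facial part is left to the recognition engines.

```java
public class TrackingAreaResolver {
    // From voice: map a semantic such as "magnify eyes" to the named facial
    // part; the face recognition engine is assumed to supply that part's area.
    static String partFromSemantic(String semantic) {
        if (semantic.startsWith("magnify ")) {
            return semantic.substring("magnify ".length()); // e.g. "eyes"
        }
        throw new IllegalArgumentException("not a tracking command: " + semantic);
    }

    // From a two-finger spread: when the fingers leave the screen, the midpoint
    // of the line connecting the two contact points is the tracking area.
    static double[] midpointOfContacts(double x1, double y1, double x2, double y2) {
        return new double[] { (x1 + x2) / 2.0, (y1 + y2) / 2.0 };
    }

    public static void main(String[] args) {
        System.out.println(partFromSemantic("magnify eyes")); // eyes
        double[] p = midpointOfContacts(100, 200, 300, 400);
        System.out.println(p[0] + "," + p[1]); // 200.0,300.0
    }
}
```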
S1104, in response to receiving the second input, the electronic device magnifies and displays the display content of the first interface in the tracking magnification manner.
In some embodiments, the tracking area is located within the first display frame. In response to receiving the second input, the electronic device displays the magnified first display content 11 or the first object within the first display frame, centered on the tracking area. It will be appreciated that depending on the tracking area determined by the second input, different portions of the user facial image are displayed within the first display frame. Illustratively, as shown in fig. 11B, the first screen 21 of the display screen of the electronic device displays the first display content 11 (user facial image) and the second display content 12 (simulated makeup image of the user's face). The electronic device receives a two-finger spread operation within the first display frame (when the fingers leave the display screen, the midpoint of the line connecting the two contact points is at the user's mouth), and in response, the electronic device magnifies and displays the user facial image in the tracking magnification manner. In the magnified user facial image, the user's mouth is located at the center point of the first display frame. Optionally, the electronic device displays the magnified second display content 12 or the second object within the second display frame, centered on the position in the second display frame corresponding to the tracking area. The magnified image displayed within the second display frame corresponds to the magnified image displayed within the first display frame. In one implementation, the magnified image displayed within the second display frame is the simulated makeup image of the magnified image displayed within the first display frame. For example, if the tracking area is the display area where the user's eyes are located, the corresponding position of the tracking area in the second display frame is the display area where the eyes are located in the simulated makeup image. Illustratively, as shown in fig. 11C, the first screen 21 of the display screen displays the first display content 11 (user facial image) and the second display content 12 (simulated makeup image of the user's face). The electronic device receives a two-finger spread operation within the first display frame (when the fingers leave the display screen, the midpoint of the line connecting the two contact points is at the user's mouth), and in response, the electronic device magnifies and displays both the user facial image and the simulated makeup image of the user's face in the tracking magnification manner. In the magnified user facial image, the user's mouth is located at the center point of the first display frame; in the magnified simulated makeup image of the user's face, the user's mouth is located at the center point of the second display frame.
In some embodiments, the tracking area is located within the second display frame. In response to receiving the second input, the electronic device displays the enlarged second display content 12 or the second object within the second display frame centered on the tracking area. Optionally, the electronic device displays the enlarged first display content 11 or the first object in the first display frame with a corresponding position of the tracking area in the first display frame as a center (for example, if the tracking area is a display area where the eyes in the simulated makeup image are located, the corresponding position of the tracking area in the first display frame is a display area where the eyes of the user are located).
In some examples, displaying the magnified image within the first display frame is synchronized with displaying the magnified image within the second display frame. For example, the animation of displaying the magnified image in the first display frame is played in synchronization with the animation of displaying the magnified image in the second display frame.
In one implementation, every time the electronic device receives the second input, the display content of the current interface is displayed in an n-fold magnification mode (n is a default value, such as 2, 3, 5, 10, and the like).
In one implementation, in the dynamic user facial image magnified and displayed within the first display frame, the tracking area remains displayed at the center point of the first display frame; in the dynamic simulated makeup image of the user's face magnified and displayed within the second display frame, the tracking area remains displayed at the center point of the second display frame. Illustratively, the tracking area is the user's mouth. After the camera of the electronic device collects the user facial image, the image tracking engine locates the position of the mouth in each frame of the image through an image recognition algorithm. The curing and magnification control module controls the display, in the first display frame of the display screen, of the magnified user facial image centered on the mouth position. Optionally, the magnified simulated makeup image of the user's face centered on the mouth position is displayed in the second display frame of the display screen. When the user moves within the visual range of the camera, the mouth area is always magnified and displayed on the display screen.
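For illustration, the following is a minimal Java sketch of this per-frame tracking display: each frame is re-centered on the tracked part located by the image tracking engine, so the tracked area stays at the display-frame center as the user moves. The interfaces stand in for the camera, the engine, and the display module, and all names are assumptions.

```java
import java.util.List;

public class TrackingZoomLoop {
    interface Frame {}
    interface TrackingEngine { double[] locateMouth(Frame f); } // image recognition
    interface Display { void show(Frame f, double cx, double cy, double n); }

    // For each camera frame, locate the tracked part (here the mouth) and
    // display the frame magnified n-fold, centered on that position.
    static void renderLoop(Iterable<Frame> camera, TrackingEngine engine,
                           Display display, double n) {
        for (Frame f : camera) {
            double[] mouth = engine.locateMouth(f); // position in this frame
            display.show(f, mouth[0], mouth[1], n); // crop/scale centered there
        }
    }

    public static void main(String[] args) {
        List<Frame> frames = List.of(new Frame() {}, new Frame() {});
        renderLoop(frames,
                f -> new double[] { 540, 960 }, // stand-in for a fixed mouth position
                (f, cx, cy, n) -> System.out.printf("center (%.0f, %.0f) x%.0f%n", cx, cy, n),
                2.0);
    }
}
```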
In some embodiments, a dashed box may be superimposed on the enlarged image to indicate the makeup area and shape and to guide the user in applying makeup. Illustratively, as shown in fig. 11D, a dashed box 110a is superimposed on the mouth area of the enlarged user face simulated makeup image displayed within the second display frame. The dashed box 110a indicates the position and shape of the makeup for the mouth. In some examples, an enlarged user face simulated makeup image is displayed within the second display frame of the electronic device display screen. The electronic device receives an operation in which the user clicks the simulation button so that the simulation button is in the pressed state, and displays a dashed box at the corresponding position of the enlarged user face simulated makeup image according to the makeup part indicated by the makeup step guidance information 132. In other examples, the electronic device displays a user face simulated makeup image that includes a dashed box; when the first input or the second input is received, the dashed box is enlarged as the user face simulated makeup image is enlarged.
In one example, the dashed box may move as the enlarged user face image moves so that the dashed box is always located at the indicated position.
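A hedged sketch of such a guide overlay follows. OpenCV has no dashed-rectangle primitive, so the box is assembled from short line segments; the coordinates, color, and dash length are illustrative and would in practice come from the makeup step guidance information 132:

```python
import cv2

def draw_dashed_box(img, top_left, bottom_right, color=(0, 0, 255), dash=8):
    """Overlay a dashed rectangle (in the spirit of dashed box 110a) that
    marks where, and in what shape, makeup should be applied."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    for x in range(x0, x1, dash * 2):              # top and bottom edges
        cv2.line(img, (x, y0), (min(x + dash, x1), y0), color, 2)
        cv2.line(img, (x, y1), (min(x + dash, x1), y1), color, 2)
    for y in range(y0, y1, dash * 2):              # left and right edges
        cv2.line(img, (x0, y), (x0, min(y + dash, y1)), color, 2)
        cv2.line(img, (x1, y), (x1, min(y + dash, y1)), color, 2)
    return img
```

Because the box coordinates would be recomputed from the tracked landmarks in each frame, the overlay moves with the enlarged face image, as described above.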
The embodiments of this application provide a display method applied to an electronic device: the user can select different makeup parameters on the display interface of the makeup application, so that the second display content 12 (the user face simulated makeup image) presents the corresponding effect. In this way, the user can apply makeup with reference to the effect presented by the simulated makeup image.
In some embodiments, the display interface of the makeup application includes third display content 13, which is used to display makeup auxiliary information. In some examples, the makeup auxiliary information may include a makeup tray 131 containing a plurality of makeup parameters. In one example, referring to fig. 12 (a), the makeup tray 131 includes a "recommend" option 1310, a "local makeup" option 1320, an "overall makeup" option 1330, a "favorites" option 1340, a "custom" option 1350, and the like. The user can click any one of the options to open the corresponding page.
The "overall makeup" page includes one or more overall makeup examples, and each overall makeup example includes the parameters of each local makeup making up that overall makeup. The user may select one of the overall makeup examples so that the second display content 12 presents the corresponding overall makeup effect. Illustratively, the user clicks the "overall makeup" option 1330 shown in fig. 12 (a). In response to the click operation, the electronic device displays the "overall makeup" page 1331 shown in fig. 12 (b). The "overall makeup" page 1331 includes overall makeup examples in styles such as "vintage", "fashion", "fresh", "makeup", "peach blossom", and "smoky". For example, in the "fresh" style of overall makeup, the foundation color is "#1", the eyeliner color is "black 01", the eyebrow color is "light gray", the eyeshadow color is "peach", the lip color is "RD06", the blush color is "light pink", and so on. The user clicks the "fresh" option to select the "fresh" style of overall makeup; accordingly, the simulated makeup of the second display content 12 (the user face simulated makeup image) appears in the "fresh" style, with foundation color "#1", eyeliner color "black 01", eyebrow color "light gray", eyeshadow color "peach", lip color "RD06", blush color "light pink", and so on.
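Conceptually, an overall makeup example packages one parameter per local makeup. Below is a minimal sketch of such a record, using the "fresh" values from the text; the field names and the `render_part` stub are hypothetical, not from the patent:

```python
fresh_style = {
    "name": "fresh",
    "foundation": "#1",
    "eyeliner": "black 01",
    "eyebrow": "light gray",
    "eyeshadow": "peach",
    "lip": "RD06",
    "blush": "light pink",
}

def render_part(image, part, color):
    """Stub: a real renderer would recolor the given facial region."""
    print(f"render {part} in {color}")

def apply_overall_makeup(simulated_image, style):
    # Each local parameter drives the matching region of the simulated
    # makeup image (second display content 12).
    for part, color in style.items():
        if part != "name":
            render_part(simulated_image, part, color)
```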
Optionally, the user clicks the icon of a local makeup within an overall makeup example, and the electronic device displays the corresponding makeup step guidance information 132. Illustratively, the user clicks the "lip gloss" icon in the interface shown in fig. 12 (b), and the electronic device displays the interface shown in fig. 12 (c), which includes the makeup step guidance information "Step 5/12: select the RD06 lipstick and apply it to the upper lip."
The "partial makeup" page includes a plurality of partial makeup options, for example, the "partial makeup" page includes a plurality of options such as "foundation", "eyeliner", "eyebrow", "eyeshadow", "lip gloss", "blush", and the like. Wherein each of the partial makeup options includes one or more makeup parameters. The user may select one of the makeup parameters to cause the second display content 12 to present a corresponding partial makeup effect. Illustratively, as shown in FIG. 13A, the user may click on the "partial make-up" option 1320 of the make-up tray. In response to the clicking operation, the electronic device displays a "partial makeup" page 1321; the "local makeup" page 1321 includes various options such as "foundation", "eyeliner", "eyebrow", "eyeshadow", "lip gloss", and "blush". For example, the user clicks on the "lip gloss" option, and selects the make-up parameter "PK 04"; accordingly, the mouth color of the simulated makeup color of the second display content 12 (the user's facial makeup image) appears as the color indicated by PK 04.
Optionally, the electronic device allows the user to add customized makeup parameters on the "local makeup" page. Illustratively, as shown in fig. 13B, the "local makeup" page 1321 includes a "lip gloss" option, and the "lip gloss" page 1322 includes a plurality of lip gloss colors. The user may long-press (e.g., press the display screen for more than 3 seconds) a blank space of the "lip gloss" page 1322; the electronic device receives the long-press operation on the blank space and displays an add icon 1323. In response to receiving the user's click on the add icon 1323, the electronic device displays a text box 1324 and an "open" button 1325 on the "lip gloss" page 1322. The user may enter a file location or path in the text box 1324 and click the "open" button 1325. The electronic device receives the operation of clicking the "open" button 1325 and, in response, displays a picture 1326 on the "lip gloss" page 1322. Illustratively, the picture 1326 includes the color and name of a "Huawei Red" lip gloss. The electronic device may receive a double-click operation by the user on the picture 1326; in response, the "lip gloss" page 1322 adds a "Huawei Red" option 1327, where the color displayed in the "Huawei Red" option 1327 is the color of the "Huawei Red" lip gloss in the picture 1326, and the name of the option is the name of that lip gloss in the picture 1326 (or is specified by the user).
The "recommendations" page includes one or more integral makeup examples. The one or more integral makeup examples are generated by the electronic device based on facial features of the user. For example, the matching eyebrow shape in the whole makeup example is generated according to the face shape of the user, the corresponding foundation color in the whole makeup example is generated according to the skin color of the user, and the like. Illustratively, as shown in FIG. 14, the user may click on the "recommend" option 1310 of the cosmetic tray. In response to the clicking operation, the electronic device displays a "recommendations" page 1311; the "recommendations" page 1311 includes a variety of overall color cosmetics examples of "shopping," "appointments," "professions," "sports," and so forth. Each of the plurality of integral makeup examples is generated based on facial features of the user. The user may select one of the overall makeup examples to cause the second display content 12 to present a corresponding overall makeup effect.
The "favorites" page includes one or more integral makeup examples. The one or more integral makeup examples are saved on a "favorites" page at the user's option. For example, as shown in fig. 15A (a), in the "make-up overall" page 1331, the electronic device receives an operation of pressing the "fresh" option for a long time (for example, pressing for more than 3 seconds) by the user; alternatively, as shown in fig. 15A (b), the second display content 12 presents the overall makeup effect corresponding to the "fresh" option of the "overall makeup" page 1331. The electronic equipment receives the operation of pressing the display screen for a long time by a user (the pressing area is positioned in the second display frame); the electronic equipment adds the fresh integral color makeup into the collection page. Illustratively, as shown in FIG. 15B, the user may click on the "favorites" option 1340 of the cosmetic tray. In response to the click operation, the electronic device displays a "favorites" page 1341; the "favorites" page 1341 includes various integral makeup examples such as "favorites 1", "favorites 2", "favorites 3", "favorites 4", and "favorites 5". For example, the "favorites 5" option is a "fresh" style of overall color makeup saved by the method of FIG. 15A. The user may select one of the global makeup examples in the "favorites" page 1341 to cause the second display content 12 to present a corresponding global makeup effect.
In some examples, the electronic device may generate a customized overall makeup example, or makeup parameters for a local makeup, from a picture uploaded by the user. The user may select the customized overall makeup example so that the second display content 12 presents the corresponding overall makeup effect, or select the customized makeup parameters so that the second display content 12 presents the corresponding local makeup effect.
In one implementation, the electronic device receives a picture uploaded by the user, where the picture includes a made-up face image. The electronic device (for example, the makeup extraction control module in fig. 4) extracts feature data of the face image in the picture (for example, shape features such as face shape, eyebrow shape, and lip shape, and color features such as foundation color, eyeshadow color, and eyebrow color). The electronic device creates makeup parameters (such as eyeshadow, eyeliner, lip gloss, eyebrow, and blush) based on the feature data. Optionally, the makeup parameters of the various facial parts can be packaged and saved as an overall makeup; optionally, they can each be saved as makeup parameters of a local makeup.
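As a sketch of the extraction idea only (assumptions: NumPy images, and region masks produced by some earlier, unspecified landmark or segmentation step), a color feature can be estimated as the dominant color of the corresponding facial region:

```python
import numpy as np

def extract_color(image: np.ndarray, region_mask: np.ndarray) -> tuple:
    """Estimate one color feature (e.g. lip color) from a masked region.
    The median is less sensitive to specular highlights than the mean."""
    pixels = image[region_mask > 0]
    return tuple(int(c) for c in np.median(pixels, axis=0))

# Hypothetical use, one entry per local makeup:
# makeup_params = {"lip": extract_color(photo, lip_mask),
#                  "eyebrow": extract_color(photo, brow_mask)}
```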
Illustratively, as shown in fig. 16A, the electronic device receives a click operation by the user on the "custom" option 1350. In response, the electronic device displays the "custom" page 1351, on which the user can open a picture. An exemplary "custom" page 1351 includes a text box 1352; the user may enter a picture storage location or path in the text box 1352 and click the "open" button 1353. The electronic device receives the operation of clicking the "open" button 1353 and, in response, displays the "custom" page 1354, which includes one or more pictures. For example, the electronic device receives a click operation by the user on a picture, extracts the feature data of the face image in the picture, and creates makeup parameters from the feature data. Illustratively, the electronic device then displays the "custom" page 1355, which includes the makeup parameters generated by the electronic device from the picture. Optionally, the "custom" page 1355 includes a "save all" button 1356, a "save part" button 1357, and a "cancel" button 1358: the "save all" button 1356 packages the makeup parameters generated from the picture and saves them as an overall makeup; the "save part" button 1357 saves them individually as makeup parameters of local makeups; the "cancel" button 1358 discards them. In one example, as shown in fig. 16B, the electronic device receives a click operation by the user on the "save all" button 1356 and, in response, saves the makeup parameters included on the "custom" page 1355 (foundation color "#3", eyeliner color "black 01", eyebrow color "light brown", eyeshadow color "golden brown", lip color "RD04", blush color "pink") as an overall makeup example; as shown in fig. 16B, a "custom 1" overall makeup example is added to the "overall makeup" page 1331. In another example, as shown in fig. 16C, the electronic device receives a click operation by the user on the "save part" button 1357 and, in response, saves the makeup parameters selected by the user on the "custom" page 1355 (eyeshadow color "golden brown" and blush color "pink orange") individually as makeup parameters of local makeups; as shown in fig. 16C, a "pink orange" option is added to the blush colors of the "local makeup" page 1321, and a "golden brown" option is added to its eyeshadow colors.
It is understood that the above-mentioned picture may include a complete made-up face image, only a partial face image, or only a single facial part (such as an eyebrow, the mouth, or the nose).
In some examples, the electronic device may generate a customized overall makeup example, or makeup parameters for a local makeup, from the display content within the first display frame. Illustratively, the first display frame displays the user's made-up face image. The electronic device receives an operation for extracting the user's facial makeup parameters (for example, a knuckle tap on an area within the first display frame of the display screen), and then generates a customized overall makeup example, or makeup parameters of a local makeup, from the user face image.
In one implementation, the camera of the electronic device captures the user's made-up face image, which is displayed in the first display frame. The electronic device receives the operation for extracting the user's facial makeup parameters, and the electronic device (for example, the makeup extraction control module in fig. 4) extracts feature data of the face in the made-up face image (for example, shape features such as face shape, eyebrow shape, and lip shape, and color features such as foundation color, eyeshadow color, and eyebrow color). The electronic device creates makeup parameters (such as eyeshadow, eyeliner, lip gloss, eyebrow, and blush) based on the feature data. Optionally, the makeup parameters of the various facial parts can be packaged and saved as an overall makeup; optionally, they can each be saved as makeup parameters of a local makeup.
Illustratively, as shown in fig. 16D, the first screen 21 of the display screen of the electronic device displays the user's made-up face image and the user face simulated makeup image. The electronic device receives a knuckle tap by the user on an area within the first display frame of the display screen; in response, it extracts the feature data of the face in the user's made-up face image and creates makeup parameters from the feature data. The electronic device packages and saves the generated makeup parameters as an overall makeup, and a "custom 2" overall makeup example is added to the "overall makeup" page 1331.
In some embodiments, the electronic device may receive a user's modification of the makeup parameters; for example, the user may modify the makeup effect of any part of the user face simulated makeup image within the second display frame, such as the eyebrow shape, lip shape, or eyeshadow color. The electronic device saves the modified makeup parameters. In one implementation, the modified makeup parameters may be saved individually. In another implementation, the whole makeup with the modified parameters may be saved as an overall makeup example.
It can be understood that the electronic device may receive the user's modification of the makeup parameters on the dynamically displayed second display content 12, on the second display content 12 displayed in a frozen (solidified) manner, or on the second display content 12 displayed in an enlarged manner; this is not limited in the embodiments of this application.
Illustratively, as shown in fig. 17, the electronic device receives a user's modification operation on the blush (e.g., a single-finger long-press-and-drag operation or a click-and-drag operation) within the second display frame on the display screen; in response to the modification operation, the shape of the blush in the user face simulated makeup image displayed in the second display frame changes, following the position to which the finger is dragged. Optionally, the display interface within the second display frame includes a "save" icon 1701 and a "cancel" icon 1702; the "save" icon 1701 is used to save the makeup parameters of the modified user face simulated makeup image, and the "cancel" icon 1702 is used to discard them. The electronic device receives a click operation by the user on the "save" icon 1701 and, in response, saves the makeup parameters of the modified user face simulated makeup image. For example, the modified makeup parameters are saved as an overall makeup example, which the user can view in the overall makeup section of the makeup tray.
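A toy model of the "shape follows the finger" behavior, assuming the blush is rendered as a rotated ellipse; the representation and names are illustrative, not the patent's:

```python
import math

def update_blush(center, drag_pos, base_minor_axis=25):
    """Stretch and rotate the blush ellipse so that its outer edge follows
    the dragged finger position."""
    dx, dy = drag_pos[0] - center[0], drag_pos[1] - center[1]
    major = max(math.hypot(dx, dy), 1.0)      # reach of the drag
    angle = math.degrees(math.atan2(dy, dx))  # direction of the drag
    return {"center": center, "axes": (int(major), base_minor_axis), "angle": angle}
```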
In one implementation, a sensor of the electronic device detects the user's single-finger long-press-and-drag operation on the display screen and notifies the sensor service through the sensor driver. The sensor service determines that a single-finger long-press-and-drag operation on the screen has been received, generates a makeup modification event, and reports the event (which includes the finger drag position) to the event distribution management service. The event distribution management service distributes the makeup modification event to the makeup area layout display module, which generates the modified user face simulated makeup image according to the finger drag position. Alternatively, a sensor of the electronic device detects the user's click operation on the "save" icon 1701 on the display screen and notifies the sensor service through the sensor driver. The sensor service determines that a click operation on the "save" icon 1701 has been received, generates a save-makeup-parameters event, and reports it to the event distribution management service. The event distribution management service distributes the event to the makeup extraction control module, which analyzes the user face simulated makeup image in the second display frame to obtain the makeup parameters and saves them.
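The reported chain (sensor driver, sensor service, event distribution management service, then the layout and extraction modules) is an ordinary publish-subscribe pattern. A minimal sketch, with illustrative class and method names that are not from the patent:

```python
class MakeupAreaLayoutModule:
    def redraw(self, drag_pos):
        print(f"re-render simulated makeup; blush follows {drag_pos}")

class MakeupExtractionModule:
    def save(self, frame_image):
        print("analyze the simulated makeup image and store its parameters")

class EventDistributionService:
    def __init__(self):
        self.handlers = {}

    def register(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, event_type, payload):
        for handler in self.handlers.get(event_type, []):
            handler(payload)

bus = EventDistributionService()
layout, extraction = MakeupAreaLayoutModule(), MakeupExtractionModule()
bus.register("modify_makeup", lambda e: layout.redraw(e["drag_pos"]))
bus.register("save_makeup", lambda e: extraction.save(e.get("image")))

# The sensor service would report, e.g.:
bus.dispatch("modify_makeup", {"drag_pos": (412, 306)})
```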
Optionally, in some embodiments, when the electronic device detects that the surrounding environment is dark (for example, the ambient light brightness detected by a sensor of the electronic device is less than a preset value), the electronic device displays a fill-light effect (for example, a highlighted color ring or color lamp) in the peripheral area of its display screen, as shown in fig. 18. In this way, the screen provides fill light while the user applies makeup or looks into the mirror, further improving the user experience.
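A sketch of the fill-light rule under stated assumptions: `show_edge_ring` and `hide_edge_ring` are hypothetical display-driver hooks, and the 50 lx threshold is an invented placeholder for the preset value:

```python
LUX_THRESHOLD = 50.0  # hypothetical preset value, in lux

def update_fill_light(ambient_lux: float, screen) -> None:
    # Light a bright ring along the screen's peripheral area when the
    # room is dark, so the display doubles as a makeup-mirror fill light.
    if ambient_lux < LUX_THRESHOLD:
        screen.show_edge_ring(color="warm_white", brightness=0.9)
    else:
        screen.hide_edge_ring()
```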
It is understood that, in order to realize the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of this application.
In the embodiments of this application, the electronic device may be divided into functional modules according to the above method examples; for example, each function may be assigned its own functional module, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division into modules in the embodiments of this application is illustrative and is merely a division by logical function; other divisions are possible in actual implementation.
Where an integrated unit is used, fig. 19 shows a schematic diagram of a possible structure of the electronic device involved in the above embodiments. The electronic device 2000 includes: a processing unit 2001, a display unit 2002, and a storage unit 2003.
The processing unit 2001 is configured to control and manage the operations of the electronic device 2000. For example, it can be used to execute the processing steps of freezing (solidifying) the dynamic display content of the makeup application display interface in the embodiments of this application; the processing steps of magnifying the display content of the makeup application display interface; extracting makeup parameters, controlling the makeup area layout, controlling the makeup tray layout, controlling makeup mode switching, controlling the makeup mirror fill-light processing, and the like; and/or other processes for the techniques described herein.
A display unit 2002 is configured to display the interface of the electronic device 2000. For example, it can be used to display the user face image in the first display frame and the user face simulated makeup image in the second display frame, to display the makeup auxiliary information, and the like.
A storage unit 2003 for storing program codes and data of the electronic apparatus 2000.
Of course, the unit modules in the electronic device 2000 include, but are not limited to, the processing unit 2001, the display unit 2002, and the storage unit 2003. For example, the electronic device 2000 may further include a detection unit. The detection unit may be used to detect a user's motion, gesture, etc.
The processing unit 2001 may be a processor or a controller, such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may include an application processor and a baseband processor. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The display unit 2002 may be a display screen. The storage unit 2003 may be a memory. The detection unit may be a sensor, a touch device, a camera, or the like.
For example, the processing unit 2001 is a processor (e.g., the processor 110 shown in fig. 3), the display unit 2002 is a display screen (e.g., the display screen 194 shown in fig. 3, the display screen 194 may be a touch screen, and a display panel and a touch panel may be integrated in the touch screen), the storage unit 2003 may be a memory (e.g., the internal memory 121 shown in fig. 3), and the detection unit may include a sensor (e.g., the sensor module 180 shown in fig. 3) and a camera (e.g., the camera 193 shown in fig. 3). The electronic device 2000 provided in the embodiment of the present application may be the electronic device 300 shown in fig. 3. Wherein the processor, the display screen, the memory, etc. may be coupled together, for example, by a bus connection.
The embodiment of the present application further provides a computer-readable storage medium, where a computer program code is stored in the computer-readable storage medium, and when the computer program code is executed by the processor, the electronic device executes the relevant method steps in the embodiments.
The embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to execute the relevant method steps in the above embodiments.
The electronic device 2000, the computer-readable storage medium, and the computer program product provided in the embodiments of this application are all configured to execute the corresponding methods provided above; therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical functional division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (30)

1. A display method applied to an electronic device, comprising:
a first area of a display screen of the electronic equipment displays a user face image acquired by a camera;
a second area of the display screen of the electronic equipment displays a user face simulation makeup image generated according to the user face image;
receiving a first input;
in response to the first input, displaying an enlarged image of at least a portion of the user's facial image in the first region.
2. The method of claim 1, wherein receiving the first input comprises:
receiving a first input acting on the first area; or
receiving a first input acting on the second area.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
in response to the first input, displaying a magnified image of at least a portion of the user's face simulated cosmetic image in the second area.
4. The method of claim 3, further comprising:
if the first input acts on a first position of a display screen of the electronic equipment, displaying a first image in the first area, and displaying a simulated makeup image of the first image in the second area; the first image is an enlarged image of a portion of the user's facial image;
if the first input acts on a second position of the display screen of the electronic equipment, displaying a second image in the first area, and displaying a simulated makeup image of the second image in the second area; the second image is an enlarged image of a portion of the user's facial image;
wherein the first location is different from the second location and the first image is different from the second image.
5. The method of claim 3 or 4, wherein the portion of the user face image corresponds to the portion of the user face simulated cosmetic image.
6. The method according to any one of claims 3 to 5, wherein the user's face simulation makeup image includes first indication information indicating a makeup part and a shape; the method further comprises the following steps:
magnifying the first indication information as the user face simulation makeup image is magnified.
7. The method according to any one of claims 1 to 6,
wherein, if an object image blocking the user's face is superimposed on the user face image, no simulated makeup image is displayed on that blocking object image in the user face simulation makeup image.
8. The method according to any one of claims 1 to 7,
the first area of the electronic equipment display screen displays the facial image of the user collected by the camera, and the method comprises the following steps:
a first area of the display screen of the electronic equipment displays a first static image acquired according to a user face image acquired by a camera;
the step of displaying the user face simulation makeup image generated according to the user face image in the second area of the electronic equipment display screen comprises the following steps:
a second area of the display screen of the electronic device displays a second static image formed by simulating makeup on the user face image.
9. The method according to any one of claims 1-8, further comprising:
displaying the first information in a third area of the display screen of the electronic equipment;
receiving input operation of a user on the first information; and changing the simulated makeup image of the face of the user according to the first information.
10. The method of claim 9, wherein the first information is generated from features of a user's facial image.
11. The method of claim 9, further comprising:
receiving modification operation of a user on the user face simulation makeup image;
and generating the first information according to the modified user face simulation makeup image.
12. An electronic device, characterized in that the electronic device comprises:
a memory;
a processor invoking one or more computer programs stored in the memory, the one or more computer programs comprising instructions that, when executed by the processor, cause the electronic device to perform:
a first area of the electronic equipment display screen displays a user face image collected by a camera;
a second area of the electronic equipment display screen displays a user face simulation makeup image generated according to the user face image;
receiving a first input;
in response to the first input, displaying an enlarged image of at least a portion of the user's facial image in the first region.
13. The electronic device of claim 12, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
receiving a first input acting on the first area; or
receiving a first input acting on the second area.
14. The electronic device of claim 12 or 13, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
in response to the first input, displaying a magnified image of at least a portion of the user's face simulated cosmetic image in the second area.
15. The electronic device of claim 14, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
if the first input acts on a first position of a display screen of the electronic equipment, displaying a first image in the first area, and displaying a simulated makeup image of the first image in the second area; the first image is an enlarged image of a portion of the user's facial image;
if the first input acts on a second position of the display screen of the electronic equipment, displaying a second image in the first area, and displaying a simulated makeup image of the second image in the second area; the second image is an enlarged image of a portion of the user's facial image;
wherein the first location is different from the second location and the first image is different from the second image.
16. The electronic device of claim 14 or 15, wherein the portion of the user face image corresponds to the portion of the user face simulated cosmetic image.
17. The electronic device according to any one of claims 14 to 16, wherein the user face simulation makeup image includes first indication information thereon, the first indication information indicating a makeup part and shape; and the first indication information is magnified as the user face simulation makeup image is magnified.
18. The electronic device of any of claims 12-17, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
a first area of the display screen of the electronic equipment displays a first static image acquired according to a user face image acquired by a camera;
and a second area of the display screen of the electronic device displays a second static image formed by simulating makeup on the user face image.
19. The electronic device of any of claims 12-18, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
a third area of the display screen of the electronic equipment displays first information;
receiving input operation of a user on the first information; and changing the simulated makeup image of the face of the user according to the first information.
20. The electronic device of claim 19, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
the first information is generated according to features of a user face image.
21. The electronic device of claim 19, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
receiving modification operation of a user on the user face simulation makeup image;
and generating the first information according to the modified user face simulation makeup image.
22. A method for displaying a graphical user interface, comprising:
the electronic device displays a first Graphical User Interface (GUI);
the first region of the first GUI comprises a user facial image captured by a camera;
the second region of the first GUI comprises a user face simulation cosmetic image generated from the user face image;
in response to receiving the first input, the electronic device displays a second GUI;
the first region of the second GUI comprises a magnified image of at least a portion of the user's facial image;
the first area of the second GUI and the first area of the first GUI are the same display area on a display screen.
23. The method of claim 22,
a second area of the second GUI comprises a magnified image of at least a portion of the user face simulation makeup image; and the second area of the second GUI and the second area of the first GUI are the same display area on the display screen.
24. The method of claim 23,
if the first input acts on a first position of the display screen of the electronic equipment, a first image is displayed in a first area of the second GUI, and a simulated makeup image of the first image is displayed in a second area of the second GUI; the first image is an enlarged image of a portion of the user's facial image;
if the first input acts on a second position of the display screen of the electronic equipment, displaying a second image in a first area of the second GUI, and displaying a simulated makeup image of the second image in a second area of the second GUI; the second image is an enlarged image of a portion of the user's facial image;
wherein the first location is different from the second location and the first image is different from the second image.
25. The method of claim 23 or 24, wherein the portion of the user facial image corresponds to the portion of the user facial simulated cosmetic image.
26. The method of any one of claims 23-25,
the second area of the first GUI further includes first indication information for indicating a makeup part and a shape;
the second area of the second GUI further includes the first indication information displayed in an enlarged manner, the first indication information being magnified as the user face simulation makeup image is magnified.
27. The method of any one of claims 22-26,
wherein, if the first area of the first GUI includes an object image blocking the user's face, the corresponding blocking object image in the second area of the first GUI does not display the simulated makeup image.
28. The method of any one of claims 22-27,
a first region of the first GUI comprises a first still image obtained from a user's facial image captured by a camera;
the second region of the first GUI includes a second still image formed by simulating makeup on the user face image.
29. The method of any of claims 22-28, wherein the third region of the first GUI includes first information, the method further comprising:
receiving input operation of a user on the first information;
responding to the input operation of the user on the first information, and displaying a third GUI by the electronic equipment; a second region of the third GUI includes a simulated cosmetic image of the user's face formed from the first information; and the second area of the third GUI and the second area of the first GUI are the same display area on the display screen.
30. A computer-readable storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the method of any of claims 1-11.
CN202010880184.0A 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment Active CN114115617B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010880184.0A CN114115617B (en) 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment
PCT/CN2021/108283 WO2022042163A1 (en) 2020-08-27 2021-07-23 Display method applied to electronic device, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010880184.0A CN114115617B (en) 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment

Publications (2)

Publication Number Publication Date
CN114115617A true CN114115617A (en) 2022-03-01
CN114115617B CN114115617B (en) 2024-04-12

Family

ID=80352610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010880184.0A Active CN114115617B (en) 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment

Country Status (2)

Country Link
CN (1) CN114115617B (en)
WO (1) WO2022042163A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315165B (en) * 2023-11-28 2024-03-12 成都白泽智汇科技有限公司 Intelligent auxiliary cosmetic display method based on display interface

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000076398A1 (en) * 1999-06-14 2000-12-21 The Procter & Gamble Company Skin imaging and analysis systems and methods
CN109658167A (en) * 2017-10-10 2019-04-19 阿里巴巴集团控股有限公司 Try adornment mirror device and its control method, device
US20190182432A1 (en) * 2017-12-13 2019-06-13 Canon Kabushiki Kaisha Display control apparatus and control method for the same
CN110045872A (en) * 2019-04-25 2019-07-23 廖其锋 Daily smart mirror and application method
CN111047384A (en) * 2018-10-15 2020-04-21 北京京东尚科信息技术有限公司 Information processing method of intelligent device and intelligent device
CN111553220A (en) * 2020-04-21 2020-08-18 海信集团有限公司 Intelligent device and data processing method


Also Published As

Publication number Publication date
CN114115617B (en) 2024-04-12
WO2022042163A1 (en) 2022-03-03

Similar Documents

Publication Publication Date Title
CN111124561B (en) Display method applied to electronic equipment with folding screen and electronic equipment
CN112217923B (en) Display method of flexible screen and terminal
CN112130742B (en) Full screen display method and device of mobile terminal
WO2020077511A1 (en) Method for displaying image in photographic scene and electronic device
CN112445448B (en) Flexible screen display method and electronic equipment
CN109274828B (en) Method for generating screenshot, control method and electronic equipment
CN111669459B (en) Keyboard display method, electronic device and computer readable storage medium
CN112887583B (en) Shooting method and electronic equipment
CN111078091A (en) Split screen display processing method and device and electronic equipment
WO2020029306A1 (en) Image capture method and electronic device
CN110633043A (en) Split screen processing method and terminal equipment
CN113973189B (en) Display content switching method, device, terminal and storage medium
WO2020118490A1 (en) Automatic screen-splitting method, graphical user interface, and electronic device
CN113170037A (en) Method for shooting long exposure image and electronic equipment
CN114089932A (en) Multi-screen display method and device, terminal equipment and storage medium
WO2021042878A1 (en) Photography method and electronic device
CN113986070A (en) Quick viewing method for application card and electronic equipment
WO2022143180A1 (en) Collaborative display method, terminal device, and computer readable storage medium
CN110286975B (en) Display method of foreground elements and electronic equipment
WO2022042163A1 (en) Display method applied to electronic device, and electronic device
WO2022078116A1 (en) Brush effect picture generation method, image editing method and device, and storage medium
CN113495733A (en) Theme pack installation method and device, electronic equipment and computer readable storage medium
CN113448658A (en) Screen capture processing method, graphical user interface and terminal
WO2023207844A1 (en) Dynamic wallpaper display method and apparatus, and electronic device
CN115808997A (en) Preview method, electronic equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant