CN114115617B - Display method applied to electronic equipment and electronic equipment - Google Patents

Display method applied to electronic equipment and electronic equipment

Info

Publication number
CN114115617B
CN114115617B (application CN202010880184.0A)
Authority
CN
China
Prior art keywords
user
image
electronic device
makeup
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010880184.0A
Other languages
Chinese (zh)
Other versions
CN114115617A (en)
Inventor
高凌云
罗红磊
刘海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010880184.0A priority Critical patent/CN114115617B/en
Priority to PCT/CN2021/108283 priority patent/WO2022042163A1/en
Publication of CN114115617A publication Critical patent/CN114115617A/en
Application granted granted Critical
Publication of CN114115617B publication Critical patent/CN114115617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041: Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484: Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F 3/0486: Drag-and-drop
    • G06F 3/0487: Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/06: Buying, selling or leasing transactions
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/80: Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application disclose a display method applied to an electronic device, and the electronic device itself, in the field of terminal technology. The method comprises: displaying, in a first area of the display screen of the electronic device, a user face image captured by a camera; displaying, in a second area of the display screen, a simulated makeup image of the user's face generated from the user face image; and receiving a first input and, in response to the first input, displaying an enlarged image of at least part of the user face image in the first area and an enlarged image of at least part of the simulated makeup image in the second area. The user can thus apply makeup with reference to the simulated makeup image of the face, and both the user face image and the simulated makeup image can be displayed enlarged, making them easier to examine and helping the user achieve a refined makeup look.

Description

Display method applied to electronic equipment and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a display method applied to electronic equipment and the electronic equipment.
Background
As mobile phones become more and more capable, they have found their way into every aspect of daily life. For example, a mobile phone can serve as a mirror. It can also serve as a makeup kit, displaying in real time a beautified image of the user's face with makeup applied so as to guide the user while making up. When a mobile phone is used to assist with makeup, improving the user experience, providing comprehensive auxiliary guidance, and meeting the user's varied needs are the goals this application pursues.
Disclosure of Invention
The display method applied to an electronic device, and the electronic device, provided by the present application can assist a user with makeup in an all-round way and help the user achieve a refined makeup look.
In order to achieve the above purpose, the present application adopts the following technical solutions:
In a first aspect, the present application provides a display method applied to an electronic device. The method may include: displaying, in a first area of a display screen of the electronic device, a user face image captured by a camera; displaying, in a second area of the display screen, a simulated makeup image of the user's face generated from the user face image; receiving a first input; and, in response to the first input, displaying an enlarged image of at least part of the user face image in the first area.
With this method, the user face image alone may be enlarged so that the user can examine facial details; or the user face image and the simulated makeup image of the user's face may be enlarged at the same time, so that the user can examine the details of both and conveniently compare the user face image with the simulated makeup image.
With reference to the first aspect, in one possible design, receiving the first input includes: receiving a first input acting on the first area; or receiving a first input acting on the second area.
With reference to the first aspect, in one possible design, the method further includes: in response to the first input, displaying an enlarged image of at least part of the simulated makeup image of the user's face in the second area. In this method, the user face image and the simulated makeup image are enlarged simultaneously in response to the first input. In one possible design, the enlargement of the user face image displayed in the first area and the enlargement of the simulated makeup image displayed in the second area are shown synchronously.
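As a rough illustration of such synchronized enlargement (not part of the claimed method), the Kotlin sketch below keeps a single zoom state and lets both display areas read it, so the face image and the simulated makeup image are always magnified together. All names, such as MakeupMirrorState and onFirstInput, are invented for this example.

```kotlin
// Minimal sketch: one shared zoom state drives both display areas.
// Names and structure are illustrative only.

data class ZoomState(val factor: Float = 1.0f, val centerX: Float = 0.5f, val centerY: Float = 0.5f)

class MakeupMirrorState {
    var zoom = ZoomState()
        private set

    // "First input" handler, e.g. a double tap or pinch on either area.
    fun onFirstInput(newFactor: Float, normX: Float = 0.5f, normY: Float = 0.5f) {
        zoom = ZoomState(newFactor.coerceIn(1.0f, 8.0f), normX, normY)
    }

    // Both areas read the same state, so their magnification stays in sync.
    fun zoomForFaceArea(): ZoomState = zoom
    fun zoomForMakeupArea(): ZoomState = zoom
}

fun main() {
    val state = MakeupMirrorState()
    state.onFirstInput(newFactor = 3.0f)   // enlarge in response to the first input
    println(state.zoomForFaceArea())       // same factor for the user face image...
    println(state.zoomForMakeupArea())     // ...and for the simulated makeup image
}
```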
With reference to the first aspect, in one possible design, displaying the enlarged image of at least part of the user face image in the first area includes: displaying, in the first area, an enlarged image of at least part of the user face image centered on the center point of the user face image; and displaying the enlarged image of at least part of the simulated makeup image in the second area includes: displaying, in the second area, an enlarged image of at least part of the simulated makeup image centered on the center point of the simulated makeup image.
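One way to realize enlargement about the image center, assuming the enlarged view is produced by cropping a source image and stretching the crop to fill the display area, is sketched below; the calculation is illustrative and not taken from the patent.

```kotlin
// Crop window centered on the image center for a given zoom factor (illustrative).
data class CropRect(val left: Int, val top: Int, val width: Int, val height: Int)

fun centeredCrop(imageWidth: Int, imageHeight: Int, zoom: Float): CropRect {
    require(zoom >= 1f) { "zoom must be >= 1" }
    val w = (imageWidth / zoom).toInt()
    val h = (imageHeight / zoom).toInt()
    // Keep the window centered on the image center point.
    return CropRect((imageWidth - w) / 2, (imageHeight - h) / 2, w, h)
}

fun main() {
    // A 1080 x 1440 face image enlarged 2x shows only its central 540 x 720 window.
    println(centeredCrop(1080, 1440, 2.0f))  // CropRect(left=270, top=360, width=540, height=720)
}
```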
With reference to the first aspect, in one possible design, if the first input acts on a first position of the display screen of the electronic device, a first image is displayed in the first area and a simulated makeup image of the first image is displayed in the second area, the first image being an enlarged image of part of the user face image; if the first input acts on a second position of the display screen, a second image is displayed in the first area and a simulated makeup image of the second image is displayed in the second area, the second image being an enlarged image of part of the user face image; the first position differs from the second position, and the first image differs from the second image. That is, which part of the user face image and which part of the simulated makeup image are enlarged is determined by the position of the first input on the display screen: the image area the user wants to enlarge is determined from the first input, and the image is enlarged and displayed centered on that area. In one possible design, as the user's face moves, the centers of the enlarged part of the user face image and of the enlarged part of the simulated makeup image do not move.
With reference to the first aspect, in one possible design, the enlarged part of the user face image corresponds to the enlarged part of the simulated makeup image; that is, the two enlarged images show the same portion of the user's face. This makes it easier for the user to compare the user face image with the simulated makeup image.
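A hedged sketch of how the position of the first input could select the enlarged region, and how the same normalized region could then be applied to both images so that the enlarged parts correspond. The clamping behaviour and all names are assumptions, not the patent's implementation.

```kotlin
// Region of the source image to enlarge, in normalized coordinates (0..1).
data class NormRegion(val centerX: Float, val centerY: Float, val halfW: Float, val halfH: Float)

// Map a touch position (normalized within either display area) to the region to enlarge,
// clamped so the window never leaves the image.
fun regionFromTouch(touchX: Float, touchY: Float, zoom: Float): NormRegion {
    val halfW = 0.5f / zoom
    val halfH = 0.5f / zoom
    return NormRegion(touchX.coerceIn(halfW, 1f - halfW), touchY.coerceIn(halfH, 1f - halfH), halfW, halfH)
}

fun main() {
    // First input near the left eyebrow area, 2x zoom.
    val region = regionFromTouch(0.30f, 0.35f, zoom = 2.0f)
    // The same normalized region is used for the user face image and for the
    // simulated makeup image, so the two enlarged views show the same part of the face.
    println("face area shows $region")
    println("makeup area shows $region")
}
```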
With reference to the first aspect, in one possible design, the simulated makeup image of the user's face carries first indication information that indicates the position and shape of makeup to be applied, and the first indication information is enlarged together with the simulated makeup image. The first indication information may, for example, be a dashed box. This provides the user with more complete makeup guidance and makes it easy to apply makeup as indicated.
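To illustrate how such an indication marker could stay aligned while the image is enlarged, the sketch below stores the marker in normalized image coordinates and pushes it through the same crop-and-scale transform as the image; the coordinate convention is an assumption for this example only.

```kotlin
// Re-project a makeup indication marker (e.g. a dashed eyebrow box) so it is
// enlarged together with the image it annotates. Coordinates are normalized (0..1).

data class Pt(val x: Float, val y: Float)

fun projectMarker(
    marker: List<Pt>,                                            // marker vertices in image coordinates
    cropLeft: Float, cropTop: Float, cropW: Float, cropH: Float, // enlarged sub-region of the image
    viewW: Float, viewH: Float                                   // pixel size of the display area
): List<Pt> = marker.map { p ->
    Pt((p.x - cropLeft) / cropW * viewW, (p.y - cropTop) / cropH * viewH)
}

fun main() {
    val eyebrowBox = listOf(Pt(0.30f, 0.35f), Pt(0.45f, 0.35f), Pt(0.45f, 0.40f), Pt(0.30f, 0.40f))
    // 2x enlargement of the upper-left quarter of the image, shown in a 540 x 720 area:
    println(projectMarker(eyebrowBox, 0.0f, 0.0f, 0.5f, 0.5f, 540f, 720f))
}
```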
With reference to the first aspect, in one possible design, if an image of an object occluding the user's face is superimposed on the user face image, no simulated makeup is displayed on the occluding object in the simulated makeup image. This gives the user a more immersive experience.
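One plausible, purely illustrative way to honour this occlusion rule is to composite the simulated makeup only where a per-pixel face-visibility mask says the face is not covered; how that mask is obtained (for example from a segmentation step) is outside this sketch.

```kotlin
// Composite simulated makeup only onto pixels where the face is visible.
// Pixels are plain ARGB Ints; faceVisible is a per-pixel mask from some face segmentation step.
fun compositeMakeup(camera: IntArray, makeup: IntArray, faceVisible: BooleanArray): IntArray {
    require(camera.size == makeup.size && camera.size == faceVisible.size)
    // Where an object (hand, brush, ...) occludes the face, keep the camera pixel unchanged.
    return IntArray(camera.size) { i -> if (faceVisible[i]) makeup[i] else camera[i] }
}
```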
With reference to the first aspect, in one possible design, displaying, in the first area of the display screen of the electronic device, the user face image captured by the camera includes: displaying, in the first area, a first still image obtained from the user face image captured by the camera; and displaying, in the second area of the display screen, the simulated makeup image generated from the user face image includes: displaying, in the second area, a second still image formed by simulating makeup on the user face image. In this method, the electronic device can enlarge the frozen (statically displayed) user face image and simulated makeup image, which is convenient for the user to examine.
With reference to the first aspect, in one possible design, a third area of the display screen of the electronic device displays first information; the electronic device receives a user input operation on the first information; and the simulated makeup image of the user's face is changed according to the first information. That is, the electronic device may provide multiple sets of cosmetic parameters, and the user may select different cosmetic parameters to form different simulated makeup images of the user's face.
In one possible design, the first information is generated from features of the user face image. In one possible design, the first information is generated from a picture. In one possible design, the first information is generated from the makeup look in the user face image. In one possible design, the method includes receiving a user modification of the simulated makeup image and generating the first information from the modified simulated makeup image.
In the method, the cosmetic parameters can be preset by the electronic equipment, set according to the user characteristics, or modified by the user.
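As a data-model sketch of such selectable cosmetic parameters (the "first information"), one could keep each set as a small value object and regenerate the simulated makeup image whenever the user picks or edits a set; every field name below is invented for illustration.

```kotlin
// Illustrative model of one selectable set of cosmetic parameters.
data class MakeupParams(
    val name: String,
    val lipColor: Long,        // ARGB color, e.g. 0xFFC0392B
    val blushStrength: Float,  // 0..1
    val eyebrowShape: String   // identifier of a shape template
)

// Possible origins of a parameter set, mirroring the designs above: preset on the
// device, derived from features of the user face image, derived from a picture,
// or produced by the user's own modification.
fun presetParams(): List<MakeupParams> = listOf(
    MakeupParams("natural", 0xFFD98880L, 0.3f, "soft-arch"),
    MakeupParams("evening", 0xFFC0392BL, 0.6f, "straight")
)

fun applyUserEdit(base: MakeupParams, newBlush: Float): MakeupParams =
    base.copy(blushStrength = newBlush.coerceIn(0f, 1f))
```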
In a second aspect, the present application provides a display method applied to an electronic device. The method may include: starting a first application; displaying, in a first area of a display screen of the electronic device, a user face image captured by the electronic device; displaying, in a second area of the display screen, a simulated makeup image of the user's face generated from the user face image; receiving a first operation; in response to the first operation, stopping the display of the user face image; and displaying a first object in the first area, the first object being a still image or a short video obtained from the user face image. In this method, the dynamically displayed user face image can be frozen and displayed as a still picture or short video, which is convenient for the user to examine the makeup on the face.
With reference to the second aspect, in one possible design, in response to the first operation, the display of the simulated makeup image is also stopped, and a second object is displayed in the second area, the second object being a still image or a short video obtained from the simulated makeup image of the user's face. In this method, the dynamically displayed simulated makeup image can be frozen and displayed as a still picture or short video, which is convenient for the user to check the makeup effect.
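The freezing behaviour described in this and the preceding paragraph could be approximated as below: when the first operation arrives, the controller stops accepting camera frames and keeps either the last frame (a still image) or the most recently buffered frames (a short clip). The buffer length and the Frame placeholder type are assumptions for this sketch.

```kotlin
// Sketch: freeze the live preview into a still image or a short clip on the first operation.
// Frame stands in for whatever image type the camera pipeline delivers.
class FreezeController<Frame>(private val clipLength: Int = 90) {
    private val recent = ArrayDeque<Frame>()

    var frozen: List<Frame>? = null   // null while the live preview is still running
        private set

    fun onCameraFrame(frame: Frame) {
        if (frozen != null) return            // display stopped; ignore new frames
        recent.addLast(frame)
        if (recent.size > clipLength) recent.removeFirst()
    }

    // First operation: keep the last frame (still image) or the buffered frames (short video).
    fun freeze(asClip: Boolean) {
        frozen = if (asClip) recent.toList() else listOfNotNull(recent.lastOrNull())
    }
}
```

The same controller could be instantiated twice, once for the face-image area and once for the simulated-makeup area, so both freeze on the same operation.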
In a third aspect, the present application provides an electronic device comprising a display screen, a camera, an input device, and a processor. The camera is configured to capture a user face image; a first area of the display screen is configured to display the user face image captured by the camera; a second area of the display screen is configured to display a simulated makeup image of the user's face generated from the user face image; the input device is configured to receive a first input; and the processor is configured to control the first area of the display screen to display an enlarged image of at least part of the user face image in response to the first input.
With reference to the third aspect, in one possible design, the input device receiving the first input includes: the input device receives a first input acting on the first region; or the input device receives a first input that acts on the second region.
With reference to the third aspect, in one possible design, the processor is further configured to control the second area of the display screen to display an enlarged image of at least a portion of the user's face simulated make-up carrying image in response to the first input.
With reference to the third aspect, in one possible design, the processor is specifically configured to: control the first area of the display screen to display an enlarged image of at least part of the user face image centered on the center point of the user face image; and control the second area of the display screen to display an enlarged image of at least part of the simulated makeup image centered on the center point of the simulated makeup image.
With reference to the third aspect, in one possible design, the processor is further configured to control the second area of the display screen to display the first image if it is determined that the first input is applied to the first position of the display screen; if the first input is determined to act on the second position of the display screen of the electronic device, controlling a second area of the display screen to display a second image; wherein the first location is different from the second location and the first image is different from the second image.
In one possible design, the partial user face image corresponds to a partial user face simulated makeup image.
With reference to the third aspect, in one possible design, the user's face simulation makeup image includes first indication information thereon, the first indication information being used to indicate a makeup location and shape; the first indication information is enlarged as the user's face simulated make-up carrying image is enlarged.
With reference to the third aspect, in one possible design, the processor is further configured to: acquiring a first static image according to a user face image acquired by a camera; simulating makeup on the facial image of the user to form a second static image; the display screen is also used for: displaying a first still image in a first area; and displaying the second static image in a second area.
With reference to the third aspect, in one possible design, a third area of the display screen is used to display the first information; the input device is also used for receiving the input operation of the user on the first information; the processor is also configured to change the simulated makeup image of the user's face based on the first information.
With reference to the third aspect, in one possible design, the processor is further configured to generate the first information according to a feature of the user face image.
With reference to the third aspect, in one possible design, the input device is further configured to receive a modification operation of the user on the face-simulated makeup-carrying image of the user; the processor is further configured to generate first information from the modified user face simulated makeup image.
In a fourth aspect, the present application provides a method for displaying a graphical user interface, the method comprising: the electronic device displays a first graphical user interface GUI; the first area of the first GUI comprises a user face image acquired by a camera; the second region of the first GUI includes a user facial simulated makeup-carrying image generated from the user facial image; in response to receiving the first input, the electronic device displays a second GUI; the first region of the second GUI includes a magnified image of at least a portion of the user's facial image; the first area of the second GUI and the first area of the first GUI are the same display area on the display screen.
With reference to the fourth aspect, in one possible design, the second area of the second GUI includes a magnified image of at least a portion of the user's face simulated make-up image; the second area of the second GUI and the second area of the first GUI are the same display area on the display screen.
With reference to the fourth aspect, in one possible design, a center point of the user face image coincides with a center point of a part of the user face image; the center point of the user's face simulated make-up image coincides with the center point of a portion of the user's face simulated make-up image.
With reference to the fourth aspect, in one possible design, if the first input is applied to the first position of the display screen of the electronic device, the first area of the second GUI displays a first image, and the second area of the second GUI displays a simulated makeup-carrying image of the first image; the first image is an enlarged image of a portion of the user face image; if the first input acts on the second position of the display screen of the electronic device, displaying a second image in the first area of the second GUI, and displaying a simulated makeup carrying image of the second image in the second area of the second GUI; the second image is an enlarged image of a portion of the user's facial image; wherein the first location is different from the second location and the first image is different from the second image.
With reference to the fourth aspect, in one possible design, the partial user face image corresponds to a partial user face simulated make-up image.
With reference to the fourth aspect, in one possible design, the second area of the first GUI further includes first indication information indicating the position and shape of makeup to be applied; the second area of the second GUI further includes the first indication information, displayed enlarged as the simulated makeup image is enlarged.
With reference to the fourth aspect, in one possible design, if the first area of the first GUI includes an image of an object occluding the user's face, no simulated makeup is displayed on the occluding object image in the second area of the first GUI.
With reference to the fourth aspect, in one possible design, the first area of the first GUI includes a first still image obtained from the user face image captured by the camera; the second area of the first GUI includes a second still image formed by simulating makeup on the user face image.
With reference to the fourth aspect, in one possible design, the third area of the first GUI includes the first information, and the method further includes: receiving input operation of a user on first information; responding to the input operation of the user on the first information, and displaying a third GUI by the electronic equipment; the second area of the third GUI includes a simulated makeup-carrying image of the user's face formed from the first information; the second area of the third GUI and the second area of the first GUI are the same display area on the display screen.
In a fifth aspect, the present application provides an electronic device, where the electronic device may implement the display method applied to the electronic device described in any one of the first aspect and the second aspect and possible designs thereof, and the method may be implemented by software, hardware, or by executing corresponding software by hardware. In one possible design, the electronic device may include a processor and a memory. The processor is configured to support the electronic device to perform the respective functions of any of the above-described first and second aspects and possible designs thereof. The memory is used to couple with the processor, which holds the program instructions and data necessary for the electronic device.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium, which includes computer instructions that, when executed on an electronic device, cause the electronic device to perform a display method applied to the electronic device as described in any one of the first aspect and the second aspect and possible designs thereof.
In a seventh aspect, embodiments of the present application provide a computer program product, which when run on a computer, causes the computer to perform the display method applied to an electronic device as described in any one of the first aspect and the second aspect and possible designs thereof.
For the technical effects of the electronic device according to the third aspect, the GUI display method according to the fourth aspect, the electronic device according to the fifth aspect, the computer-readable storage medium according to the sixth aspect, and the computer program product according to the seventh aspect, reference may be made to the technical effects of the corresponding methods described above; details are not repeated here.
Drawings
Fig. 1 is a schematic view of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 2 is a schematic view of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a schematic software architecture of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 is a schematic view of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 7A to fig. 7D are schematic diagrams of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 8 is a schematic flow chart of a display method applied to an electronic device according to an embodiment of the present application;
fig. 9 is a schematic view of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 10A is a schematic flow chart of a display method applied to an electronic device according to an embodiment of the present application;
fig. 10B to fig. 10D are schematic diagrams of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 11A is a schematic flow chart of a display method applied to an electronic device according to an embodiment of the present application;
fig. 11B to 11D are schematic diagrams of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 12 to fig. 18 are schematic diagrams of a scenario example of a display method applied to an electronic device according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include plural forms such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the embodiments below, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may each be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless stated otherwise.
The embodiments of the present application provide a display method applied to an electronic device that can display an image of the user with makeup applied, guide the user through making up, provide makeup assistance functions, and help the user complete a makeup look. The electronic device may include a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable device, a virtual reality device, and the like; the embodiments of the present application place no limitation on this.
Take the mobile phone 100 as an example of the electronic device. Referring to fig. 1, the display interface of the mobile phone 100 may include first display content 11 and second display content 12. The first display content 11 displays the user face image captured by the phone camera; the second display content 12 displays a made-up image of the user's face, obtained by superimposing a simulated makeup effect on the user face image captured by the camera. The user can apply makeup with reference to the second display content 12, and can also correct the makeup on the face in real time by comparing the first display content 11 (the user face image) with the second display content 12 (the made-up image of the user's face). As shown in fig. 1, the embodiment of the present application does not limit where the first display content 11 and the second display content 12 are positioned on the display screen (also referred to as the screen in this application) of the mobile phone 100. In some embodiments, the display interface of the mobile phone 100 further includes third display content 13, which displays makeup assistance information. In some examples, the makeup assistance information may include a makeup pad 131 containing cosmetic parameters; the user may select different cosmetic parameters to give the second display content 12 (the made-up image of the user's face) the corresponding effect. The makeup assistance information may further include makeup step guidance information 132 indicating the makeup steps. In this embodiment, the display area of the first display content 11 on the display screen is referred to as the first display frame, and the display area of the second display content 12 on the display screen is referred to as the second display frame.
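Purely to make the arrangement above concrete, the sketch below groups the three display contents and the makeup assistance information into one structure; the region sizes, parameter names, and guidance steps are invented for illustration and are not taken from the patent.

```kotlin
// Illustrative grouping of the display contents described above.
data class Region(val left: Int, val top: Int, val width: Int, val height: Int)

data class MakeupAssistUi(
    val firstDisplayFrame: Region,    // first display content 11: live user face image
    val secondDisplayFrame: Region,   // second display content 12: made-up image of the face
    val assistArea: Region,           // third display content 13: makeup assistance information
    val makeupPad: List<String>,      // selectable cosmetic parameter sets (131)
    val guidanceSteps: List<String>   // ordered makeup steps (132)
)

fun portraitLayout(screenW: Int, screenH: Int) = MakeupAssistUi(
    firstDisplayFrame  = Region(0, 0, screenW / 2, screenH / 2),
    secondDisplayFrame = Region(screenW / 2, 0, screenW / 2, screenH / 2),
    assistArea         = Region(0, screenH / 2, screenW, screenH / 2),
    makeupPad          = listOf("natural", "evening"),
    guidanceSteps      = listOf("base", "eyebrows", "eyes", "blush", "lips")
)
```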
The electronic device may also be a mobile phone 200 with a folding screen. The handset 200 may be folded along one or more folding axes for use. As shown in fig. 2, the screen (folding screen) of the mobile phone 200 is divided into a first display area 21 and a second display area 22 along a folding axis, and an included angle α is formed between the first display area 21 and the second display area 22, where α is in a range of 0 ° to 180 °. It will be appreciated that the first display area 21 and the second display area 22 are different areas belonging to the same folding screen. In some embodiments, the first display content 11 and the second display content 12 are displayed in a first display area 21 and a second display area 22, respectively; in other embodiments, both the first display content 11 and the second display content 12 are displayed in the first display area 21. In some embodiments, the first display area 21 displays the first display content 11 and the second display content 12; the second display area 22 displays the third display content 13.
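For the folding-screen cases just listed, the choice of layout might be driven by the fold angle α reported by the hinge; the sketch below is only one hedged way to express that decision, and the 150-degree threshold is an arbitrary illustrative value.

```kotlin
// Pick one of the layouts described above from the fold angle (degrees) and whether
// the assistance panel (third display content) should be shown.
enum class FoldLayout {
    BOTH_IN_FIRST_AREA,     // contents 11 and 12 both in the first display area
    SPLIT_ACROSS_FOLD,      // content 11 in the first area, content 12 in the second
    CONTENT_PLUS_ASSIST     // contents 11 and 12 in the first area, content 13 in the second
}

fun chooseLayout(foldAngleDeg: Float, showAssistPanel: Boolean): FoldLayout = when {
    showAssistPanel      -> FoldLayout.CONTENT_PLUS_ASSIST
    foldAngleDeg >= 150f -> FoldLayout.SPLIT_ACROSS_FOLD   // nearly flat: use both areas for images
    else                 -> FoldLayout.BOTH_IN_FIRST_AREA  // sharply folded: keep both images on one area
}
```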
According to the display method applied to an electronic device provided by the embodiments of the present application, the display screen of the electronic device can display the first display content 11 and/or the second display content 12. The user can view the current makeup progress in real time through the first display content 11, see the complete made-up image through the second display content 12, and compare the first display content 11 (the user face image) with the second display content 12 (the made-up image). The user can also apply makeup following the makeup step guidance information in the third display content 13, and can select cosmetic parameters to change the image effect of the second display content 12. The method thus provides the user with all-round makeup assistance and helps the user achieve a refined makeup look.
Referring to fig. 3, a schematic structural diagram of an electronic device 300 is shown.
The electronic device 300 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 300. In other embodiments of the present application, electronic device 300 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 300, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180L, the charger, the flash, the camera 193, and so on through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180L through an I2C interface, so that the processor 110 communicates with the touch sensor 180L through the I2C bus interface to implement the touch function of the electronic device 300.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 300. The processor 110 and the display screen 194 communicate via a DSI interface to implement the display functionality of the electronic device 300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, an MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 300, or to transfer data between the electronic device 300 and a peripheral device. It can also be used to connect a headset and play audio through the headset, or to connect other electronic devices such as AR devices.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 300. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 300. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied on the electronic device 300. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 300 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 300 may communicate with a network and other devices via wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (beidou navigation satellite system, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 300 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 300 may include 1 or N display screens 194, where N is a positive integer greater than 1. In the embodiments of the present application, the display screen 194 may include a display and a touch panel (TP): the display outputs display content to the user, and the touch panel receives touch events entered by the user on the display screen 194. In the embodiments of the present application, the display screen 194 may be used to display the first display content 11, the second display content 12, the third display content 13, and so on.
The electronic device 300 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object forms an optical image through the lens, and the image is projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 300 may include 1 or N cameras 193, where N is a positive integer greater than 1. In one example, the camera 193 may include a wide-angle camera, a photographic camera, a 3D depth-sensing camera (e.g., a structured-light camera or a time-of-flight (ToF) camera), a telephoto camera, and so on. In some embodiments, the camera 193 may include a front camera and a rear camera. In the embodiments of the present application, a camera 193 (e.g., the front camera) may be used to capture images of the user's face.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 300 is selecting a frequency bin, the digital signal processor is used to fourier transform the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the electronic device 300 may be implemented by the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 300. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 300 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 300 may implement audio functions through the audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone interface 170D, and application processor, etc. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 300 may listen to music, or to hands-free conversations, through the speaker 170A.
A receiver 170B, also referred to as an "earpiece", is used to convert the audio electrical signal into a sound signal. When the electronic device 300 is answering a telephone call or a voice message, the voice can be listened to by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak with the mouth close to the microphone 170C, inputting a sound signal into the microphone 170C. The electronic device 300 may be provided with at least one microphone 170C. In other embodiments, the electronic device 300 may be provided with two microphones 170C, which may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 300 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be a USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensors 180A, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A. The electronic device 300 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 300 detects the intensity of the touch operation by means of the pressure sensor 180A. The electronic device 300 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
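As an illustration of the threshold logic described above, the following is a minimal sketch (not part of the claimed implementation), assuming a normalized pressure value and hypothetical instruction names:

```java
// Minimal sketch: choosing an instruction for a touch on the short message application
// icon based on the reported touch operation intensity. The threshold value and the
// instruction strings are assumptions for illustration only.
public class PressureTouchDispatcher {
    private static final float FIRST_PRESSURE_THRESHOLD = 0.6f; // assumed normalized pressure

    /** Returns the instruction to execute for a touch on the short message icon. */
    public String dispatchSmsIconTouch(float touchPressure) {
        if (touchPressure < FIRST_PRESSURE_THRESHOLD) {
            return "VIEW_SHORT_MESSAGE";   // light press: view the short message
        } else {
            return "CREATE_SHORT_MESSAGE"; // firm press: create a new short message
        }
    }
}
```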
The air pressure sensor 180B is used to measure air pressure. In some embodiments, the electronic device 300 calculates altitude from barometric pressure values measured by the barometric pressure sensor 180B, aiding in positioning and navigation.
The gyro sensor 180C may be used to determine the motion attitude of the electronic device 300. In some embodiments, the angular velocity of the electronic device 300 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180C. The gyro sensor 180C may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180C detects the shake angle of the electronic device 300, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 300 through reverse motion, thereby realizing anti-shake. The gyro sensor 180C may also be used in navigation and somatosensory gaming scenarios.
The acceleration sensor 180D may detect the magnitude of acceleration of the electronic device 300 in various directions (typically three axes). The magnitude and direction of the gravitational acceleration may be detected when the electronic device 300 is stationary.
The magnetic sensor 180E includes a hall sensor, magnetometer, or the like. The Hall sensor can detect the magnetic field direction; magnetometers are used to measure the magnitude and direction of magnetic fields. The magnetometer may measure the ambient magnetic field strength, e.g. the magnetic field strength may be measured with the magnetometer to obtain azimuth information of the carrier of the magnetometer.
The touch device 180F may be used to detect a touch position of a user. In some embodiments, a touch point of the user at the electronic device 300 may be detected by the touch device 180F, and then a grip gesture of the user may be determined using a preset grip algorithm according to the touch position.
A distance sensor 180G for measuring a distance. The electronic device 300 may measure the distance by infrared or laser. In some embodiments, the electronic device 300 may range using the distance sensor 180G to achieve fast focus.
The proximity light sensor 180H may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 300 emits infrared light outward through the light emitting diode. The electronic device 300 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that an object is in the vicinity of the electronic device 300. When insufficient reflected light is detected, the electronic device 300 may determine that there is no object in the vicinity of the electronic device 300. The electronic device 300 can detect that the user holds the electronic device 300 close to the ear to talk by using the proximity light sensor 180H, so as to automatically extinguish the screen to achieve the purpose of saving electricity. The proximity light sensor 180H may also be used in holster mode, pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180M is used to sense ambient light level. The electronic device 300 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180M may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180M may also cooperate with proximity light sensor 180H to detect whether electronic device 300 is in a pocket to prevent false touches.
The fingerprint sensor 180J is used to collect a fingerprint. The electronic device 300 can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access an application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The temperature sensor 180K is used to detect temperature. In some embodiments, the electronic device 300 performs a temperature processing strategy using the temperature detected by the temperature sensor 180K. For example, when the temperature reported by temperature sensor 180K exceeds a threshold, electronic device 300 performs a reduction in the performance of a processor located in the vicinity of temperature sensor 180K in order to reduce power consumption to implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 300 heats the battery 142 to avoid the low temperature causing the electronic device 300 to be abnormally shut down. In other embodiments, when the temperature is below a further threshold, the electronic device 300 performs boosting of the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperatures.
The touch sensor 180L is also referred to as a "touch panel". The touch sensor 180L may be disposed on the display 194, and the touch sensor 180L and the display 194 form a touch screen, which is also referred to as a "touch screen". The touch sensor 180L is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180L may also be disposed on the surface of the electronic device 300 at a different location than the display 194.
The bone conduction sensor 180Q may acquire a vibration signal. In some embodiments, the bone conduction sensor 180Q may acquire a vibration signal of a vibrating bone block of a human vocal part. The bone conduction sensor 180Q may also contact the pulse of the human body to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180Q may also be disposed in a headset to form a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vibrating bone block of the vocal part obtained by the bone conduction sensor 180Q, so as to implement a voice function. The application processor may parse out heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor 180Q, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The electronic device 300 may receive key inputs, generating signal inputs related to user settings and function controls of the electronic device 300.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, and may be used to indicate a charging state and a change in battery level, and may also be used to indicate a message, a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195 or removed from the SIM card interface 195 to achieve contact with and separation from the electronic device 300. The electronic device 300 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. A plurality of cards may be inserted into the same SIM card interface 195 simultaneously. The types of the plurality of cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with an external memory card. The electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 300 employs an eSIM, namely an embedded SIM card. The eSIM card may be embedded in the electronic device 300 and cannot be separated from the electronic device 300.
The software system of the electronic device 300 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application exemplifies a hierarchical architecture, and illustrates a software structure of the electronic device 300. Fig. 4 is a software architecture block diagram of an electronic device 300 provided in an embodiment of the present application.
The layered architecture divides the software into several layers, each with clear roles and a clear division of labor. The layers communicate with each other through software interfaces. In some embodiments, the electronic device 300 may include an application layer, an application framework layer, an Android runtime (Android runtime) and system libraries, and a kernel layer.
The application layer may include a series of applications.
As shown in fig. 4, the application layer may include applications such as a smart cosmetic case, a makeup area layout display, a makeup pan layout display, a makeup mode switching control, a solidification and magnification control, a vanity mirror light-supplementing control, a makeup extraction control, and the like. The smart cosmetic case is used for providing an auxiliary makeup function. The makeup pan layout display is used for controlling the layout of the makeup auxiliary information of the smart cosmetic case on the display screen. The makeup extraction control is used for analyzing the collected facial image of the user to obtain makeup parameters. The makeup area layout display is used for performing simulated makeup processing on the facial image of the user to form a makeup-carrying facial image of the user. The makeup mode switching control is used for controlling the electronic device to switch the display mode. The solidification and magnification control is used for performing image solidification display on the dynamic display content of the display interface of the cosmetic application, and is also used for controlling the electronic device to enlarge and display the display content of the display interface of the cosmetic application. The vanity mirror light-supplementing control is used for controlling the display of a light-supplementing effect in the peripheral area of the display screen of the electronic device.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include an event distribution management service, a power management service (power manager service, PMS), a timing management service (alarm manager service, AMS), a window management service (window manager service, WMS), a content provider, a view system, a phone manager, a resource manager, a notification manager, etc., to which the embodiments of the present application do not impose any limitation.
Wherein the event distribution management service is used for distributing events to applications. The power management service may be used to control the lighting up or extinguishing of the screen of the electronic device 300. The timing management service is used to manage timers, alarms, and the like. The window management service is used to manage window programs. The window management service may obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc. The content provider is used to store and retrieve data and make the data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc. The view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and the like. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view for displaying text and a view for displaying a picture. The telephony manager is used to provide the communication functions of the electronic device 300, for example, management of call status (including connected, hung up, etc.). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, video files, and the like. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the system top status bar, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, an indicator light blinks, and the like.
The Android runtime includes a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: sensor services, surface manager (surface manager), media library (Media Libraries), gesture recognition engine, face recognition engine, graphics processing engine, graphics tracking engine, natural language recognition engine, etc.
The sensor service is used to store and process sensor-related data, for example, providing the output data of each sensor and performing fusion processing on the output data of a plurality of sensors. The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio and video encoding formats, such as MPEG-4, H.264, MP3, AAC, AMR, JPG, PNG, etc. The gesture recognition engine is used for processing gesture recognition related flows. The face recognition engine is used for processing face recognition related data and flows. The graphic processing engine is a drawing engine for graphics and image processing, and the like. The graphic tracking engine is used for processing graphics and image tracking related flows. The natural language recognition engine is used for supporting flows such as voice recognition and semantic recognition.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
In the embodiment of the application, the audio driver is used for uploading the voice received by the microphone of the electronic device to the natural language recognition engine. The natural language recognition engine performs semantic recognition to determine the semantics of the acquired voice, generates a corresponding event according to the semantics of the voice, and reports the event to the event distribution management service. For example, if the natural language recognition engine determines that the semantics of the voice is "switch makeup mode", a mode switching event is generated; if the semantics of the voice is determined to be "fixed", a solidification event is generated; if the semantics of the voice is determined to be "amplification", a center magnification event is generated; if the semantics of the voice is determined to be "amplify the mouth", a tracking magnification event is generated.
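As an illustration only, the following sketch shows one possible way the mapping from recognized voice semantics to reported events could be organized; the event names, class, and mapping table are assumptions, not the actual engine implementation:

```java
// Hedged sketch: mapping recognized voice semantics to events that are reported to the
// event distribution management service. The semantic strings follow the examples in the
// text above; everything else is an illustrative assumption.
import java.util.Map;

public class VoiceEventGenerator {
    public enum CosmeticEvent { MODE_SWITCH, SOLIDIFY, CENTER_MAGNIFY, TRACK_MAGNIFY }

    // Mapping from recognized semantics to events, following the examples in the text.
    private static final Map<String, CosmeticEvent> SEMANTIC_TO_EVENT = Map.of(
            "switch makeup mode", CosmeticEvent.MODE_SWITCH,
            "fixed",              CosmeticEvent.SOLIDIFY,
            "amplification",      CosmeticEvent.CENTER_MAGNIFY,
            "amplify the mouth",  CosmeticEvent.TRACK_MAGNIFY);

    /** Generates the event for a recognized semantic, or null if none matches. */
    public CosmeticEvent generateEvent(String semantics) {
        return SEMANTIC_TO_EVENT.get(semantics.toLowerCase());
    }
}
```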
The camera driver is used for uploading the facial image of the user acquired by the camera of the electronic device to the face recognition engine. The face recognition engine recognizes the expression in the facial image of the user, generates a corresponding event according to the expression, and reports the event to the event distribution management service. For example, if the face recognition engine determines that the acquired expression is a "smiling face", a solidification event is generated.
The camera driver may also be used to upload the user hand image captured by the camera of the electronic device to the gesture recognition engine. The gesture recognition engine recognizes the gesture corresponding to the hand image of the user, generates a corresponding event according to the gesture, and reports the event to the event distribution management service. For example, if the gesture recognition engine determines that the acquired gesture is a "heart comparing gesture", a solidification event is generated; if the acquired gesture is determined to be a zoom gesture, a center magnification event is generated.
The sensor of the electronic device detects an operation of the user on the screen, and the sensor service is notified through the sensor driver. The sensor service determines the operation gesture of the user on the screen, generates a corresponding event according to the current process, and reports the event to the event distribution management service. For example, when the sensor service determines that a rightward sliding operation of the user on the screen is detected, a mode switching event is generated; when an action of the user tapping the screen is detected, a center magnification event is generated; when an action of the user clicking the magnification button is detected, a center magnification event is generated; when a double-finger expansion operation of the user is detected, a tracking magnification event is generated.
The event distribution management service distributes each event to the corresponding module of the application layer. For example, a mode switching event is distributed to the makeup mode switching control module; a solidification event, a center magnification event, a tracking magnification event, etc. are distributed to the solidification and magnification control module.
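For illustration, the following is a minimal sketch of how the event distribution management service might route events to application-layer modules; the module interfaces and constructor shape are assumptions:

```java
// Hedged sketch of event routing, following the examples above. The Consumer-based module
// interface is an illustrative assumption, not the actual framework API.
import java.util.EnumMap;
import java.util.Map;
import java.util.function.Consumer;

public class EventDistributionService {
    public enum CosmeticEvent { MODE_SWITCH, SOLIDIFY, CENTER_MAGNIFY, TRACK_MAGNIFY }

    private final Map<CosmeticEvent, Consumer<CosmeticEvent>> routes =
            new EnumMap<>(CosmeticEvent.class);

    public EventDistributionService(Consumer<CosmeticEvent> modeSwitchModule,
                                    Consumer<CosmeticEvent> solidifyAndMagnifyModule) {
        // Mode switching events go to the makeup mode switching control module.
        routes.put(CosmeticEvent.MODE_SWITCH, modeSwitchModule);
        // Solidification and magnification events go to the solidification and magnification control module.
        routes.put(CosmeticEvent.SOLIDIFY, solidifyAndMagnifyModule);
        routes.put(CosmeticEvent.CENTER_MAGNIFY, solidifyAndMagnifyModule);
        routes.put(CosmeticEvent.TRACK_MAGNIFY, solidifyAndMagnifyModule);
    }

    /** Distributes one event to the module registered for it. */
    public void distribute(CosmeticEvent event) {
        Consumer<CosmeticEvent> target = routes.get(event);
        if (target != null) {
            target.accept(event);
        }
    }
}
```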
The display method applied to the electronic device provided in the embodiment of the application will be described in detail below with reference to the accompanying drawings. It should be noted that, the embodiment of the application is described by taking an electronic device with a folding screen as an example, and it can be understood that the display method applied to the electronic device provided by the application is also applicable to an electronic device with a non-foldable screen.
An electronic device having a folding screen may be folded along one or more folding axes. Referring to fig. 5 (a), the electronic device is in an unfolded state. The user can fold the electronic device shown in fig. 5 (a) along the folding axis of the electronic device. As shown in fig. 5 (b), after the user folds the electronic device along the folding axis AB, the screen (folding screen) of the electronic device is divided into two display areas along the folding axis AB, namely, a first display area 21 and a second display area 22. In the embodiment of the present application, the first display area 21 and the second display area 22 formed after folding may be displayed as two independent areas. For example, the display interface of the first display area 21 may be referred to as a first display interface, and the display interface of the second display area 22 may be referred to as a second display interface; namely, the display area of the folding screen where the first display interface is located is a first display area 21; the display area of the folding screen where the second display interface is located is a second display area 22. The first display interface and the second display interface are respectively positioned at two sides of the folding shaft. The areas of the first display area 21 and the second display area 22 may be the same or different. The first display area 21 and the second display area 22 may form an included angle α, as shown in fig. 5 (b), and the electronic device is in a bent state. The user may continue to fold the electronic device along the folding axis AB, as shown in fig. 5 (c), with the electronic device in a folded state.
It will be appreciated that the first display area 21 and the second display area 22 may have other names, for example, the first display area 21 is referred to as an a-screen of the electronic device, and the second display area 22 is referred to as a B-screen of the electronic device; alternatively, the first display area 21 is referred to as an upper screen of the electronic device, and the second display area 22 is referred to as a lower screen of the electronic device; the embodiments of the present application are not limited in this regard. In the embodiment of the present application, the first display area 21 of the folding screen is referred to as a first screen, and the second display area 22 of the folding screen is referred to as a second screen; it will be appreciated that the first screen and the second screen are distinct regions belonging to the same folding screen.
It will be appreciated that the angle α between the first screen 21 and the second screen 22 may lie in the interval from 0° to 180°. In some embodiments, when the angle α between the first screen 21 and the second screen 22 is less than a first threshold and greater than a second threshold, the electronic device can stand on a plane by itself, without requiring a stand or being held in the user's hand. In this way, the user's hands are freed, making it convenient for the user to operate the electronic device. In one example, the first threshold may be 150° and the second threshold may be 60°; in another example, the first threshold may be 120° and the second threshold may be 60°; in another example, the first threshold may be 120° and the second threshold may be 30°. The embodiments of the present application are not limited in this regard.
In some examples, the user unfolds the electronic device such that the value of α increases to a first angle (the first angle is less than the first threshold and greater than the second threshold) and the electronic device remains at the first angle for a first duration (e.g., 1 s); or the user bends the electronic device such that the value of α decreases to the first angle and the electronic device remains at the first angle for the first duration. In either case, the display screen of the electronic device displays a shortcut menu. In one example, the shortcut menu includes a shortcut icon of a cosmetic application (such as the smart cosmetic case). The user may open the cosmetic application by clicking the shortcut icon of the cosmetic application. Illustratively, as shown in FIG. 6, a shortcut menu 602 is displayed on a desktop 601 of the electronic device, and the shortcut menu 602 includes a "smart cosmetic case" icon 603. The user may open the cosmetic application by clicking the "smart cosmetic case" icon 603. Optionally, after the shortcut menu 602 pops up on the desktop 601, the other icons originally displayed on the desktop 601 are displayed in a dimmed manner. Illustratively, the electronic device receives a click of the user on the "smart cosmetic case" icon 603, and in response to the click on the "smart cosmetic case" icon 603, the electronic device displays a display interface of the cosmetic application.
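As a non-authoritative illustration, the following sketch checks the fold-angle condition described above; the threshold values, the hold duration, and the simplification that the angle only needs to remain within the threshold range (rather than exactly at the first angle) are assumptions:

```java
// Minimal sketch of the fold-angle trigger: when the angle between the two screens stays
// between the second and first thresholds for the first duration, the shortcut menu is shown.
public class FoldAngleMenuTrigger {
    private static final float FIRST_THRESHOLD_DEG = 150f;  // example value from the text
    private static final float SECOND_THRESHOLD_DEG = 60f;  // example value from the text
    private static final long FIRST_DURATION_MS = 1000L;    // e.g., 1 s

    private long stableSinceMs = -1;

    /** Feeds one fold-angle sample; returns true when the shortcut menu should be shown. */
    public boolean onAngleSample(float angleDeg, long nowMs) {
        boolean inRange = angleDeg > SECOND_THRESHOLD_DEG && angleDeg < FIRST_THRESHOLD_DEG;
        if (!inRange) {
            stableSinceMs = -1;          // left the trigger range; restart timing
            return false;
        }
        if (stableSinceMs < 0) {
            stableSinceMs = nowMs;       // entered the range; start timing
        }
        return nowMs - stableSinceMs >= FIRST_DURATION_MS;
    }
}
```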
In some embodiments, the display interface of the cosmetic application may include at least one of the first display content 11 and the second display content 12.
In one example, as shown in fig. 7A (a), the display interface of the cosmetic application includes a first display content 11, the first display content 11 being displayed on a first screen 21 of the display screen. In one implementation, a front-facing camera of the electronic device captures an image of a user's face and displays the captured image within a first screen 21 of the display screen. In the examples herein, this display mode is referred to as mirror mode for cosmetic applications.
In another example, as shown in fig. 7A (b), the display interface of the cosmetic application includes the second display content 12, and the second display content 12 is displayed on the first screen 21 of the display screen. In one implementation, referring to fig. 4, a front-facing camera of an electronic device captures a user's facial image, which is transmitted to a graphics processing engine through a camera driver. The graphic processing engine adopts an augmented reality technology algorithm to simulate makeup on the face image of the user acquired by the camera, and forms a simulated makeup-carrying image of the face of the user. The smart cosmetic case module controls the display of the user's facial simulated make-up-carrying image within the first screen 21 of the display screen. In the embodiments of the present application, this display mode is referred to as a make-up mode of the make-up application.
In another example, as shown in (c) of fig. 7A, the display interface of the cosmetic application includes a first display content 11 and a second display content 12, the first display content 11 and the second display content 12 being displayed on a first screen 21 of the display screen. In one implementation, the intelligent cosmetic case module controls the synchronous split-screen display of the user facial image captured by the camera and the user facial simulated makeup-carrying image formed by the graphics processing engine in the first screen 21 of the display screen. In the examples herein, this display mode is referred to as a hybrid mode of cosmetic application.
In some examples, the mirror mode, the makeup mode, and the mixed mode may be switched between each other. When the electronic device receives a mode switching operation, it switches the display mode. For example, the mode switching operation includes a finger sliding operation on the display screen. Each time the electronic device detects a finger sliding (such as a single-direction rightward sliding) operation, the display mode is switched in the order of the mirror mode, the makeup mode, and the mixed mode. As another example, the mode switching operation includes a first voice. For example, when the electronic device detects that the voice is "switch makeup mode", the electronic device switches to the makeup mode for display. It will be appreciated that the mode switching operation may also include other forms, such as clicking a switching button, a mode switching gesture, etc., which is not limited in the embodiments of the present application.
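For illustration, a minimal sketch of cycling the display mode in the order mirror mode, makeup mode, mixed mode on each mode switching operation; the enum and method names are assumptions:

```java
// Hedged sketch: each received mode switching operation advances the display mode in the
// fixed order mirror -> makeup -> mixed -> mirror -> ...
public class DisplayModeSwitcher {
    public enum DisplayMode { MIRROR, MAKEUP, MIXED }

    private DisplayMode current = DisplayMode.MIRROR;

    /** Called when a mode switching operation is received; returns the new mode. */
    public DisplayMode onModeSwitchOperation() {
        DisplayMode[] order = DisplayMode.values();
        current = order[(current.ordinal() + 1) % order.length];
        return current;
    }
}
```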
In some embodiments, the first display content 11 includes an image of the user's face and an object (e.g., a cosmetic tool, a human hand, etc.) occluding the user's face; that is, the image captured by the camera includes an image of the user's face and an object occluding the user's face. In one implementation, the electronic device (e.g., the makeup area layout display module of fig. 4) uses an image recognition algorithm to process the image captured by the camera to obtain the facial image of the user (e.g., eyebrows, eyes, nose, mouth, facial contour, etc.) and the occluding object (e.g., a hand, a cosmetic tool, etc.). The electronic device performs simulated makeup processing, using an augmented reality algorithm, on the portion of the image acquired by the camera that is not blocked by the occluding object, so as to form the second display content 12. In the second display content 12, the portion of the user's face that is not blocked by the occluding object displays the simulated makeup-carrying image, and the portion blocked by the occluding object displays the occluding object (this portion does not include the simulated makeup effect). Therefore, the simulated makeup-carrying effect does not appear on the occluding object in the simulated makeup-carrying image displayed by the electronic device, bringing an immersive makeup display experience to the user.
Illustratively, as shown in fig. 7B, the user's facial image in the first display frame is partially blocked by a lipstick and a human hand. The electronic device performs simulated makeup processing on the portion of the user's facial image that is not blocked by the lipstick and the human hand, to form the user's face simulated makeup-carrying image. As shown in fig. 7B, in the mouth portion of the user's face simulated makeup-carrying image in the second display frame, the simulated makeup-carrying effect is displayed only in the portion that is not blocked.
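As an illustrative sketch of the occlusion-aware composition described above (the segmentation step that produces the occlusion mask is assumed and not shown):

```java
// Hedged sketch: the simulated makeup effect is applied only to pixels not covered by an
// occluding object (hand, cosmetic tool); occluded pixels keep the original camera pixel.
public class OcclusionAwareMakeupCompositor {

    /**
     * @param cameraFrame   ARGB pixels captured by the camera
     * @param makeupFrame   ARGB pixels of the simulated makeup-carrying image (same size)
     * @param occlusionMask true where the pixel belongs to an occluding object
     * @return the second display content: makeup pixel where visible, camera pixel where occluded
     */
    public int[] compose(int[] cameraFrame, int[] makeupFrame, boolean[] occlusionMask) {
        int[] out = new int[cameraFrame.length];
        for (int i = 0; i < cameraFrame.length; i++) {
            out[i] = occlusionMask[i] ? cameraFrame[i] : makeupFrame[i];
        }
        return out;
    }
}
```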
In some embodiments, a dashed box may be superimposed on the user's face simulated makeup-carrying image to indicate the makeup position and shape, guiding the user through making up. In one example, the dashed box may move as the user's face simulated makeup-carrying image moves, so that the dashed box is always located at the indicated position. In one implementation, when the electronic device displays the makeup step guidance information 132, a dashed box is displayed at the corresponding position on the user's face simulated makeup-carrying image according to the position indicated by the makeup step guidance information 132. For example, as shown in fig. 7C, the makeup position indicated by the makeup step guidance information 132 is the mouth, and a dashed box 701 is superimposed on the corresponding mouth area of the user's face simulated makeup-carrying image displayed in the second display frame. The dashed box 701 is used to indicate the position and shape of the makeup for the mouth. In another implementation, the display interface of the cosmetic application includes a simulation button for enabling or disabling the function of displaying the dashed box. For example, the user clicks the simulation button so that the simulation button is in a pressed state; the function of displaying the dashed box is enabled, and the electronic device displays a dashed box at the corresponding position on the user's face simulated makeup-carrying image according to the position indicated by the makeup step guidance information 132. The user clicks the simulation button so that the simulation button is in a released state; the function of displaying the dashed box is disabled, and no dashed box is displayed on the user's face simulated makeup-carrying image. Illustratively, as shown in fig. 7D, the makeup position indicated by the makeup step guidance information 132 is the mouth. The display interface of the electronic device includes an "image guidance" button 702; if the electronic device receives an operation of the user pressing the "image guidance" button 702, a dashed box 701 is superimposed on the mouth area of the user's face simulated makeup-carrying image displayed in the second display frame.
The embodiment of the application provides a display method applied to an electronic device. When the electronic device displays a display interface of a cosmetic application, dynamic display content of the display interface can be image-solidified and displayed, which is convenient for the user to view. As shown in fig. 8, the method may include:
S801, the electronic device displays a first interface of the cosmetic application, the first interface including at least one of the first display content 11 and the second display content 12.
In one example, the display mode of the cosmetic application is a mirror mode; the first interface comprises a first display content 11. Illustratively, as shown in fig. 7A (a), the first screen 21 of the electronic device displays the user's face image captured by the camera.
In another example, the display mode of the cosmetic application is a make-up mode; the first interface includes second display content 12. Illustratively, as shown in fig. 7A (b), the first screen 21 of the electronic device displays a user's face simulated make-up image.
In another example, the display mode of the cosmetic application is a mixed mode; the first interface comprises a first display content 11 and a second display content 12. Illustratively, as shown in fig. 7A (c), the first screen 21 of the electronic device displays the user face image and the user face-simulated makeup-carrying image in left and right split screens.
It will be appreciated that in the examples above, the second screen 22 of the electronic device may not display any content (such as a screen off), or the second screen 22 of the electronic device may display the third display content 13 (makeup assistance information), or the second screen 22 of the electronic device may display other content, which is not limited in this embodiment.
In another example, the first screen 21 and the second screen 22 of the electronic device together display a first interface. For example, a first screen 21 of the electronic device displays the first display content 11 and a second screen 22 displays the second display content 12; or the first screen 21 displays the second display content 12, the second screen 22 displays the first display content 11, and so on.
In other examples, the second screen 22 of the electronic device displays the first interface of the cosmetic application, the first screen 21 of the electronic device may not display any content (such as a screen off), or the first screen 21 of the electronic device may display the third display content 13 (makeup assistance information), or the first screen 21 of the electronic device may display other content, which is not limited in this embodiment. And will not be described in detail herein.
S802, the electronic equipment receives a first operation.
The first operation may include: a voice (e.g., the voice "stationary"), a gesture (e.g., an "OK gesture" or a "heart comparing gesture"), an expression (e.g., a smiling face), an operation of clicking a button, an operation of tapping the screen, and the like.
S803, in response to receiving the first operation, the electronic device displays a first object in the first display frame and displays a second object in the second display frame.
In one implementation, the electronic device receives the first operation of the user, delays for a first duration (e.g., 3 seconds), and obtains the first object and/or the second object. In this way, the gestures, expressions, and the like corresponding to the first operation can be prevented from appearing on the image-solidified interface and affecting the user's viewing of the makeup. In some examples, after receiving the first operation of the user and delaying for the first duration, the electronic device obtains the first object from the first display content 11 and obtains the second object from the second display content 12. The electronic device stops displaying the first display content in the first display frame and displays the first object in the first display frame; the electronic device stops displaying the second display content in the second display frame and displays the second object in the second display frame. The first display frame is the display area of the first display content 11 on the display screen, and the second display frame is the display area of the second display content 12 on the display screen.
The first object and the second object may be still images or short videos.
In one example, the first object and the second object are static images. The electronic device intercepts a current frame of a facial image of a user acquired by a camera as a first object at a first moment (after receiving a first operation of the user and delaying for a first duration). The electronic device intercepts a current frame of the user's face simulated makeup image as a second object at a first time (after receiving a first operation by the user, delaying for a first duration). Further, the user may save the first object and the second object as pictures.
In another example, the first object and the second object are short videos. At a first time (after receiving the first operation of the user and delaying for the first duration), the electronic device intercepts the current frame and the subsequent t (t > 0) frames of the facial image of the user acquired by the camera as the first object. At the first time (after receiving the first operation of the user and delaying for the first duration), the electronic device intercepts the current frame and the subsequent t (t > 0) frames of the user's face simulated makeup-carrying image as the second object. Further, the user may save the first object and the second object as videos.
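For illustration only, a minimal sketch of the solidification capture described above (delay for the first duration, then take the current frame, or the current frame plus the following t frames); the frame source and types are assumptions:

```java
// Hedged sketch: image solidification capture. The Supplier stands in for whatever source
// provides the current frame of the facial image or the simulated makeup-carrying image.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class SolidificationCapturer {
    private static final long FIRST_DURATION_MS = 3000L; // e.g., 3 seconds, as in the text

    /** Captures a still image: waits the first duration, then takes the current frame. */
    public int[] captureStill(Supplier<int[]> currentFrame) throws InterruptedException {
        Thread.sleep(FIRST_DURATION_MS);
        return currentFrame.get();
    }

    /** Captures a short video: the current frame and the following t frames (t > 0). */
    public List<int[]> captureShortVideo(Supplier<int[]> currentFrame, int t)
            throws InterruptedException {
        Thread.sleep(FIRST_DURATION_MS);
        List<int[]> frames = new ArrayList<>();
        for (int i = 0; i <= t; i++) {
            frames.add(currentFrame.get()); // in a real pipeline each call would return the next frame
        }
        return frames;
    }
}
```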
In some embodiments, such as in fig. 7A (c), the display mode of the cosmetic application is a hybrid mode, and the first interface includes a first display content 11 and a second display content 12.
In one implementation, the electronic device, in response to receiving the first operation, obtains a first object from the first display content 11, obtains a second object from the second display content 12, and displays the first object and the second object (the interface of the image curing display includes the first object and the second object); the face image of the user and the simulated makeup image of the face of the user are subjected to image solidification display; the user can conveniently compare the facial image of the user with the facial simulated makeup carrying image of the user, and the makeup steps can be adjusted. Further, the first object and the second object may be saved as one picture. Illustratively, as shown in FIG. 9, the electronic device displays an interface 901. The interface 901 includes a first object 902 and a second object 903. The interface 901 further comprises prompt information 904 for prompting the user to save the first object 902 and the second object 903. Optionally, interface 901 may also include a "ok" button 905 and a "cancel" button 906; the "ok" button 905 is used to determine that the first object 902 and the second object 903 are saved as pictures, and the "cancel" button 906 is used to determine that the first object 902 and the second object 903 are not saved.
In another implementation, the electronic device obtains a first object from the first display content 11 and displays the first object and the second display content 12 in response to receiving the first operation; namely, only the facial image of the user is subjected to image solidification display; the user can easily look at the current face makeup effect. Further, the first object may be saved as a picture.
In another implementation, the electronic device obtains a second object from the second display content 12 and displays the first display content 11 and the second object in response to receiving the first operation; namely, only carrying out image solidification display on the facial simulation makeup carrying image of the user; the user can conveniently and carefully check the simulated cosmetic effect. Further, the second object may be saved as a picture.
Optionally, the corresponding image solidification display manner may be implemented according to the user's selection. For example, when a "heart comparing gesture" of the user is received, the electronic device performs image solidification display on both the user's facial image and the user's face simulated makeup-carrying image; after receiving a double-click operation of the user in the first display frame, the electronic device performs image solidification display only on the user's facial image; after receiving a double-click operation of the user in the second display frame, the electronic device performs image solidification display only on the user's face simulated makeup-carrying image.
Further, the user can view and edit the saved pictures or videos. For example, the user can perform operations of enlarging, rotating, cropping, adjusting colors, and the like on the saved picture. For another example, the user may save one or more images in the saved video as pictures. In some examples, the user may also share the saved picture or video into a social application.
The embodiment of the application provides a display method applied to electronic equipment, when the electronic equipment displays a display interface of a cosmetic application, the display content of the display interface can be enlarged and displayed; so as to be convenient for the user to view.
In one example, the electronic device enlarges the display content in a center-enlarged manner. In this way, the user's face center can be kept at the display center. As shown in fig. 10A, the method includes:
S1001, the electronic device displays a first interface of the cosmetic application.
The electronic device displays a first interface of the cosmetic application, the first interface comprising at least one of a first display content 11, a first object, a second display content 12, and a second object. The first display content 11 or the first object is displayed in the first display frame, and the second display content 12 or the second object is displayed in the second display frame.
That is, the electronic device may enlarge and display a dynamic user face image or a user face simulated makeup-carrying image. The display content (first object or second object) of the image curing display may be displayed in an enlarged manner.
S1002, the electronic device receives a first input.
The first input may include: speech (e.g., speech "zoom in"), tapping a screen (e.g., single click, double click), clicking a button, etc.
S1003, in response to receiving the first input, the electronic device enlarges and displays the display content of the first interface in a center enlarged manner.
Optionally, the first interface comprises the first display content 11. In response to receiving the first input, the electronic device displays the enlarged first display content 11 within the first display frame centered around a center point of the first display content 11.
Optionally, the first interface includes the second display content 12. In response to receiving the first input, the electronic device displays the enlarged second display content 12 within the second display frame centered about the center point of the second display content 12.
Optionally, the first interface comprises a first object. In response to receiving the first input, the electronic device displays the magnified first object within the first display frame centered about a center point of the first object.
Optionally, the first interface comprises a second object. In response to receiving the first input, the electronic device displays the enlarged second object within the second display frame centered about a center point of the second object.
In the case where the display mode of the cosmetic application is a hybrid mode (i.e., the first interface includes a first display frame and a second display frame), in one implementation, in response to receiving a first input (the first input may be applied within the first display frame or the second display frame, or the first input may be applied to a display area outside of the first display frame and the second display frame), the electronic device simultaneously enlarges display content within the first display frame and the second display frame. For example, referring to fig. 10B, a first screen 21 of the electronic device display displays a first display content 11 (user face image) and a second display content 12 (user face simulated make-up image). The electronic apparatus receives an operation of clicking and dragging the button 100a rightward by the user, and in response to the operation of clicking and dragging the button 100a rightward, the electronic apparatus enlarges and displays the user face image and the user face-simulated makeup-carrying image in a center-enlarged manner. In another implementation, the electronic device only enlarges display content within the first display frame or the second display frame in response to receiving the first input (the first input may be within the first display frame or the second display frame, or the first input may also be within a display area outside of the first display frame and the second display frame). Illustratively, as shown in fig. 10C, a first screen 21 of the electronic device display screen displays a first display content 11 (user face image) and a second display content 12 (user face simulated make-up image). The electronic device receives an operation of clicking a screen by a user (an area of the clicking screen is located in the first display frame), and in response to the operation of clicking the screen, the electronic device enlarges and displays the face image of the user in a central enlarged manner, and the display manner of the face simulation makeup image of the user is unchanged. Illustratively, as shown in fig. 10D, a first screen 21 of the electronic device display displays a first display content 11 (user face image) and a second display content 12 (user face simulated make-up image). The electronic device receives an operation of clicking the screen by the user (the area of the clicking screen is located in the second display frame), and in response to the operation of clicking the screen, the electronic device enlarges and displays the face simulation makeup image of the user in a central enlarged manner, and the display manner of the face image of the user is unchanged.
In some examples, displaying the magnified image within the first display frame is synchronized with displaying the magnified image within the second display frame. For example, the dynamic effect of displaying the magnified image in the first display frame and the dynamic effect of displaying the magnified image in the second display frame are played synchronously.
In one implementation, each time the electronic device receives the first input, the display content of the current interface is displayed at n-times magnification (n is a default value, such as 2, 3, 5, or 10).
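As a hedged illustration of center magnification, the following sketch computes the source region to display at n-times magnification centered on the content's center point; the Rect type and method names are assumptions:

```java
// Minimal sketch: for n-times center magnification, the region of the source content to
// sample is a 1/n-sized rectangle around the content center, so the content center stays
// at the center of the display frame.
public class CenterZoom {
    public static final class Rect {
        public final int left, top, width, height;
        public Rect(int left, int top, int width, int height) {
            this.left = left; this.top = top; this.width = width; this.height = height;
        }
    }

    /** Returns the source region of a contentWidth x contentHeight image shown at n-times zoom. */
    public Rect centerCrop(int contentWidth, int contentHeight, float n) {
        int cropW = Math.round(contentWidth / n);
        int cropH = Math.round(contentHeight / n);
        int left = (contentWidth - cropW) / 2;   // keep the content center at the display center
        int top = (contentHeight - cropH) / 2;
        return new Rect(left, top, cropW, cropH);
    }
}
```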
In another example, the electronic device enlarges the display content in a track-and-zoom manner. In this way, the tracking area can be always located in the display center. As shown in fig. 11A, the method includes:
S1101, the electronic device displays a first interface of the cosmetic application.
Specific steps of S1101 may refer to S1001, and will not be described herein.
S1102, the electronic device receives a second input.
Wherein the second input may be the same as the first input or different from the first input.
The second input may include: a voice (e.g., the voice "zoom in on the eyes" or the voice "zoom in on the mouth"), a gesture operation of the user on the display screen (e.g., a double-finger expansion operation), and the like.
S1103, acquiring a tracking area according to the second input.
In one example, the electronic device receives speech and determines the tracking area based on the semantics of the speech. For example, when a voice "zoom in on eye" is received, the tracking area is determined to be the display area on the display screen where the eyes of the user are located.
In another example, the electronic device receives a gesture operation of the user on the display screen, and determines the tracking area according to the operation position of the user's gesture on the display screen. For example, when a double-finger expansion operation is received and it is determined that the two fingers have left the display screen, the midpoint of the line connecting the contact points of the two fingers on the display screen is the tracking area. If the operation position of the user's gesture on the display screen is different, the determined tracking area is different.
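For illustration, a minimal sketch of taking the midpoint of the two contact points as the tracking area center; the point representation is an assumption:

```java
// Hedged sketch: when the two fingers of a double-finger expansion operation leave the
// display screen, the midpoint of the line connecting their last contact points is used
// as the tracking area center.
public class TrackingAreaSelector {

    /** Returns {x, y}: the midpoint of the two last contact points. */
    public float[] trackingCenterFromExpansion(float x1, float y1, float x2, float y2) {
        return new float[] { (x1 + x2) / 2f, (y1 + y2) / 2f };
    }
}
```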
S1104, in response to receiving the second input, the electronic device enlarges and displays the display content of the first interface in a track-and-zoom manner.
In some embodiments, the tracking area is located within the first display frame. In response to receiving the second input, the electronic device displays the enlarged first display content 11 or the enlarged first object within the first display frame, centered on the tracking area. It will be appreciated that if the tracking area determined from the second input is different, different portions of the user's facial image are displayed within the first display frame. Illustratively, as shown in fig. 11B, the first screen 21 of the electronic device display screen displays the first display content 11 (the user's facial image) and the second display content 12 (the user's face simulated makeup-carrying image). The electronic device receives a double-finger expansion operation in the first display frame (when the two fingers leave the display screen, the midpoint of the line connecting the contact points on the display screen is the user's mouth), and in response to the double-finger expansion operation, the electronic device enlarges and displays the user's facial image in a tracking magnification manner. In the enlarged user's facial image, the user's mouth is located at the center point of the first display frame. Optionally, the electronic device displays the enlarged second display content 12 or the enlarged second object within the second display frame, centered on the position corresponding to the tracking area within the second display frame. The magnified image displayed in the second display frame corresponds to the magnified image displayed in the first display frame. In one implementation, the magnified image displayed within the second display frame is the simulated makeup-carrying image of the magnified image displayed within the first display frame. For example, if the tracking area is the display area where the user's eyes are located, the position corresponding to the tracking area within the second display frame is the display area where the eyes of the simulated makeup-carrying image are located. Illustratively, as shown in fig. 11C, the first screen 21 of the electronic device display screen displays the first display content 11 (the user's facial image) and the second display content 12 (the user's face simulated makeup-carrying image). The electronic device receives a double-finger expansion operation in the first display frame (when the two fingers leave the display screen, the midpoint of the line connecting the contact points on the display screen is the user's mouth), and in response to the double-finger expansion operation, the electronic device enlarges and displays the user's facial image and the user's face simulated makeup-carrying image in a tracking magnification manner. In the enlarged user's facial image, the user's mouth is located at the center point of the first display frame; in the enlarged user's face simulated makeup-carrying image, the user's mouth is located at the center point of the second display frame.
In some embodiments, the tracking area is located within the second display frame. In response to receiving the second input, the electronic device displays the enlarged second display content 12 or the second object within the second display frame centered around the tracking area. Optionally, the electronic device may display the enlarged first display content 11 or the first object in the first display frame with the corresponding position of the tracking area in the first display frame (for example, the tracking area is a display area where the eyes of the user are located in the simulated makeup image, and the corresponding position of the tracking area in the first display frame is a display area where the eyes of the user are located) as a center.
In some examples, displaying the magnified image within the first display frame is synchronized with displaying the magnified image within the second display frame. For example, the dynamic effect of displaying the magnified image in the first display frame and the dynamic effect of displaying the magnified image in the second display frame are played synchronously.
In one implementation, each time the electronic device receives the second input, the display content of the current interface is magnified by a factor of n (n is a default value, such as 2, 3, 5, 10, etc.).
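To make the geometry of this magnification concrete, the following is a minimal sketch (in Python, purely illustrative) of how the source region shown after an n-times magnification centered on the tracking area could be computed. The function names, the clamping at the frame border, and the example coordinates are assumptions for illustration and are not taken from the embodiment.

def zoom_source_rect(frame_w, frame_h, center_x, center_y, n):
    """Return (left, top, width, height) of the region of the original frame that,
    scaled up n times, fills the display frame while keeping the tracking area
    (center_x, center_y) as close to the frame center as possible."""
    src_w, src_h = frame_w / n, frame_h / n      # visible source region shrinks by n
    left = min(max(center_x - src_w / 2, 0), frame_w - src_w)   # clamp to the frame
    top = min(max(center_y - src_h / 2, 0), frame_h - src_h)
    return left, top, src_w, src_h


def pinch_tracking_point(contact_a, contact_b):
    """Tracking area taken as the midpoint of the two contact points when the
    fingers leave the screen, as described above."""
    return ((contact_a[0] + contact_b[0]) / 2, (contact_a[1] + contact_b[1]) / 2)


# Example: a 1080x1440 display frame, pinch released around the mouth, default n = 2.
cx, cy = pinch_tracking_point((500, 950), (580, 1010))
print(zoom_source_rect(1080, 1440, cx, cy, 2))   # (270.0, 620.0, 540.0, 720.0)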
In one implementation, in the dynamic user face image that is enlarged and displayed within the first display frame, the tracking area remains displayed at the center point of the first display frame; in the dynamic user face simulated makeup-carrying image that is enlarged and displayed within the second display frame, the tracking area remains displayed at the center point of the second display frame. Illustratively, the tracking area is the user's mouth. After the camera of the electronic device collects the facial images of the user, the image tracking engine locates the position of the mouth in each frame of image through an image recognition algorithm. The solidifying and amplifying control module controls the first display frame of the display screen to display the enlarged user face image centered on the mouth position. Optionally, an enlarged user face simulated makeup-carrying image centered on the mouth position is displayed in the second display frame of the display screen. While the user moves within the camera's field of view, the mouth area remains enlarged and displayed on the display screen.
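A rough sketch of this per-frame tracking behaviour is given below. Here locate_mouth stands in for the image recognition algorithm of the image tracking engine and is assumed to be provided elsewhere; the crop helper simply repeats the centering arithmetic from the earlier sketch.

def tracked_zoom_rect(frame_w, frame_h, point, n=2.0):
    """Source rectangle that keeps `point` (e.g. the detected mouth position) at the
    center of the display frame under n-times magnification, clamped to the frame."""
    src_w, src_h = frame_w / n, frame_h / n
    left = min(max(point[0] - src_w / 2, 0), frame_w - src_w)
    top = min(max(point[1] - src_h / 2, 0), frame_h - src_h)
    return left, top, src_w, src_h


def tracked_zoom_stream(frames, locate_mouth, frame_w, frame_h, n=2.0):
    """For every camera frame, re-locate the mouth and re-center the crop, so the
    mouth area stays magnified in the first display frame as the user moves."""
    for frame in frames:
        yield frame, tracked_zoom_rect(frame_w, frame_h, locate_mouth(frame), n)

The second display frame can reuse the same rectangle on the simulated makeup-carrying image, which keeps the two frames synchronized as described above.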
In some embodiments, a dashed box may be superimposed on the magnified image to indicate the makeup location and shape, guiding the user through applying makeup. Illustratively, as shown in fig. 11D, a dashed box 110a is superimposed on the mouth area of the enlarged user face simulated makeup-carrying image displayed within the second display frame. The dashed box 110a is used to indicate the position and shape of the makeup for the mouth. In some examples, the enlarged user face simulated makeup-carrying image is displayed within the second display frame of the electronic device display screen. The electronic device receives an operation of the user clicking the simulation button so that the simulation button is in a pressed state, and displays a dashed box at the corresponding position of the enlarged user face simulated makeup-carrying image according to the makeup site indicated by the makeup step guidance information 132. In other examples, the electronic device displays a user face simulated makeup-carrying image that includes a dashed box on it, and the dashed box is enlarged together with the user face simulated makeup-carrying image when the first input or the second input is received.
In one example, the dashed box may move with the movement of the enlarged user face image such that the dashed box is always located at the indicated location.
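One way of keeping the dashed box over the indicated makeup site while the image underneath is magnified and tracked is to map its coordinates from the original image into the magnified frame. The sketch below is an illustrative assumption, not the layout code of the embodiment.

def box_in_magnified_frame(box, crop, n):
    """Map a guide box given in original-image coordinates into the coordinates of
    the magnified display frame, so the dashed outline stays over the makeup site
    as the crop follows the tracked face.

    box  = (x, y, w, h) in original-image pixels
    crop = (left, top, src_w, src_h) source rectangle currently shown
    n    = magnification factor
    """
    x, y, w, h = box
    left, top, _, _ = crop
    return ((x - left) * n, (y - top) * n, w * n, h * n)


# Example: a 120x60 guide box around the mouth at (480, 950) in the original image,
# mapped through the crop (270, 620, 540, 720) from the earlier sketch with n = 2.
print(box_in_magnified_frame((480, 950, 120, 60), (270, 620, 540, 720), 2))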
The embodiment of the application provides a display method applied to an electronic device. A user can select different makeup parameters on the display interface of the makeup application so that the corresponding effect is presented on the second display content 12 (the simulated makeup-carrying image of the user's face). In this way, the user can apply makeup by referring to the effect presented on the simulated makeup-carrying image of the user's face.
In some embodiments, the display interface of the makeup application includes a third display content 13, and the third display content 13 is used for displaying makeup auxiliary information. In some examples, the makeup auxiliary information may include a makeup palette 131, and the makeup palette 131 includes a variety of makeup parameters. In one example, referring to fig. 12 (a), the makeup palette 131 includes a "recommend" option 1310, a "local makeup" option 1320, an "overall makeup" option 1330, a "favorites" option 1340, a "custom" option 1350, and the like. The user can click any one of the options to open the corresponding page.
The "overall makeup" page comprises one or more overall makeup examples, and each overall makeup example comprises the parameters of each local makeup corresponding to that overall makeup. The user may select one of the overall makeup examples so that the second display content 12 presents the corresponding overall makeup effect. Illustratively, the user clicks the "overall makeup" option 1330 shown in fig. 12 (a). In response to the click operation, the electronic device displays the "overall makeup" page 1331 shown in fig. 12 (b). The "overall makeup" page 1331 includes various overall makeup examples such as "retro", "fashion", "fresh", "bare makeup", "peach blossom", and "smoky". For example, in the "fresh" style overall makeup, the foundation color is "#1", the eyeliner color is "black 01", the eyebrow color is "light gray", the eye shadow color is "nectarine", the lip color is "RD06", the blush color is "light powder", and so on. The user clicks the "fresh" option to select the "fresh" style overall makeup; accordingly, the simulated makeup of the second display content 12 (the simulated makeup-carrying image of the user's face) appears in the "fresh" style, with the foundation color "#1", the eyeliner color "black 01", the eyebrow color "light gray", the eye shadow color "nectarine", the lip color "RD06", the blush color "light powder", and so on.
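An overall makeup example is, in effect, a named bundle of local makeup parameters, and applying it means applying each local parameter in turn. The small data-structure sketch below illustrates this; the field names and the simulate_site callback are assumptions for illustration only.

from dataclasses import dataclass, field


@dataclass
class OverallMakeup:
    """One overall makeup example: a named bundle of local makeup parameters."""
    name: str
    params: dict = field(default_factory=dict)   # local makeup site -> parameter


# The "fresh" style described above, with the parameter values used in the text.
fresh = OverallMakeup(
    name="fresh",
    params={
        "foundation": "#1",
        "eyeliner": "black 01",
        "eyebrow": "light gray",
        "eye shadow": "nectarine",
        "lip": "RD06",
        "blush": "light powder",
    },
)


def apply_overall(selected, simulate_site):
    """Applying an overall makeup is just applying each local parameter in turn;
    simulate_site(site, value) stands in for rendering one local makeup effect."""
    for site, value in selected.params.items():
        simulate_site(site, value)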
Optionally, the user clicks the icon of one local makeup in the overall makeup example, and the electronic device displays the corresponding makeup step guidance information 132. Illustratively, the user clicks the "lip gloss" icon in the interface shown in fig. 12 (b), and the electronic device displays the interface shown in fig. 12 (c), which includes the makeup step guidance information "Step 5/12: choose the RD06 lipstick and fill in the upper lip."
The "topical make-up" page includes a plurality of topical make-up options, such as "foundation," "eyeliner," "eyebrow," "eyeshadow," "lip gloss," "blush," and the like. Wherein each of the topical make-up options includes one or more make-up parameters. One of the make-up parameters may be selected by the user to cause the second display content 12 to present a corresponding localized make-up effect. Illustratively, as shown in FIG. 13A, the user may click on the "topical make-up" option 1320 of the make-up pad. In response to the click operation, the electronic device displays a "partial make-up" page 1321; among other things, the "topical make-up" page 1321 includes various options such as "foundation," "eyeliner," "eyebrow," "eyeshadow," "lip gloss," "blush," and the like. For example, the user clicks the "lip gloss" option to select the make-up parameter "PK04"; accordingly, the mouth color of the simulated make-up of the second display content 12 (the user's face with make-up image) appears as the color indicated by PK 04.
Optionally, the electronic device supports the user in adding customized makeup parameters on the "local makeup" page. Illustratively, as shown in FIG. 13B, the "local makeup" page 1321 includes a "lip gloss" option, and the "lip gloss" page 1322 includes a plurality of lip gloss colors. The user may long-press a blank area of the "lip gloss" page 1322 (e.g., press the display screen for more than 3 seconds); the electronic device receives the operation of long-pressing the blank area of the "lip gloss" page 1322 and displays an add icon 1323. In response to receiving the user's click on the add icon 1323, the electronic device displays a text box 1324 and an "open" button 1325 on the "lip gloss" page 1322. The user may enter a file location or path in the text box 1324 and click the "open" button 1325. The electronic device receives the operation of clicking the "open" button 1325, and in response to that operation, displays a picture 1326 on the "lip gloss" page 1322. Illustratively, the picture 1326 includes the color and name of a "hua red" lip gloss. The electronic device may receive a double-click operation of the user on the picture 1326; in response to the double-click operation on the picture 1326, a "hua red" option 1327 is newly added to the "lip gloss" page 1322. The "hua red" option 1327 displays the color of the "hua red" lip gloss in the picture 1326, and the name of the "hua red" option 1327 is the name of the lip gloss in the picture 1326 (or is specified by the user).
The "recommended" page includes one or more global makeup examples. The one or more integral make-up examples are generated by the electronic device based on facial features of the user. For example, a matching eyebrow shape in the whole makeup example is generated according to the face shape of the user, and a corresponding foundation color in the whole makeup example is generated according to the skin color of the user. Illustratively, as shown in FIG. 14, the user may click on a "recommend" option 1310 of the vanity tray. In response to the click operation, the electronic device displays a "recommended" page 1311; the "recommended" page 1311 includes various overall makeup examples of "shopping", "dating", "occupation", "sport", and the like. Each of the plurality of global makeup examples is generated from facial features of a user. The user may select one of the overall make-up examples to cause the second display content 12 to present a corresponding overall make-up effect.
The "collect" page includes one or more overall make-up examples. The one or more integral make-up examples are saved on a "favorites" page upon selection by the user. Illustratively, as shown in fig. 15A (a), in the "make-up-as-a-whole" page 1331, the electronic device receives an operation of a user long press (e.g., press time greater than 3 seconds) of the "refresh" option; alternatively, as shown in fig. 15A (b), the second display content 12 presents the overall makeup effect corresponding to the "freshness" option of the "overall makeup" page 1331. The electronic equipment receives the operation of long-pressing the display screen by the user (the pressing area is positioned in the second display frame); the electronic device adds the whole makeup with the fresh style to the collection page. Illustratively, as shown in FIG. 15B, the user may click on the "favorites" option 1340 of the vanity tray. In response to the click operation, the electronic device displays a "favorite" page 1341; the "collect" page 1341 includes a variety of overall make-up examples of "collect 1", "collect 2", "collect 3", "collect 4", and "collect 5". For example, the "Collection 5" option is a "fresh" style of overall make-up saved by the method of FIG. 15A. The user may select one of the overall makeup examples in the "collect" page 1341 to cause the second display content 12 to present a corresponding overall makeup effect.
In some examples, the electronic device may generate customized overall makeup examples, or makeup parameters of a local makeup, from pictures uploaded by the user. The user may select a customized overall makeup example so that the second display content 12 presents the corresponding overall makeup effect; the user may also select a customized makeup parameter so that the second display content 12 presents the corresponding local makeup effect.
In one implementation, the electronic device receives a picture uploaded by the user, the picture including a face image with makeup. The electronic device (for example, the makeup extraction control module in fig. 4) extracts feature data of the face image in the picture (for example, shape features such as face shape, eyebrow shape, and lip shape, and color features such as foundation color, eye shadow color, and eyebrow color). The electronic device creates makeup parameters (such as eye shadow, eyeliner, lip gloss, eyebrow shape, and blush) from the feature data (shape features, color features, etc.). Optionally, the makeup parameters of the respective parts of the face may be packaged and saved as one overall makeup; optionally, the makeup parameters of the respective parts of the face may each be saved as the makeup parameters of a local makeup.
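As a minimal illustration of how color features can be turned into makeup parameters, the sketch below simply averages the pixel color of each facial region. Landmark detection and shape features are assumed to be handled elsewhere, and the region pixels and site names are illustrative only.

def mean_color(pixels):
    """Average (R, G, B) of a region; pixels is an iterable of (r, g, b) tuples."""
    r = g = b = count = 0
    for pr, pg, pb in pixels:
        r, g, b, count = r + pr, g + pg, b + pb, count + 1
    return (r // count, g // count, b // count) if count else (0, 0, 0)


def makeup_params_from_regions(regions):
    """regions maps a facial site (e.g. 'lip', 'blush') to its pixels; the result can
    be saved as one overall makeup or as separate local makeup parameters."""
    return {site: mean_color(pixels) for site, pixels in regions.items()}


# Example with tiny stand-in regions.
print(makeup_params_from_regions({
    "lip": [(200, 60, 80), (210, 64, 84)],
    "blush": [(230, 150, 150)],
}))   # {'lip': (205, 62, 82), 'blush': (230, 150, 150)}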
Illustratively, as shown in FIG. 16A, the electronic device receives the user's click on the "custom" option 1350. In response to the click operation, the electronic device displays a "custom" page 1351. The user may open a picture on the "custom" page 1351. Illustratively, the "custom" page 1351 includes a text box 1352. The user may enter the save location or path of the picture in the text box 1352 and click the "open" button 1353. The electronic device receives the operation of clicking the "open" button 1353 and, in response, displays a "custom" page 1354. The "custom" page 1354 includes one or more pictures. For example, the electronic device receives an operation of the user clicking a picture, extracts the feature data of the face image in the picture, and creates makeup parameters according to the feature data. Illustratively, the electronic device displays a "custom" page 1355. The "custom" page 1355 includes the makeup parameters that the electronic device generated from the picture. Optionally, the "custom" page 1355 includes a "save all" button 1356, a "save local" button 1357, and a "cancel" button 1358. The "save all" button 1356 is used to package and save the makeup parameters generated from the picture as one overall makeup, the "save local" button 1357 is used to save the makeup parameters generated from the picture as makeup parameters of local makeups, and the "cancel" button 1358 is used to discard the makeup parameters generated from the picture. In one example, as shown in fig. 16B, the electronic device receives the user's click on the "save all" button 1356, and in response, saves the makeup parameters included on the "custom" page 1355 (foundation color "#3", eyeliner color "black 01", eyebrow color "light brown", eye shadow color "golden brown", lip color "RD04", blush color "pink orange") as one overall makeup example. As shown in FIG. 16B, a "custom 1" overall makeup example is newly added to the "overall makeup" page 1331. In another example, as shown in fig. 16C, the electronic device receives the user's click on the "save local" button 1357, and in response, saves the makeup parameters selected by the user on the "custom" page 1355 (eye shadow color "golden brown" and blush color "pink orange") as the makeup parameters of the respective local makeups. As shown in fig. 16C, a "pink orange" option is newly added to the blush colors of the "local makeup" page 1321, and a "golden brown" option is newly added to the eye shadow colors of the "local makeup" page 1321.
It will be appreciated that the picture may include a full face image with make-up, or may include only a partial face image, or may include only a portion of a face (e.g., eyebrows, mouth, nose, etc.).
In some examples, the electronic device may generate customized overall makeup examples, or makeup parameters of a local makeup, from the display content within the first display frame. Illustratively, an image of the user's made-up face is displayed within the first display frame. When the electronic device receives an operation for extracting the user's facial makeup parameters (such as a knuckle tap within the first display frame of the display screen), it generates a customized overall makeup example, or the makeup parameters of a local makeup, according to the image of the user's made-up face.
In one implementation, a camera of the electronic device captures an image of the user's made-up face and displays it within the first display frame. The electronic device receives an operation for extracting the user's facial makeup parameters, and the electronic device (for example, the makeup extraction control module in fig. 4) extracts the feature data of the user's face image (for example, shape features such as face shape, eyebrow shape, and lip shape, and color features such as foundation color, eye shadow color, and eyebrow color). The electronic device creates makeup parameters (such as eye shadow, eyeliner, lip gloss, eyebrow shape, and blush) from the feature data (shape features, color features, etc.). Optionally, the makeup parameters of the respective parts of the face may be packaged and saved as one overall makeup; optionally, the makeup parameters of the respective parts of the face may each be saved as the makeup parameters of a local makeup.
Illustratively, as shown in fig. 16D, the first screen 21 of the electronic device display screen displays an image of the user's made-up face and a simulated makeup-carrying image of the user's face. The electronic device receives an operation of the user tapping with a knuckle within the first display frame of the display screen; in response to the knuckle tap within the first display frame, the electronic device extracts the feature data of the face image in the image of the user's made-up face and creates makeup parameters according to the feature data. The electronic device packages and saves the generated makeup parameters as one overall makeup, and a "custom 2" overall makeup example is newly added to the "overall makeup" page 1331.
In some embodiments, the electronic device may receive a user modification to a makeup parameter; for example, the user may modify the makeup effect of any portion of the user face simulated makeup-carrying image within the second display frame, such as the eyebrows, lips, or eye shadow colors. The electronic device stores the modified makeup parameters. In one implementation, the modified makeup parameters may be stored separately. In another implementation, the overall makeup after the makeup parameters are modified may be saved as an overall makeup example.
It may be appreciated that the electronic device may receive the user's modification of the makeup parameters on the dynamically displayed second display content 12, on the solidified (statically) displayed second display content 12, or on the enlarged second display content 12.
Illustratively, as shown in fig. 17, the electronic device receives a user modification operation on the blush within the second display frame on the display screen (e.g., a single-finger long-press drag operation, or a click-and-drag operation), and in response to the modification operation on the blush, the blush shape of the user face simulated makeup-carrying image displayed within the second display frame changes (the blush shape follows the finger drag position). Optionally, the display interface within the second display frame includes a "save" icon 1701 and a "cancel" icon 1702; the "save" icon 1701 is used to save the makeup parameters of the modified user face simulated makeup-carrying image, and the "cancel" icon 1702 is used to discard them. The electronic device receives the user's click on the "save" icon 1701 and, in response, saves the makeup parameters of the modified user face simulated makeup-carrying image. For example, the makeup parameters of the modified user face simulated makeup-carrying image are saved as one overall makeup example, and the user can view this overall makeup in the makeup palette.
In one implementation, a sensor of the electronic device detects the user's single-finger long-press drag operation on the display screen, and the sensor service is notified through the sensor driver. The sensor service determines that a single-finger long-press drag operation on the screen has been received, generates a modify-makeup event, and reports the modify-makeup event (which includes the finger drag position) to the event distribution management service. The event distribution management service distributes the modify-makeup event (which includes the finger drag position) to the makeup area layout display module. The makeup area layout display module generates the modified user face simulated makeup-carrying image according to the finger drag position. Optionally, the sensor of the electronic device detects the user's click on the "save" icon 1701 on the display screen, and the sensor service is notified through the sensor driver. The sensor service determines that a click on the "save" icon 1701 has been received, generates a save-makeup-parameters event, and reports the save-makeup-parameters event to the event distribution management service. The event distribution management service distributes the save-makeup-parameters event to the makeup extraction control module; the makeup extraction control module analyzes the user face simulated makeup-carrying image in the second display frame to obtain the makeup parameters, and saves the obtained makeup parameters.
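The event path above (sensor service, then event distribution management service, then the handling module) can be pictured as a small registration-and-dispatch mechanism. The sketch below uses the module and event names from the text, but the dispatch code itself is an illustrative assumption rather than the implementation of the embodiment.

class EventDistributionService:
    def __init__(self):
        self._handlers = {}                      # event type -> registered handlers

    def register(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def distribute(self, event_type, payload):
        for handler in self._handlers.get(event_type, []):
            handler(payload)


class MakeupAreaLayoutDisplayModule:
    def on_modify_makeup(self, payload):
        # Regenerate the simulated makeup-carrying image from the finger drag position.
        print("redraw blush at", payload["drag_position"])


dispatcher = EventDistributionService()
dispatcher.register("modify_makeup", MakeupAreaLayoutDisplayModule().on_modify_makeup)

# Sensor-service side: a single-finger long-press drag was detected at (312, 508).
dispatcher.distribute("modify_makeup", {"drag_position": (312, 508)})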
Alternatively, in some embodiments, the electronic device detects that the surrounding environment is dark (e.g., a sensor of the electronic device detects that the ambient light level is less than a preset value), and, illustratively, as shown in fig. 18, the electronic device displays a light-compensating effect (e.g., a highlighted color ring or colored light) in a peripheral region of its display screen. This provides a fill-light effect while the user uses the device as a mirror, further improving the user experience.
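A minimal sketch of this fill-light decision is given below, assuming a preset threshold and treating the ring drawing as opaque callbacks; the threshold value and the callback names are illustrative assumptions.

FILL_LIGHT_THRESHOLD_LUX = 50   # preset value, illustrative only


def update_fill_light(ambient_lux, show_ring, hide_ring):
    """show_ring / hide_ring stand in for the display calls that draw or clear the
    highlighted color ring in the peripheral region of the screen."""
    if ambient_lux < FILL_LIGHT_THRESHOLD_LUX:
        show_ring()
    else:
        hide_ring()


update_fill_light(12, lambda: print("show light ring"), lambda: print("hide light ring"))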
It will be appreciated that, in order to achieve the above-mentioned functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The embodiment of the application may divide the functional modules of the electronic device according to the method example, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
In the case of an integrated unit, fig. 19 shows a possible structural schematic diagram of the electronic device involved in the above-described embodiment. The electronic device 2000 includes: a processing unit 2001, a display unit 2002, and a storage unit 2003.
The processing unit 2001 is used for controlling and managing the operation of the electronic device 2000. For example, it may be used to perform the processing steps of image-solidifying the dynamic display content of the makeup application display interface in the embodiments of the present application; the processing steps of magnifying the display content of the makeup application display interface; extracting makeup parameters, controlling the layout of the makeup area, controlling the layout of the makeup palette, controlling the switching of makeup modes, controlling the fill-light processing of the makeup mirror, and the like; and/or other processes for the techniques described herein.
The display unit 2002 is used for displaying an interface of the electronic device 2000. For example, it may be used to display the user face image in the first display frame and the user face simulated makeup-carrying image in the second display frame, to display the makeup auxiliary information, and the like.
A storage unit 2003 for storing program codes and data of the electronic device 2000.
Of course, the unit modules in the above-described electronic apparatus 2000 include, but are not limited to, the above-described processing unit 2001, display unit 2002, and storage unit 2003. For example, a detection unit or the like may also be included in the electronic device 2000. The detection unit may be used to detect actions, gestures, etc. of the user.
The processing unit 2001 may be a processor or controller, such as a central processing unit (central processing unit, CPU), a digital signal processor (digital signal processor, DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (field programmable gate array, FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may include an application processor and a baseband processor. Which may implement or perform the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that performs the function of a computation, e.g., a combination comprising one or more microprocessors, a combination of a DSP and a microprocessor, and the like. The display unit 2002 may be a display screen. The storage unit 2003 may be a memory. The detection unit may be a sensor, a touch device, a camera, etc.
For example, the processing unit 2001 is a processor (such as the processor 110 shown in fig. 3), the display unit 2002 is a display screen (such as the display screen 194 shown in fig. 3, the display screen 194 may be a touch screen in which a display panel and a touch panel may be integrated), the storage unit 2003 may be a memory (such as the internal memory 121 shown in fig. 3), and the detection unit may include a sensor (such as the sensor module 180 shown in fig. 3), and a camera (such as the camera 193 shown in fig. 3). The electronic device 2000 provided in the embodiment of the present application may be the electronic device 300 shown in fig. 3. Wherein the processor, the display screen, the memory, etc. may be coupled together, for example, by a bus connection.
Embodiments of the present application also provide a computer readable storage medium having stored therein computer program code which, when executed by the above-mentioned processor, causes the electronic device to perform the relevant method steps of the above-mentioned embodiments.
Embodiments of the present application also provide a computer program product which, when run on a computer, causes the computer to perform the relevant method steps of the embodiments described above.
The electronic device 2000, the computer readable storage medium, or the computer program product provided in the embodiments of the present application are configured to perform the corresponding methods provided above, and therefore, the advantages achieved by the method may refer to the advantages in the corresponding methods provided above, which are not described herein.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of functional modules is illustrated, and in practical application, the above-described functional allocation may be implemented by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to implement all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units described above may be implemented either in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or a part contributing to the prior art or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (22)

1. A display method applied to an electronic device, comprising:
displaying a user face image acquired by a camera in a first area of a display screen of the electronic equipment;
a second area of the display screen of the electronic equipment displays a user face simulation makeup-carrying image generated according to the user face image;
receiving a first input; the first input is for selecting at least a portion of the user's face;
simultaneously amplifying the at least a portion corresponding to the user facial image and the at least a portion corresponding to the user facial simulated make-up image in response to the first input; the at least part corresponding to the facial image of the user is always enlarged and displayed in the center of the first area, and the at least part corresponding to the facial simulated makeup image of the user is always enlarged and displayed in the center of the second area.
2. The method of claim 1, wherein the receiving a first input comprises:
receiving a first input acting on the first region; or
receiving a first input acting on the second region.
3. The method according to claim 1 or 2, wherein the user's face simulated make-up carrying image includes thereon first indication information indicating a make-up location and shape; the method further comprises the steps of:
the first indication information is enlarged as the user's face simulated makeup-carrying image is enlarged.
4. The method according to any one of claims 1-3, wherein,
if an image of an object occluding the user's face is superimposed on the user facial image, the image of the object occluding the user's face is not displayed on the user facial simulated makeup-carrying image.
5. The method according to any one of claims 1 to 4, wherein,
the first area of the display screen of the electronic device displays a facial image of a user acquired by a camera, including:
a first area of the display screen of the electronic equipment displays a first static image acquired according to the facial image of the user acquired by the camera;
The second area of the electronic device display screen displaying a user face simulated make-up image generated from the user face image includes:
and a second area of the electronic device display screen displays a second static image formed by simulating makeup on the facial image of the user.
6. The method according to any one of claims 1-5, further comprising:
displaying the first information in a third area of the display screen of the electronic equipment;
receiving input operation of a user on the first information; and changing the facial simulated makeup carrying image of the user according to the first information.
7. The method of claim 6, wherein the first information is generated from features of a user's facial image.
8. The method of claim 6, wherein the method further comprises:
receiving the modification operation of a user on the facial simulation makeup carrying image of the user;
and generating the first information according to the modified user face simulated makeup carrying image.
9. An electronic device, the electronic device comprising:
a memory;
a processor invoking one or more computer programs stored in the memory, the one or more computer programs comprising instructions that, when executed by the processor, cause the electronic device to perform:
A first area of the display screen of the electronic equipment displays a user face image acquired by a camera;
a second area of the electronic equipment display screen displays a user face simulation makeup carrying image generated according to the user face image;
receiving a first input; the first input is for selecting at least a portion of the user's face;
simultaneously amplifying the at least a portion corresponding to the user facial image and the at least a portion corresponding to the user facial simulated make-up image in response to the first input; the at least part corresponding to the facial image of the user is always enlarged and displayed in the center of the first area, and the at least part corresponding to the facial simulated makeup image of the user is always enlarged and displayed in the center of the second area.
10. The electronic device of claim 9, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
receiving a first input acting on the first region; or
receiving a first input acting on the second region.
11. The electronic device of claim 9 or 10, wherein the user's face simulated make-up carrying image includes first indication information thereon, the first indication information being used to indicate a make-up location and shape; the first indication information is enlarged as the user's face simulated makeup-carrying image is enlarged.
12. The electronic device of any one of claims 9-11, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
a first area of the display screen of the electronic equipment displays a first static image acquired according to the facial image of the user acquired by the camera;
and a second area of the electronic device display screen displays a second static image formed by simulating makeup on the facial image of the user.
13. The electronic device of any one of claims 9-12, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
displaying first information in a third area of the display screen of the electronic equipment;
receiving input operation of a user on the first information; and changing the facial simulated makeup carrying image of the user according to the first information.
14. The electronic device of claim 13, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
the first information is generated from features of the user's facial image.
15. The electronic device of claim 13, wherein the instructions, when executed by the processor, further cause the electronic device to perform:
Receiving the modification operation of a user on the facial simulation makeup carrying image of the user;
and generating the first information according to the modified user face simulated makeup carrying image.
16. A method of displaying a graphical user interface, comprising:
the electronic device displays a first graphical user interface GUI;
the first area of the first GUI comprises a user face image acquired by a camera;
the second area of the first GUI includes a user facial simulated makeup-carrying image generated from the user facial image;
responsive to receiving the first input, the electronic device concurrently displays a first region and a second region of the second GUI;
wherein the first input is for selecting at least a portion of the user's face; the center of the first area of the second GUI always comprises the at least partial enlarged image corresponding to the facial image of the user, and the center of the second area of the second GUI always comprises the at least partial enlarged image corresponding to the facial simulated makeup image of the user.
17. The method of claim 16, wherein,
the second area of the second GUI and the second area of the first GUI are the same display area on a display screen.
18. The method according to claim 16 or 17, wherein,
the second area of the first GUI further includes first indication information for indicating a cosmetic location and a shape;
the second area of the second GUI further includes the first indication information, which is enlarged and displayed as the user facial simulated makeup-carrying image is enlarged.
19. The method according to any one of claims 16-18, wherein,
if the first area of the first GUI comprises an image of an object occluding the user's face, the simulated makeup-carrying image is not displayed at the position, in the second area of the first GUI, corresponding to the image of the object occluding the user's face.
20. The method according to any one of claims 16-19, wherein,
the first area of the first GUI comprises a first static image acquired according to a facial image of a user acquired by a camera;
the second region of the first GUI includes a second static image that simulates a make-up formation for the user facial image.
21. The method of any of claims 16-20, wherein the third region of the first GUI includes first information, the method further comprising:
receiving input operation of a user on the first information;
In response to an input operation of the first information by the user, the electronic device displays a third GUI; the second area of the third GUI includes a simulated makeup-carrying image of the user's face formed from the first information; the second area of the third GUI and the second area of the first GUI are the same display area on a display screen.
22. A computer readable storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any of claims 1-8.
CN202010880184.0A 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment Active CN114115617B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010880184.0A CN114115617B (en) 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment
PCT/CN2021/108283 WO2022042163A1 (en) 2020-08-27 2021-07-23 Display method applied to electronic device, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010880184.0A CN114115617B (en) 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment

Publications (2)

Publication Number Publication Date
CN114115617A CN114115617A (en) 2022-03-01
CN114115617B true CN114115617B (en) 2024-04-12

Family

ID=80352610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010880184.0A Active CN114115617B (en) 2020-08-27 2020-08-27 Display method applied to electronic equipment and electronic equipment

Country Status (2)

Country Link
CN (1) CN114115617B (en)
WO (1) WO2022042163A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315165B (en) * 2023-11-28 2024-03-12 成都白泽智汇科技有限公司 Intelligent auxiliary cosmetic display method based on display interface

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000076398A1 (en) * 1999-06-14 2000-12-21 The Procter & Gamble Company Skin imaging and analysis systems and methods
CN109658167A (en) * 2017-10-10 2019-04-19 阿里巴巴集团控股有限公司 Try adornment mirror device and its control method, device
CN110045872A (en) * 2019-04-25 2019-07-23 廖其锋 Daily smart mirror and application method
CN111047384A (en) * 2018-10-15 2020-04-21 北京京东尚科信息技术有限公司 Information processing method of intelligent device and intelligent device
CN111553220A (en) * 2020-04-21 2020-08-18 海信集团有限公司 Intelligent device and data processing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6971824B2 (en) * 2017-12-13 2021-11-24 キヤノン株式会社 Display control device and its control method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000076398A1 (en) * 1999-06-14 2000-12-21 The Procter & Gamble Company Skin imaging and analysis systems and methods
CN109658167A (en) * 2017-10-10 2019-04-19 阿里巴巴集团控股有限公司 Try adornment mirror device and its control method, device
CN111047384A (en) * 2018-10-15 2020-04-21 北京京东尚科信息技术有限公司 Information processing method of intelligent device and intelligent device
CN110045872A (en) * 2019-04-25 2019-07-23 廖其锋 Daily smart mirror and application method
CN111553220A (en) * 2020-04-21 2020-08-18 海信集团有限公司 Intelligent device and data processing method

Also Published As

Publication number Publication date
CN114115617A (en) 2022-03-01
WO2022042163A1 (en) 2022-03-03

Similar Documents

Publication Publication Date Title
CN111124561B (en) Display method applied to electronic equipment with folding screen and electronic equipment
CN112130742B (en) Full screen display method and device of mobile terminal
EP3800876B1 (en) Method for terminal to switch cameras, and terminal
CN111666119B (en) UI component display method and electronic device
CN113645351B (en) Application interface interaction method, electronic device and computer-readable storage medium
WO2020134869A1 (en) Electronic device operating method and electronic device
WO2020000448A1 (en) Flexible screen display method and terminal
CN111669459B (en) Keyboard display method, electronic device and computer readable storage medium
CN109274828B (en) Method for generating screenshot, control method and electronic equipment
CN114397981A (en) Application display method and electronic equipment
WO2020029306A1 (en) Image capture method and electronic device
CN110633043A (en) Split screen processing method and terminal equipment
CN113805487B (en) Control instruction generation method and device, terminal equipment and readable storage medium
CN115129196A (en) Application icon display method and terminal
CN114077365A (en) Split screen display method and electronic equipment
CN112449101A (en) Shooting method and electronic equipment
CN113973189B (en) Display content switching method, device, terminal and storage medium
CN115914461B (en) Position relation identification method and electronic equipment
CN114115617B (en) Display method applied to electronic equipment and electronic equipment
CN114756184A (en) Collaborative display method, terminal device and computer-readable storage medium
WO2022078116A1 (en) Brush effect picture generation method, image editing method and device, and storage medium
CN117009005A (en) Display method, automobile and electronic equipment
CN115291779A (en) Window control method and device
CN115291780A (en) Auxiliary input method, electronic equipment and system
CN114579900A (en) Cross-device page switching method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant