CN115914442A - Image display method and device - Google Patents


Info

Publication number
CN115914442A
Authority
CN
China
Prior art keywords
image
makeup
target
display
input
Prior art date
Legal status
Pending
Application number
CN202211337664.8A
Other languages
Chinese (zh)
Inventor
周群海
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211337664.8A priority Critical patent/CN115914442A/en
Publication of CN115914442A publication Critical patent/CN115914442A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image display method and an image display device, and belongs to the field of communication technology. The image display method includes: receiving a first input; and in response to the first input, displaying a first image in a first display area of an electronic device and displaying a second image in a second display area of the electronic device; wherein the first image comprises a user image acquired by a target camera in real time, and the second image comprises at least one of the following: a target reference makeup image, a target makeup tutorial image, and a user pre-makeup image. The first display area and the second display area are display areas respectively corresponding to two opposite sub-screens formed when the folding screen of the electronic device is in a folded state; or the first display area and the second display area are respectively two split-screen display areas of the electronic device in a split-screen state.

Description

Image display method and device
Technical Field
The present application belongs to the field of communication technology, and in particular, relates to an image display method and apparatus.
Background
A user may need to apply makeup for various reasons during daily life, work, or study. However, during makeup the effect is judged subjectively by the user, and it is sometimes difficult for the user to determine whether the makeup effect meets his or her requirements, so repeated adjustments are needed.
Disclosure of Invention
Embodiments of the present application aim to provide an image display method and an image display apparatus, which can solve the problems in the prior art that the makeup effect is difficult to measure accurately and difficult to adjust.
In a first aspect, an embodiment of the present application provides an image display method, where the method includes:
receiving a first input;
in response to the first input, displaying a first image in a first display area of an electronic device and displaying a second image in a second display area of the electronic device;
wherein the first image comprises a user image acquired by a target camera in real time, and the second image comprises at least one of the following images: a target reference makeup image, a target makeup tutorial image, a user pre-makeup image;
the first display area and the second display area are display areas respectively corresponding to two opposite sub-screens formed when the folding screen of the electronic device is in a folded state; or the first display area and the second display area are respectively two split-screen display areas of the electronic device in a split-screen state.
In a second aspect, an embodiment of the present application provides an image display apparatus, including:
the first receiving module is used for receiving a first input;
the first display module is used for responding to the first input, displaying a first image in a first display area of the electronic equipment and displaying a second image in a second display area of the electronic equipment;
wherein the first image comprises a user image acquired by a target camera in real time, and the second image comprises at least one of the following: a target reference makeup image, a target makeup tutorial image, a user pre-makeup image;
the first display area and the second display area are display areas respectively corresponding to two opposite sub-screens formed when the folding screen of the electronic device is in a folded state; or the first display area and the second display area are respectively two split-screen display areas of the electronic device in a split-screen state.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps in the image display method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps in the image display method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image display method according to the first aspect.
In a sixth aspect, the present application provides a computer program product, which is stored in a storage medium and executed by at least one processor to implement the image display method according to the first aspect.
In the embodiment of the application, the first display area is equivalent to a cosmetic mirror: it presents the user image in real time, so that the user can accurately locate the makeup position and know the makeup effect at any moment, which makes applying makeup more convenient. The makeup comparison image (such as a reference makeup image or the user's pre-makeup image) or the makeup tutorial image displayed in the second display area can serve as a makeup reference, assisting the user in assessing the makeup effect and determining the desired effect, thereby reducing the number of repeated adjustments during makeup. In short, the technical solutions provided by the embodiments of the present application can assist the user in applying makeup and reduce its difficulty, making makeup more convenient.
Drawings
Fig. 1 is a schematic flowchart of an image display method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a first display area and a second display area provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a material bar provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a marking preset makeup area provided in an embodiment of the present application;
fig. 5 is an enlarged schematic view of an image according to a first mode provided in the embodiment of the present application;
fig. 6 is a schematic diagram illustrating an enlarged image according to the second mode provided in the embodiment of the present application;
fig. 7 is an enlarged schematic view of an image according to a third mode provided in the embodiment of the present application;
FIG. 8 is an exemplary flow diagram provided by an embodiment of the present application;
fig. 9 is a schematic block diagram of an image display apparatus provided in an embodiment of the present application;
FIG. 10 is a schematic block diagram of an electronic device provided by an embodiment of the application;
fig. 11 is a schematic hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second" and the like are generally of one type, and the number of such objects is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The image display method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings by using specific embodiments and application scenarios thereof.
Fig. 1 is a flowchart illustrating an image display method provided in an embodiment of the present application, where the image display method is applied to an electronic device, that is, steps in the image display method are executed by the electronic device.
Wherein, the image display method may include:
step 101: the electronic device receives a first input.
The first input described herein is used to trigger a makeup mode of the electronic device. In this makeup mode, the user can view not only his or her own makeup in real time, but also a reference makeup, the pre-makeup appearance, and the like; see the description of step 102 for details.
The first input may include, but is not limited to, at least one of: touch input, voice input, gesture input over air, etc.
Step 102: the electronic device displays a first image in the first display area and a second image in the second display area in response to the first input.
And after receiving the first input, the electronic equipment responds to the first input and enters a makeup mode.
The first image described herein includes a user image captured in real time by a target camera of the electronic device. The first display area is equivalent to a cosmetic mirror function, and the user images displayed in real time through the first display area can be used for conveniently and accurately positioning the cosmetic position of the user and acquiring the cosmetic effect of the user in real time.
To view the first image displayed in the first display area, the user needs to face the first display area, that is, face the screen corresponding to the first display area; and to capture the user image, the target camera needs to be oriented toward the user, so the image capturing direction of the target camera is the same as or close to the orientation of the first display area. For some electronic devices, such as mobile phones and tablet computers, the front-facing camera is generally arranged on the same side as the screen, so the target camera can be the front-facing camera: when the user faces the screen, the front-facing camera can collect the user image and display it on the screen on the same side, and the user can view his or her own image on the screen facing him or her. Of course, the application scenario is not limited to this; any scenario in which the image capturing direction of the target camera is the same as or close to the orientation of the first display area may be used.
The second image described herein may include at least one of: a target reference makeup image, a target makeup tutorial image, a user pre-makeup image.
When the second image comprises the target reference makeup image, the user can apply makeup with the target reference makeup image as a reference and determine the desired makeup effect, which reduces the number of repeated adjustments during makeup and lowers the difficulty of makeup. The target reference makeup image may be a makeup image of the user, such as a previously stored makeup photo of the user, or a makeup image of another user, such as a makeup image obtained from the network.
When the second image comprises the target makeup tutorial image, the user can learn the makeup method through the target makeup tutorial image to improve the makeup effect, and can measure the makeup effect of the user according to the makeup guided by the makeup tutorial, so that the repeated adjustment times during makeup are reduced, and the makeup difficulty is reduced.
When the second image comprises the pre-makeup image, the user can compare the current makeup with the pre-makeup appearance and determine whether the current makeup has reached the desired effect, which assists the user in assessing the makeup effect, reduces the number of repeated adjustments during makeup, and lowers the difficulty of makeup.
In the embodiment of the application, the makeup mode of the electronic device not only provides a cosmetic mirror function, so that the user can accurately locate the makeup position and learn his or her own makeup effect in real time while depending less on a particular makeup location, but also provides functions such as makeup comparison and makeup tutorials, which help the user assess the makeup effect and reduce the number of repeated adjustments during makeup. This helps shorten the makeup time, improves makeup efficiency, and makes makeup more convenient. When the electronic device is a portable device that the user often carries (such as a mobile phone), the user can apply makeup anytime and anywhere, which further improves convenience.
As an alternative embodiment, the first display area and the second display area may be display areas corresponding to two opposite sub-screens formed when the folding screen of the electronic device is in a folded state. For example, as shown in fig. 2, when the folding screen of the electronic device is in the folded state, two opposite sub-screens are formed, namely an inner screen A and an inner screen B; the first display area 201 may be the display area corresponding to the inner screen A, and the second display area 202 may be the display area corresponding to the inner screen B.
Based on the characteristic that the folding screen can form at least two sub-screens in the folded state, the embodiment of the application can select to enter a makeup mode under the condition that the folding screen is in the folded state, so that the first image and the second image are respectively displayed in the display areas corresponding to the two opposite sub-screens, and the user can conveniently view the images.
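As an illustrative, non-limiting sketch of how the two display areas can be derived on a foldable device, the following Kotlin snippet splits the window on either side of the hinge using the Jetpack WindowManager FoldingFeature. It is only one possible platform-level realisation and is not asserted to be the implementation of this embodiment; in particular, which half is treated as the "first" display area is an assumption made here.

```kotlin
import android.graphics.Rect
import androidx.window.layout.FoldingFeature

// Splits the window into the two display areas on either side of the hinge.
fun splitByFold(windowBounds: Rect, fold: FoldingFeature): Pair<Rect, Rect>? {
    if (!fold.isSeparating) return null            // hinge does not divide the UI into two areas
    val hinge = fold.bounds
    return if (fold.orientation == FoldingFeature.Orientation.HORIZONTAL) {
        // Fold line is horizontal: upper sub-screen / lower sub-screen
        Rect(windowBounds.left, windowBounds.top, windowBounds.right, hinge.top) to
            Rect(windowBounds.left, hinge.bottom, windowBounds.right, windowBounds.bottom)
    } else {
        // Fold line is vertical: left sub-screen / right sub-screen
        Rect(windowBounds.left, windowBounds.top, hinge.left, windowBounds.bottom) to
            Rect(hinge.right, windowBounds.top, windowBounds.right, windowBounds.bottom)
    }
}
```

The FoldingFeature instance would be obtained from WindowInfoTracker at runtime; the first image (the camera preview) can then be laid out into one rectangle and the second image into the other.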
Optionally, in the foregoing case, step 101: receiving a first input may include: a first sub-input and a second sub-input are received.
Accordingly, step 102: displaying a first image in a first display area of the electronic device and a second image in a second display area of the electronic device in response to the first input may include:
in response to the first sub input and the second sub input, displaying a first image in the first display area and displaying a second image in the second display area in a case where the first sub input and the second sub input satisfy a preset condition.
Wherein the preset condition may include: and starting the target camera through the first sub-input, and adjusting the folding angle of the folding screen to be within a preset angle range through the second sub-input.
In the embodiment of the present application, the first input may include at least two input operations, such as a first sub-input and a second sub-input.
The first sub-input is used to start the target camera. For example, in a case where the target camera is turned on by default after the camera application is started, the first sub-input may be an input operation for starting the camera application. In a case where the default camera to be turned on after the camera application is started is not the target camera, the first sub-input may be a switching input operation of the camera to turn on the target camera. Of course, this is merely an example, and the first sub-input is not limited thereto.
The second sub-input is used for adjusting the folding angle of the folding screen to be within a preset angle range. For example, the folding angle is adjusted by manual control, voice control, gesture control, or the like. Optionally, the preset angle range may be set by default of the system, may also be set by the user, and may also be determined based on a specific folding angle set by the user, for example, the user may set a specific folding angle in the parameter setting in advance, such as 120 °, the electronic device may determine an angle range based on the specific folding angle set by the user, such as 120 ° ± 3 °, and the angle range is the preset angle range.
In the embodiment of the application, the electronic device enters the makeup mode when it detects that the target camera is started and the folding angle is within the preset angle range. The condition for starting the makeup mode is therefore not a single condition but a multidimensional one, which reduces accidental triggering.
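Purely for illustration, the multidimensional start condition (target camera started and folding angle within the preset range) could be monitored as in the following Kotlin sketch, which reads the hinge angle from the Android TYPE_HINGE_ANGLE sensor. The 120° ± 3° values mirror the example above and, like the camera flag, are assumptions rather than requirements of the embodiment.

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import kotlin.math.abs

// Enters makeup mode only when BOTH sub-inputs are satisfied:
// (1) the target (front) camera has been started, and
// (2) the hinge angle reported by the sensor lies inside the preset range.
class MakeupModeGate(
    private val sensorManager: SensorManager,
    private val presetAngle: Float = 120f,          // user-configured angle (assumed example value)
    private val tolerance: Float = 3f,              // preset range = presetAngle ± tolerance
    private val onEnterMakeupMode: () -> Unit
) : SensorEventListener {

    var targetCameraStarted = false                 // set by the camera start path (first sub-input)
    private var entered = false

    fun start() {
        val hinge = sensorManager.getDefaultSensor(Sensor.TYPE_HINGE_ANGLE) ?: return
        sensorManager.registerListener(this, hinge, SensorManager.SENSOR_DELAY_NORMAL)
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val angle = event.values[0]                 // hinge angle in degrees (second sub-input)
        if (targetCameraStarted && abs(angle - presetAngle) <= tolerance && !entered) {
            entered = true
            onEnterMakeupMode()
        }
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```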
As an optional embodiment, the first display area and the second display area may also be two split-screen display areas of the electronic device in a split-screen state, respectively. In this embodiment, the electronic device may support a split-screen display function, and the screen of the electronic device may be a foldable screen or a non-foldable screen.
As an alternative embodiment, in the case where the second image includes at least one of the target reference makeup image and the target makeup tutorial image, before the second image is displayed in the second display area, the image display method may further include:
step A1: the electronic device obtains a pre-makeup image of the user.
The pre-makeup image of the user may be a photograph the user takes of himself or herself after starting the target camera.
Here, "before makeup" means before the current makeup session.
Step A2: the electronic equipment acquires the user appearance characteristic information according to the image before makeup.
The electronic device can acquire user appearance feature information such as facial form, hair color, hair style and the like based on the pre-makeup image of the user.
Step A3: the electronic device determines a first target image in the target application, wherein the first target image is matched with the user appearance feature information.
The target application may be an application installed on the electronic device, and may be specifically set according to actual requirements, such as a small video application, a beautiful image application, and the like.
The first target image described herein may include at least one of: a target reference makeup image, a target makeup tutorial image. The target makeup tutorial image described herein is used to demonstrate a makeup tutorial that matches the user's appearance characteristics and may include, but is not limited to: picture-like images, video-like images, etc.
In the embodiment of the application, makeup example tutorials, makeup effect images and the like that better suit the user can be intelligently searched for in target applications, such as makeup-tutorial applications and beauty-image applications, thereby assisting the user in applying makeup and improving the user's makeup efficiency and makeup effect.
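As a simplified, hypothetical sketch of this matching step, the appearance features and candidate images below are modelled as plain records and ranked by attribute overlap. The feature names, weights, and helper types are illustrative assumptions, not the actual matching algorithm of the embodiment.

```kotlin
// Hypothetical appearance-feature record; in practice the values would come from a face-analysis model.
data class AppearanceFeatures(val faceShape: String, val hairColor: String, val hairStyle: String)

// A candidate reference-makeup or makeup-tutorial image offered by the target application.
data class CandidateImage(val uri: String, val features: AppearanceFeatures, val isTutorial: Boolean)

// Ranks candidates by how many appearance attributes they share with the user's pre-makeup features.
fun rankCandidates(user: AppearanceFeatures, candidates: List<CandidateImage>): List<CandidateImage> =
    candidates.sortedByDescending { c ->
        var score = 0
        if (c.features.faceShape == user.faceShape) score += 2   // face shape weighted highest (assumption)
        if (c.features.hairColor == user.hairColor) score += 1
        if (c.features.hairStyle == user.hairStyle) score += 1
        score
    }
```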
As an alternative embodiment, step A3: determining a first target image matching with user appearance feature information in a target application may include:
step A31: and the electronic equipment acquires a third image matched with the user appearance characteristic information in the target application.
The third image described herein includes at least one of: a reference makeup image and a makeup tutorial image.
In the case where the third image includes reference makeup images, the number of reference makeup images is at least one; similarly, in the case where the third image includes makeup tutorial images, the number of makeup tutorial images is at least one.
Step A32: the electronic device displays the third image in the second display area.
After obtaining the third image, the electronic device may display the third image in the second display area for selection by the user.
Alternatively, when the second display area cannot display all the third images at the same time, some of them may be displayed while the rest are hidden. The user can reveal the hidden images by sliding, for example by sliding left and right or up and down.
Optionally, the third image may be displayed in a target area of the second display area. As shown in fig. 3, the second display area 202 includes a material bar 2021, and the position of the material bar 2021 is the target area. The third image 2022 can be displayed in the material bar 2021. The material bar 2021 may also display the user's pre-makeup image for selection by the user.
Step A33: the electronic device receives a second input for the third image, and determines the first target image in response to the second input.
When a second input to the third image is received, the electronic device determines, in response to the second input, the image selected by the second input as the first target image.
In other words, the user can select the desired reference makeup image or makeup tutorial image from the displayed third images through the second input, and the electronic device, upon receiving the second input, determines the selected image as the first target image.
The second input described herein may include, but is not limited to: single-click touch operation, double-click touch operation, and the like.
In the embodiment of the application, when the reference makeup drawing or the makeup tutorial more suitable for the user is obtained based on the user appearance characteristic information, if a plurality of reference makeup drawings or makeup tutorials are obtained, the reference makeup drawings or the makeup tutorials are displayed so that the user can select the reference makeup drawing or the makeup tutorials as required.
Alternatively, in the case where the third image is displayed in the target area of the second display area, if the user completes the selection of the first target image, the third image of the target area may be continuously displayed or may be hidden. After being hidden, the hidden image can be displayed again through preset operation.
As an alternative embodiment, after step 102, the image display method may further include: and marking the preset makeup area.
The preset makeup area is a conventional makeup area, and may include, but is not limited to, at least one of the following: eyes, nose, mouth, eyebrows, cheeks, forehead, etc.
The marking described herein may be done by means of dashed boxes. For example, as shown in fig. 4, preset makeup areas such as the eyes, eyebrows, nose, and mouth are marked with dashed boxes.
In the embodiment of the application, marking the conventional makeup areas reminds the user to apply makeup to them, so that omissions are avoided.
Alternatively, the marking of the preset makeup areas may be performed automatically: for example, after the first image is displayed, the electronic device automatically recognizes the first image, determines the preset makeup areas, and marks them. Of course, the marking may also be triggered manually by the user: for example, after receiving a long-press touch operation on the first image, the electronic device recognizes the first image, determines the preset makeup areas, and marks them.
Optionally, the user can hide the marks through a manual operation.
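A minimal Kotlin sketch of the marking step is given below. It assumes the preset makeup areas have already been located (for example by a face-landmark detector) and only draws dashed boxes over them, with a flag for the manual hide operation mentioned above.

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.DashPathEffect
import android.graphics.Paint
import android.graphics.RectF
import android.view.View

// Overlay that marks preset makeup areas (eyes, eyebrows, nose, mouth, ...) with dashed boxes.
class MakeupAreaOverlay(context: Context) : View(context) {

    var areas: List<RectF> = emptyList()
        set(value) { field = value; invalidate() }   // redraw when the detected regions change

    var marksVisible = true                          // the user may hide the marks manually
        set(value) { field = value; invalidate() }

    private val dashed = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
        color = Color.WHITE
        pathEffect = DashPathEffect(floatArrayOf(16f, 12f), 0f)  // dash and gap lengths in pixels
    }

    override fun onDraw(canvas: Canvas) {
        if (!marksVisible) return
        for (rect in areas) canvas.drawRect(rect, dashed)
    }
}
```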
As an alternative example, in the embodiment of the present application, a partial image area of the first image may be displayed in an enlarged manner, so that the user can apply makeup more finely and compare his or her own makeup with the corresponding part of the reference makeup.
The embodiment of the present application provides three implementation manners for enlarging a local image area in a first image, as described below.
Mode one
After step 102, the image display method may further include: when the electronic device detects that the first image includes a second target image, determining a target makeup area of the user according to the second target image, and displaying the target makeup area in an enlarged manner.
The second target image described herein includes at least one of: a makeup motion image and a makeup product image.
In the embodiment of the application, the electronic equipment can determine the current makeup area of the user based on the collected image information such as makeup actions and cosmetics, and then displays the current makeup area in an enlarged mode, so that makeup of the user is assisted, the user can make up more finely, and the auxiliary makeup function is more intelligent.
For example, after the electronic device detects a makeup action of the user on the eye area (e.g., the eyes or eyebrows), or detects a makeup product used on the eye area, such as a mascara brush or an eyebrow pencil, the electronic device may display the eye area in an enlarged manner, as shown in fig. 5.
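The following Kotlin sketch illustrates mode one under the assumption that a detector has already classified the makeup product or action. The tool categories, the region lookup, and the two-times scale factor are illustrative assumptions only.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Hypothetical categories a makeup-product/action detector might report.
enum class MakeupTool { MASCARA_BRUSH, EYEBROW_PENCIL, LIPSTICK, BLUSH_BRUSH }

// Maps the detected tool to the face region currently being made up (regions supplied by face detection).
fun regionForTool(tool: MakeupTool, faceRegions: Map<String, Rect>): Rect? = when (tool) {
    MakeupTool.MASCARA_BRUSH, MakeupTool.EYEBROW_PENCIL -> faceRegions["eyes"]
    MakeupTool.LIPSTICK -> faceRegions["mouth"]
    MakeupTool.BLUSH_BRUSH -> faceRegions["cheeks"]
}

// Crops the target makeup area out of the current frame and scales it up for display.
fun enlarge(frame: Bitmap, region: Rect, scale: Int = 2): Bitmap {
    val crop = Bitmap.createBitmap(frame, region.left, region.top, region.width(), region.height())
    return Bitmap.createScaledBitmap(crop, crop.width * scale, crop.height * scale, true)
}
```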
Mode two
After step 102, the image display method may further include: the electronic device displays a preset grid mark on the first image, receives a third input to a target grid cell, and, in response to the third input, displays the image at the position of the target grid cell in an enlarged manner.
The preset grid mark comprises at least two grid units, and the target grid unit is one of the at least two grid units.
In the embodiment of the application, the preset grid mark can be displayed on the first image, the user can select the grid unit in the preset grid mark according to the requirement, and then the electronic device can enlarge and display the image at the position of the selected grid unit. As shown in fig. 6, assuming that the user selects the grid cell 203 in the preset grid identification, the image at the position of the grid cell 203 is displayed in an enlarged manner.
Optionally, the display of the preset grid identifier may be performed automatically by the electronic device, or may be triggered manually by the user.
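A possible realisation of the grid selection in mode two is sketched below: the tap coordinates are mapped to a grid cell, and the returned rectangle is the area to be enlarged. The grid dimensions are free parameters; nothing here is specific to the claimed method.

```kotlin
import android.graphics.Rect

// Returns the rectangle of the grid cell under the touch point, or null if the tap is outside the image.
fun gridCellAt(imageBounds: Rect, rows: Int, cols: Int, touchX: Float, touchY: Float): Rect? {
    if (!imageBounds.contains(touchX.toInt(), touchY.toInt())) return null
    val cellW = imageBounds.width() / cols
    val cellH = imageBounds.height() / rows
    if (cellW == 0 || cellH == 0) return null
    val col = ((touchX - imageBounds.left) / cellW).toInt().coerceAtMost(cols - 1)
    val row = ((touchY - imageBounds.top) / cellH).toInt().coerceAtMost(rows - 1)
    return Rect(
        imageBounds.left + col * cellW,
        imageBounds.top + row * cellH,
        imageBounds.left + (col + 1) * cellW,
        imageBounds.top + (row + 1) * cellH
    )
}
```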
Mode three
After step 102, the image display method may further include: the electronic equipment receives a sliding operation of a user on the first image, responds to the sliding operation, determines a target image area circled by a sliding track of the sliding operation, and enlarges and displays an image in the target image area.
In the embodiment of the application, a user can manually circle the makeup area needing to be displayed in an enlarged mode. As shown in fig. 7, the user circles the eye makeup area desired to be enlarged by performing a sliding operation in the first display area, and the electronic device determines an image area circled by the user according to a sliding trajectory 204 of the sliding operation and then enlarges and displays an image in the image area.
Optionally, in the foregoing three modes, the image area to be enlarged may first be cropped from the first image and then displayed at an enlarged size. This processing may be performed in real time, that is, the content of the enlarged image changes as the first image changes.
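For mode three, one simple (assumed) way to turn the sliding trajectory into a target image area is to take the axis-aligned bounding box of the touch points; the resulting rectangle can then be cropped from each new frame and enlarged as described above. The sketch below shows only this geometric step.

```kotlin
import android.graphics.PointF
import android.graphics.Rect

// Computes the bounding box of the slide trajectory drawn by the user on the first image.
fun boundingBoxOf(trajectory: List<PointF>): Rect? {
    if (trajectory.size < 3) return null                     // too short to enclose an area
    val left = trajectory.minOf { it.x }.toInt()
    val top = trajectory.minOf { it.y }.toInt()
    val right = trajectory.maxOf { it.x }.toInt()
    val bottom = trajectory.maxOf { it.y }.toInt()
    return if (right > left && bottom > top) Rect(left, top, right, bottom) else null
}
```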
As an alternative embodiment, after step 102, the image display method may further include:
receiving a fifth input; in response to the fifth input, a similarity between the user makeup in the first image and the reference makeup in the target reference makeup image is determined, and similarity information corresponding to the similarity is displayed.
In the embodiment of the application, through the fifth input the user can have the electronic device calculate the similarity between the user makeup in the first image and the reference makeup in the target reference makeup image and display the corresponding similarity information, making it convenient for the user to decide whether to continue applying makeup.
The fifth input described herein may include, but is not limited to, at least one of: touch input, voice input, gesture input over air, etc.
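The similarity calculation itself is not specified in this embodiment. As a purely illustrative placeholder, the Kotlin sketch below compares coarse colour statistics of the two images; it should not be read as the actual similarity measure, which would normally compare aligned face regions.

```kotlin
import android.graphics.Bitmap
import android.graphics.Color
import kotlin.math.abs

// Returns a rough similarity in [0, 1] based on mean RGB colour, sampling every `stride` pixels.
fun colorSimilarity(userMakeup: Bitmap, referenceMakeup: Bitmap, stride: Int = 16): Float {
    fun meanRgb(bmp: Bitmap): Triple<Float, Float, Float> {
        var r = 0L; var g = 0L; var b = 0L; var n = 0L
        for (y in 0 until bmp.height step stride) {
            for (x in 0 until bmp.width step stride) {
                val p = bmp.getPixel(x, y)
                r += Color.red(p); g += Color.green(p); b += Color.blue(p); n++
            }
        }
        return Triple(r.toFloat() / n, g.toFloat() / n, b.toFloat() / n)
    }
    val (ur, ug, ub) = meanRgb(userMakeup)
    val (rr, rg, rb) = meanRgb(referenceMakeup)
    val distance = (abs(ur - rr) + abs(ug - rg) + abs(ub - rb)) / (3f * 255f)
    return 1f - distance                                     // 1.0 = identical mean colour
}
```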
As an alternative example, after determining that makeup is complete, the user may take a picture of his or her own makeup to serve as a reference makeup image for later makeup sessions.
Finally, the technical solutions provided in the embodiments of the present application are collectively illustrated by an example.
As shown in fig. 8, the example flow may include:
step 801: the electronic device enters a self-timer camera mode.
The camera (hereinafter referred to as a front camera) started in the front-facing self-timer mode is the target camera described above.
Step 802: the electronic equipment receives a photographing instruction input by a user and obtains a first self-photographing image of the user.
The first self-timer image is the image before makeup by the user, that is, the self-timer image before makeup is performed this time.
Step 803: the electronic equipment enters a makeup mode when detecting that the folding angle of the folding screen is within a preset angle range.
Wherein the folding screen forms a first sub-screen and a second sub-screen opposite to each other in the folded state.
Step 804: The electronic device intelligently recommends a reference makeup image and a makeup tutorial according to the first selfie image.
Step 805: under the condition that the user selects the reference makeup picture, the electronic equipment controls the first sub-screen to display the image collected by the front camera in real time, and the second sub-screen displays the reference makeup picture.
The image displayed on the first sub-screen corresponds to the first image, and the reference makeup drawing displayed on the second sub-screen corresponds to the target reference makeup image.
Step 806: under the condition that a user selects a makeup course, the electronic equipment controls the first sub-screen to display the image collected by the front camera in real time, and the second sub-screen displays the makeup course.
The makeup tutorials displayed on the second sub-screen correspond to the target makeup tutorial images.
It should be noted that step 805 and step 806 can be switched between; that is, in the makeup mode, the reference makeup image displayed on the second sub-screen can be switched to the makeup tutorial image, or the makeup tutorial image displayed on the second sub-screen can be switched to the reference makeup image.
Step 807: through a preset mode, the electronic equipment enlarges and displays a local image area of the first image.
The preset manner described herein may be the first manner, the second manner, or the third manner described above.
Step 808: After detecting that the user has finished applying makeup, the electronic device compares the user's current makeup with the reference makeup and displays similarity information between them.
Wherein, the user can input the instruction of completing the makeup to the electronic equipment through voice control, gesture control and the like.
Wherein the reference makeup described herein corresponds to the reference makeup in the target reference makeup image described above.
Step 809: The electronic device receives a photographing instruction input by the user and obtains a second selfie image of the user.
The second selfie image is the selfie taken after the user has finished applying makeup.
The above is a description of the image display method provided in the embodiments of the present application.
To sum up, in the embodiment of the application, the first display area is equivalent to a cosmetic mirror: it presents the user image in real time, so that the user can accurately locate the makeup position and know the makeup effect at any moment, which makes applying makeup more convenient. The makeup comparison image (such as a reference makeup image or the user's pre-makeup image) or the makeup tutorial image displayed in the second display area can serve as a makeup reference, assisting the user in assessing the makeup effect, determining the desired effect, and reducing the number of repeated adjustments during makeup, which helps shorten the makeup time and improve makeup efficiency. In short, the technical solutions provided by the embodiments of the present application can assist the user in applying makeup and reduce its difficulty, making makeup more convenient.
In the image display method provided by the embodiments of the application, the execution subject may be an image display apparatus. The embodiments of the present application describe the image display apparatus by taking, as an example, the case where the image display apparatus executes the image display method.
Fig. 9 is a schematic block diagram of an image display device provided in an embodiment of the present application, which is applied to an electronic apparatus.
As shown in fig. 9, the image display apparatus may include:
a first receiving module 901, configured to receive a first input.
A first display module 902 is configured to display a first image in a first display area of an electronic device and a second image in a second display area of the electronic device in response to the first input.
Wherein the first image comprises a user image acquired by a target camera in real time, and the second image comprises at least one of the following images: a target reference makeup image, a target makeup tutorial image, a user pre-makeup image.
The first display area and the second display area are respectively corresponding display areas of two opposite sub-screens formed when the folding screen of the electronic equipment is in a folding state; or the first display area and the second display area are two split-screen display areas of the electronic device in a split-screen state respectively.
Optionally, in a case that the first display area and the second display area are respectively display areas corresponding to two sub-screens formed when the foldable screen is in a folded state, the first receiving module 901 may include:
a receiving unit for receiving the first sub-input and the second sub-input.
The first display module 902 may include:
a first display unit, configured to display the first image in the first display area and the second image in the second display area in response to the first sub-input and the second sub-input, in a case where the first sub-input and the second sub-input satisfy a preset condition.
Wherein the preset conditions include: and starting the target camera through the first sub-input, and adjusting the folding angle of the folding screen to be within a preset angle range through the second sub-input.
Optionally, in a case where the second image includes at least one of the target reference makeup image and the target makeup course image, the apparatus may further include:
the first acquisition module is used for acquiring images before makeup of the user.
And the second acquisition module is used for acquiring the appearance characteristic information of the user according to the pre-makeup image of the user.
And the first determining module is used for acquiring a first target image matched with the user appearance characteristic information in a target application.
Wherein the first target image comprises at least one of: the target reference makeup image, the target makeup course image.
Optionally, the first determining module may include:
and the acquisition unit is used for acquiring a third image matched with the user appearance feature information in the target application.
Wherein the third image comprises at least one of: a reference makeup image and a makeup tutorial image.
A second display unit for displaying the third image in the second display area.
A determination unit for receiving a second input to the third image, and determining the first target image in response to the second input.
Optionally, the apparatus may further include:
and the second determining module is used for determining the target makeup area of the user according to the second target image under the condition that the first image comprises the second target image.
Wherein the second target image comprises at least one of: a makeup motion image and a makeup product image.
And the second display module is used for displaying the target makeup area in an enlarged mode.
Optionally, the apparatus may further include:
and the third display module is used for displaying a preset grid mark on the first image.
The preset grid mark comprises at least two grid units.
And the fourth display module is used for receiving a third input to the target grid unit and responding to the third input to enlarge and display the image at the target grid position.
Wherein the target grid cell is one of the at least two grid cells.
Optionally, the apparatus may further include:
and the second receiving module is used for receiving a fourth input.
And the processing module is used for responding to a fourth input, determining the similarity between the user makeup in the first image and the reference makeup in the target reference makeup image and displaying the similarity information corresponding to the similarity.
To sum up, in the embodiment of the application, the first display area is equivalent to a cosmetic mirror that presents the user image in real time, making it convenient for the user to accurately locate the makeup position and learn the makeup effect in real time, so that applying makeup is more convenient. The makeup comparison image (such as a reference makeup image or the user's pre-makeup image) or the makeup tutorial image displayed in the second display area can serve as a makeup reference, assisting the user in assessing the makeup effect, determining the desired effect, and reducing the number of repeated adjustments during makeup, which helps shorten the makeup time and improve makeup efficiency. In short, the technical solutions provided by the embodiments of the present application can assist the user in applying makeup and reduce its difficulty, making makeup more convenient.
The image display device in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The image display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The image display device provided in the embodiment of the present application can implement each process implemented in the embodiment of the image display method shown in fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 10, an embodiment of the present application further provides an electronic device 1000, including: the processor 1001 and the memory 1002, and the memory 1002 stores a program or an instruction that can be executed on the processor 1001, and when the program or the instruction is executed by the processor 1001, the program or the instruction realizes each step of the above-mentioned embodiment of the image display method, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be noted that the electronic device 1000 in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, and processor 1110.
Those skilled in the art will appreciate that the electronic device 1100 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
Processor 1110 may be configured to, among other things: in a case where the user input unit 1107 receives a first input, the display unit 1106 is controlled to display a first image in a first display area and a second image in a second display area in response to the first input. Wherein the first image comprises a user image acquired by a target camera in real time, and the second image comprises at least one of the following: a target reference makeup image, a target makeup tutorial image, a user pre-makeup image. The first display area and the second display area are respectively corresponding display areas of two opposite sub-screens formed when the folding screen of the electronic equipment is in a folding state; or the first display area and the second display area are two split-screen display areas of the electronic device in a split-screen state respectively.
Optionally, processor 1110 may also be configured to: in a case where the user input unit 1107 receives a first sub input and a second sub input, in response to the first sub input and the second sub input satisfying a preset condition, the first image is displayed in the first display area and the second image is displayed in the second display area through the display unit 1106.
Wherein the preset conditions include: and starting the target camera through the first sub-input, and adjusting the folding angle of the folding screen to be within a preset angle range through the second sub-input.
Optionally, processor 1110 may also be configured to: acquiring the pre-makeup image of the user; acquiring user appearance feature information according to the user pre-makeup image; acquiring a first target image matched with the user appearance feature information in a target application; wherein the first target image comprises at least one of: the target reference makeup image, the target makeup tutorial image.
Optionally, the processor 1110 may be further configured to: acquire, in the target application, a third image matched with the user appearance feature information; display the third image in the second display area through the display unit 1106; and, in a case where the user input unit 1107 receives a second input to the third image, determine, in response to the second input, the image selected by the second input in the third image as the first target image. Wherein the third image comprises at least one of: a reference makeup image and a makeup tutorial image.
Optionally, processor 1110 may also be configured to: determining a target makeup area of a user according to a second target image when the first image includes the second target image; and displays the target makeup area in enlargement through the display unit 1106. Wherein the second target image comprises at least one of: a makeup motion image and a makeup product image.
Optionally, the processor 1110 may be further configured to: displaying a preset grid identifier on the first image through a display unit 1106; and in a case where the user input unit 1107 receives a third input to the target grid cell, the image at the target grid position is displayed enlarged by the display unit 1106. The preset grid mark comprises at least two grid units, and the target grid unit is one of the at least two grid units.
Optionally, processor 1110 may also be configured to: in a case where the user input unit 1107 receives a fourth input, in response to the fourth input, the similarity between the user makeup in the first image and the reference makeup in the target reference makeup image is determined, and similarity information corresponding to the similarity is displayed through the display unit 1106.
In the embodiment of the application, the first display area is equivalent to a cosmetic mirror that presents the user image in real time, so that the user can accurately locate the makeup position and know the makeup effect in real time, which makes applying makeup more convenient. The makeup comparison image (such as a reference makeup image or the user's pre-makeup image) or the makeup tutorial image displayed in the second display area can serve as a makeup reference, assisting the user in assessing the makeup effect, determining the desired effect, and reducing the number of repeated adjustments during makeup, which helps shorten the makeup time and improve makeup efficiency. In short, the technical solutions provided by the embodiments of the present application can assist the user in applying makeup and reduce its difficulty, making makeup more convenient.
It should be understood that in the embodiment of the present application, the input Unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042, and the Graphics Processing Unit 11041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1106 may include a display panel 11061, and the display panel 11061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1107 includes at least one of a touch panel 11071 and other input devices 11072. A touch panel 11071, also called a touch screen. The touch panel 11071 may include two portions of a touch detection device and a touch controller. Other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function and an image playing function), and the like. Further, the memory 1109 may include volatile memory or nonvolatile memory, or the memory 1109 may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), a Synchronous Dynamic Random Access Memory (SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), an Enhanced Synchronous SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1109 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1110 may include one or more processing units; optionally, the processor 1110 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned embodiment of the image display method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned embodiment of the image display method, and can achieve the same technical effect, and is not described here again to avoid repetition.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing embodiment of the image display method, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM, RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image display method, characterized in that the method comprises:
receiving a first input;
in response to the first input, displaying a first image on a first display area of an electronic device and displaying a second image on a second display area of the electronic device;
wherein the first image comprises a user image acquired by a target camera in real time, and the second image comprises at least one of the following images: a target reference makeup image, a target makeup tutorial image, a user pre-makeup image;
the first display area and the second display area are display areas respectively corresponding to two opposite sub-screens formed when the folding screen of the electronic device is in a folded state; or the first display area and the second display area are respectively two split-screen display areas of the electronic device in a split-screen state.
2. The image display method according to claim 1, wherein, in a case where the first display area and the second display area are display areas respectively corresponding to two opposite sub-screens formed when the folding screen of the electronic device is in a folded state, the receiving a first input comprises:
receiving a first sub-input and a second sub-input;
the displaying a first image in the first display area of the electronic device and displaying a second image in the second display area of the electronic device in response to the first input comprises:
in response to the first sub-input and the second sub-input, displaying the first image in the first display area and the second image in the second display area in a case where the first sub-input and the second sub-input satisfy a preset condition;
wherein the preset condition includes: starting the target camera through the first sub-input, and adjusting the folding angle of the folding screen to be within a preset angle range through the second sub-input.
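A sketch of the preset condition in claim 2, assuming the device exposes its current fold angle; the 70° to 150° range is an arbitrary stand-in for the unspecified preset angle range.

```kotlin
// Illustrative check for claim 2's preset condition; the angle range is assumed.
data class SubInputState(val cameraStarted: Boolean, val foldAngleDegrees: Float)

val PRESET_ANGLE_RANGE = 70f..150f  // stand-in for the "preset angle range"

fun presetConditionMet(state: SubInputState): Boolean =
    state.cameraStarted && state.foldAngleDegrees in PRESET_ANGLE_RANGE

// Display the first and second images only once both sub-inputs have taken effect:
// the target camera is started and the folding screen sits within the range.
fun onSubInputs(state: SubInputState, displayBothImages: () -> Unit) {
    if (presetConditionMet(state)) displayBothImages()
}
```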
3. The image display method according to claim 1, wherein, in a case where the second image includes at least one of the target reference makeup image and the target makeup tutorial image, before the second image is displayed in the second display area, the method further comprises:
acquiring a pre-makeup image of a user;
acquiring user appearance feature information according to the user pre-makeup image;
acquiring, in a target application, a first target image that matches the user appearance feature information; wherein the first target image comprises at least one of: the target reference makeup image, the target makeup tutorial image.
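One way the pipeline of claim 3 (pre-makeup image, appearance features, matched image) could be organized is sketched below; the feature extractor and the target application's catalogue are assumed interfaces, since the claim does not fix how either works.

```kotlin
// Sketch of claim 3; the extractor and catalogue are assumed, unspecified components.
class Image(val label: String)

data class AppearanceFeatures(val faceShape: String, val skinTone: String)

interface FeatureExtractor {
    fun extract(preMakeupImage: Image): AppearanceFeatures
}

interface MakeupCatalogue {
    // Returns a reference makeup image and/or a makeup tutorial image.
    fun bestMatch(features: AppearanceFeatures): Image
}

fun acquireFirstTargetImage(
    preMakeupImage: Image,
    extractor: FeatureExtractor,
    catalogue: MakeupCatalogue
): Image {
    val features = extractor.extract(preMakeupImage)  // user appearance feature information
    return catalogue.bestMatch(features)              // matched in the target application
}
```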
4. The image display method according to claim 3, wherein the acquiring, in the target application, the first target image that matches the user appearance feature information includes:
in the target application, acquiring a third image that matches the user appearance feature information; wherein the third image comprises at least one of: a reference makeup image, a makeup tutorial image;
displaying the third image in the second display area;
receiving a second input to the third image, and determining the first target image in response to the second input.
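Claim 4 refines the matching step into a user-confirmed choice; a sketch of that flow, with the candidate list and the input callbacks as assumptions:

```kotlin
// Sketch of claim 4's selection flow; the callbacks are assumed placeholders.
class Image(val label: String)

fun selectFirstTargetImage(
    matchedCandidates: List<Image>,          // the matched "third image(s)"
    showInSecondArea: (List<Image>) -> Unit, // display the candidates in the second display area
    awaitSecondInput: () -> Int              // index chosen by the user's second input
): Image {
    showInSecondArea(matchedCandidates)
    return matchedCandidates[awaitSecondInput()]
}
```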
5. The image display method according to claim 1, wherein after the first image is displayed in the first display area, the method further comprises:
in a case where the first image includes a second target image, determining a target makeup area of a user according to the second target image; wherein the second target image comprises at least one of: a makeup motion image, a makeup article image;
and magnifying and displaying the target makeup area.
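Claim 5 can be read as a detect-then-zoom step over the live frame; in the sketch below the detector is an assumed component, since the claim only requires that a makeup motion or makeup article visible in the first image determines the area to enlarge.

```kotlin
// Sketch of claim 5; the detector is an assumed, unspecified component.
class Image(val label: String)
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)

interface MakeupDetector {
    // Returns the region of a detected makeup motion or makeup article, if any.
    fun detect(frame: Image): Region?
}

fun magnifyTargetMakeupArea(frame: Image, detector: MakeupDetector, zoomTo: (Region) -> Unit) {
    // Enlarge and display the target makeup area only when something is detected.
    detector.detect(frame)?.let { region -> zoomTo(region) }
}
```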
6. The image display method according to claim 1, wherein after the first image is displayed in the first display area, the method further comprises:
displaying a preset grid mark on the first image; wherein the preset grid mark comprises at least two grid cells;
receiving a third input to a target grid cell, and in response to the third input, magnifying and displaying an image at the target grid cell; wherein the target grid cell is one of the at least two grid cells.
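The grid interaction of claim 6 reduces to simple integer arithmetic over the frame; the 3 x 3 grid below is an assumption, as the claim only requires at least two cells.

```kotlin
// Sketch of claim 6's grid overlay; the grid size is an assumption (>= 2 cells required).
data class GridCell(val row: Int, val col: Int)
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)

class GridOverlay(private val rows: Int = 3, private val cols: Int = 3) {
    // Bounds of one grid cell within a frame of the given size (edge remainders ignored).
    fun cellBounds(cell: GridCell, frameWidth: Int, frameHeight: Int): Region {
        val w = frameWidth / cols
        val h = frameHeight / rows
        return Region(cell.col * w, cell.row * h, w, h)
    }
}

// The third input selects a target cell; the image within that cell is magnified.
fun onThirdInput(
    cell: GridCell,
    overlay: GridOverlay,
    frameWidth: Int,
    frameHeight: Int,
    zoomTo: (Region) -> Unit
) {
    zoomTo(overlay.cellBounds(cell, frameWidth, frameHeight))
}
```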
7. An image display apparatus, characterized in that the apparatus comprises:
the first receiving module is used for receiving a first input;
the first display module is used for responding to the first input, displaying a first image in a first display area of the electronic equipment and displaying a second image in a second display area of the electronic equipment;
wherein the first image comprises a user image acquired by a target camera in real time, and the second image comprises at least one of the following images: a target reference makeup image, a target makeup tutorial image, a user pre-makeup image;
the first display area and the second display area are respectively corresponding display areas of two opposite sub-screens formed when the folding screen of the electronic equipment is in a folding state; or the first display area and the second display area are two split-screen display areas of the electronic device in a split-screen state respectively.
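Claims 7 to 12 restate the method as an apparatus built from modules. Purely as an illustration of that decomposition, and not as the claimed implementation, the receiving and display modules might compose as below; every interface and name is an assumption.

```kotlin
// Sketch of the module decomposition in claim 7; all names are assumptions.
class Image(val label: String)

fun interface FirstReceivingModule {
    fun onFirstInput(handler: () -> Unit)               // delivers the first input to a handler
}

fun interface FirstDisplayModule {
    fun display(firstImage: Image, secondImage: Image)  // first and second display areas
}

class ImageDisplayApparatus(
    private val receiving: FirstReceivingModule,
    private val displaying: FirstDisplayModule,
    private val liveUserImage: () -> Image,   // from the target camera, in real time
    private val companionImage: () -> Image   // reference / tutorial / pre-makeup image
) {
    fun start() = receiving.onFirstInput {
        displaying.display(liveUserImage(), companionImage())
    }
}
```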
8. The image display apparatus according to claim 7, wherein, in a case where the first display area and the second display area are display areas respectively corresponding to two sub-screens formed when the folding screen is in a folded state, the first receiving module includes:
a receiving unit for receiving a first sub-input and a second sub-input;
the first display module includes:
a first display unit for displaying, in response to the first sub-input and the second sub-input, the first image in the first display area and the second image in the second display area in a case where the first sub-input and the second sub-input satisfy a preset condition;
wherein the preset condition includes: starting the target camera through the first sub-input, and adjusting the folding angle of the folding screen to be within a preset angle range through the second sub-input.
9. The image display apparatus according to claim 7, wherein, in a case where the second image includes at least one of the target reference makeup image and the target makeup tutorial image, the apparatus further comprises:
the first acquisition module is used for acquiring a pre-makeup image of a user;
the second acquisition module is used for acquiring the appearance characteristic information of the user according to the pre-makeup image of the user;
the first determining module is used for acquiring, in a target application, a first target image that matches the user appearance feature information; wherein the first target image comprises at least one of: the target reference makeup image, the target makeup tutorial image.
10. The image display device according to claim 9, wherein the first determination module comprises:
an obtaining unit, configured to obtain, in the target application, a third image that matches the user appearance feature information; wherein the third image comprises at least one of: a reference makeup image, a makeup tutorial image;
a second display unit configured to display the third image in the second display area;
a determining unit for receiving a second input to the third image, and determining the first target image in response to the second input.
11. The image display apparatus according to claim 7, characterized in that the apparatus further comprises:
the second determination module is used for determining a target makeup area of the user according to a second target image under the condition that the first image comprises the second target image; wherein the second target image comprises at least one of: a makeup motion image, a makeup article image;
and the second display module is used for displaying the target makeup area in an enlarged mode.
12. The image display apparatus according to claim 7, characterized in that the apparatus further comprises:
the third display module is used for displaying a preset grid mark on the first image; wherein the preset grid mark comprises at least two grid cells;
the fourth display module is used for receiving a third input to a target grid cell and, in response to the third input, magnifying and displaying an image at the target grid cell; wherein the target grid cell is one of the at least two grid cells.
CN202211337664.8A (publication CN115914442A, en) · Priority date: 2022-10-28 · Filing date: 2022-10-28 · Image display method and device · Status: Pending

Priority Applications (1)

Application Number: CN202211337664.8A (publication CN115914442A, en) · Priority Date: 2022-10-28 · Filing Date: 2022-10-28 · Title: Image display method and device

Applications Claiming Priority (1)

Application Number: CN202211337664.8A (publication CN115914442A, en) · Priority Date: 2022-10-28 · Filing Date: 2022-10-28 · Title: Image display method and device

Publications (1)

Publication Number: CN115914442A (en) · Publication Date: 2023-04-04

Family

ID: 86485194

Family Applications (1)

Application Number: CN202211337664.8A (publication CN115914442A, en; status: pending) · Title: Image display method and device · Priority Date: 2022-10-28 · Filing Date: 2022-10-28

Country Status (1)

Country: CN · Document: CN115914442A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination