CN111047384A - Information processing method of intelligent device and intelligent device - Google Patents


Info

Publication number
CN111047384A
CN111047384A (application number CN201811201107.7A)
Authority
CN
China
Prior art keywords
user
cosmetic
information
image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811201107.7A
Other languages
Chinese (zh)
Inventor
孙宇新
汤跃忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
iFlytek Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical iFlytek Co Ltd
Priority to CN201811201107.7A
Publication of CN111047384A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0631: Item recommendations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement

Landscapes

  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an information processing method of an intelligent device. The intelligent device includes a display device, an image acquisition device, and a voice acquisition device. The method includes: acquiring a first user voice instruction through the voice acquisition device; acquiring a user image through the image acquisition device when the first user voice instruction meets a preset condition; processing the user image based on an adjustment parameter corresponding to at least one cosmetic; and displaying the processed user image through the display device.

Description

Information processing method of intelligent device and intelligent device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an information processing method for an intelligent device and an intelligent device.
Background
With the continuous improvement of living standards, consumers' purchasing power for cosmetics also keeps growing. When purchasing cosmetics, a consumer often needs to try out several products in order to find ones that suit her. However, if the consumer actually applies each cosmetic to try it, repeatedly removing the makeup is inconvenient. It is therefore desirable to provide a convenient virtual makeup-trial method to solve the above problems.
In the process of implementing the disclosed concept, the inventors found that the prior art has at least the following problem: existing devices for virtual makeup trial are generally fixed in position or complex to operate, resulting in a poor user experience.
Disclosure of Invention
In view of the above, the present disclosure provides an optimized information processing method for an intelligent device and an intelligent device.
One aspect of the present disclosure provides an information processing method of an intelligent device, the intelligent device including a display apparatus, an image acquisition apparatus, and a voice acquisition apparatus, the method including: acquiring a first user voice instruction through the voice acquisition device, acquiring a user image through the image acquisition device under the condition that the first user voice instruction meets a preset condition, and processing the user image based on an adjustment parameter corresponding to at least one cosmetic; and displaying the processed user image through the display device.
According to an embodiment of the present disclosure, the method further includes: and sharing the processed user image with a specific user in real time.
According to an embodiment of the present disclosure, the method further includes: obtaining a second user voice instruction, and determining the at least one cosmetic based on the second user voice instruction.
According to an embodiment of the present disclosure, the method further includes: obtaining current scene information, wherein the current scene information comprises environment information where a user is located currently, and determining the at least one cosmetic based on the current scene information.
According to an embodiment of the present disclosure, the method further includes: analyzing the user image, determining a user characteristic of the user, and determining the at least one cosmetic product based on the user characteristic.
According to an embodiment of the present disclosure, the method further includes: the method includes the steps of obtaining browsing information of a user, wherein the browsing information comprises information related to cosmetics historically browsed or being browsed by the user, and determining at least one cosmetic based on the browsing information.
According to an embodiment of the present disclosure, the method further includes: obtaining recommendation information, wherein the recommendation information includes cosmetic information recommended by more users than a preset threshold number and/or cosmetic information recommended by a specific user, and determining the at least one cosmetic based on the recommendation information.
According to an embodiment of the present disclosure, the processing of the user image based on the adjustment parameter corresponding to the at least one cosmetic includes: locating key parts of the user, where the key parts include at least one of the eyebrows, eyes, nose bridge, nose tip, cheekbones, mouth corners, philtrum, cheeks, and chin; determining the user key part corresponding to the at least one cosmetic; and adjusting that key part based on the adjustment parameter corresponding to the at least one cosmetic.
According to an embodiment of the present disclosure, the method further includes: receiving feedback information of the specific user, and re-determining the at least one cosmetic based on the feedback information of the user.
Another aspect of the disclosure provides a smart device comprising a microphone, a camera, a processor, and a display screen. The microphone is used for acquiring a first user voice instruction. The camera is used for acquiring a user image under the condition that the first user voice instruction meets a preset condition. The processor is used for processing the user image based on the adjustment parameter corresponding to at least one cosmetic. The display screen is used for displaying the processed user image.
According to an embodiment of the present disclosure, the above-mentioned smart device further includes: and the transmission device is used for sharing the processed user image with a specific user in real time.
According to an embodiment of the present disclosure, the microphone is further configured to obtain a second user voice instruction. The processor is further configured to determine the at least one cosmetic product based on the second user voice instruction.
According to the embodiment of the disclosure, the camera is further configured to acquire current scene information, where the current scene information includes environment information where the user is currently located. The processor is further configured to determine the at least one cosmetic based on the current context information.
According to an embodiment of the present disclosure, the processor is further configured to: analyzing the user image, determining a user characteristic of the user, and determining the at least one cosmetic product based on the user characteristic.
According to an embodiment of the present disclosure, the processor is further configured to: the method includes the steps of obtaining browsing information of a user, wherein the browsing information comprises information related to cosmetics historically browsed or being browsed by the user, and determining at least one cosmetic based on the browsing information.
According to an embodiment of the present disclosure, the processor is further configured to: obtain recommendation information, wherein the recommendation information includes cosmetic information recommended by more users than a preset threshold number and/or cosmetic information recommended by a specific user, and determine the at least one cosmetic based on the recommendation information.
According to an embodiment of the present disclosure, the processing of the user image based on the adjustment parameter corresponding to the at least one cosmetic includes: locating key parts of the user, where the key parts include at least one of the eyebrows, eyes, nose bridge, nose tip, cheekbones, mouth corners, philtrum, cheeks, and chin; determining the user key part corresponding to the at least one cosmetic; and adjusting that key part based on the adjustment parameter corresponding to the at least one cosmetic.
According to an embodiment of the present disclosure, the transmitting device is further configured to receive feedback information of the specific user. The processor is further configured to re-determine the at least one cosmetic based on the feedback information of the user.
Another aspect of the present disclosure provides an information processing system of an intelligent device, including: one or more memories storing executable instructions and one or more processors executing the executable instructions to implement the methods described above.
Another aspect of the disclosure provides a computer readable medium having stored thereon executable instructions that when executed by a processor implement the method as described above.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiments of the present disclosure, the problems that virtual makeup-trial equipment in the prior art is usually fixed in position or complex to operate, leading to a poor user experience, can be at least partially solved, thereby achieving the technical effects of simple operation, convenient use, and an improved user experience.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of an information processing method and an intelligent device according to an embodiment of the present disclosure;
fig. 2 schematically shows a flow chart of an information processing method of a smart device according to an embodiment of the present disclosure;
fig. 3 schematically shows a flowchart of an information processing method of a smart device according to another embodiment of the present disclosure;
fig. 4A and 4B schematically illustrate a schematic diagram of a smart device according to an embodiment of the disclosure; and
FIG. 5 schematically shows a block diagram of an information processing system of a smart device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", "B", or "A and B".
The embodiment of the disclosure provides an information processing method of an intelligent device, wherein the intelligent device comprises a display device, an image acquisition device and a voice acquisition device. The method comprises the following steps: the method comprises the steps of obtaining a first user voice instruction through a voice obtaining device, obtaining a user image through an image obtaining device under the condition that the first user voice instruction meets a preset condition, processing the user image based on an adjusting parameter corresponding to at least one cosmetic, and displaying the processed user image through a display device.
Fig. 1 schematically illustrates an application scenario 100 of an information processing method and an intelligent device according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include a user 110 and a smart device 120.
According to the embodiment of the present disclosure, the smart device 120 may include a display device 121, an image acquisition device 122, and a voice acquisition device 123.
In the embodiment of the present disclosure, the voice acquisition device 123 may acquire a user voice instruction; when the instruction meets a preset condition, the image acquisition device 122 is turned on and an image of the user is captured through it. The smart device 120 may process the user image itself, or may send the image to another device, which processes it and returns the processed image to the smart device 120. The processed image is then displayed by the display device 121 of the smart device 120.
For example, when the user 110 shops online and wants to try the effect of certain cosmetics, the smart device 120 may be controlled by voice to enter a makeup-trial mode. The smart device 120 then starts the image acquisition device 122 to capture the user image, adjusts that image according to the effect of the cosmetics specified by or recommended to the user, and displays the image with the makeup effect through the display device 121, so that the user can view the virtual makeup-trial effect. This facilitates use and improves the user experience.
It should be noted that fig. 1 is only an example of an application scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
Fig. 2 schematically shows a flowchart of an information processing method of an intelligent device according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S201 to S204. According to an embodiment of the present disclosure, a smart device may include a display device, an image acquisition device, and a voice acquisition device. The display device may be a display screen, the image capturing device may be a camera, and the voice capturing device may be a microphone.
In operation S201, a first user voice instruction is acquired through a voice acquisition device.
In operation S202, acquiring a user image through an image acquisition device when the first user voice instruction satisfies a preset condition;
in operation S203, the user image is processed based on the adjustment parameter corresponding to the at least one cosmetic.
In operation S204, the processed user image is displayed through the display device.
The smart device in the embodiments of the present disclosure has a voice acquisition apparatus and may be, for example, a smart speaker. The user can control the smart device through voice instructions; for example, the user can instruct the device to enter a makeup-trial mode by voice.
According to the embodiment of the disclosure, the image acquisition device can be started under the condition that the first user voice instruction acquired by the voice acquisition device meets the preset condition, and the user image is acquired through the image acquisition device. The preset condition may be, for example, "start trying to make up", "enter trying to make up mode", etc., and the disclosure does not limit this, and a person skilled in the art may set the preset condition according to actual situations.
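The trigger flow described above can be sketched as follows. This is a hypothetical illustration only: the phrase list and function names are assumptions, not from the patent, and a real device would run speech recognition and intent matching rather than exact string comparison.

```python
# Illustrative sketch of the voice-trigger flow: the camera is started only
# when the recognized instruction satisfies the preset condition.
PRESET_PHRASES = {"start trying makeup", "enter makeup trial mode"}  # assumed examples

def matches_preset(instruction: str) -> bool:
    """Check whether a recognized voice instruction meets the preset condition."""
    return instruction.strip().lower() in PRESET_PHRASES

def handle_instruction(instruction: str, capture_image):
    """Capture a user image only when the instruction satisfies the condition."""
    if matches_preset(instruction):
        return capture_image()  # turn on the image acquisition device
    return None  # otherwise stay idle
```

In practice `capture_image` would wrap the camera driver; here it is passed in so the control flow can be exercised without hardware.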
In the embodiment of the disclosure, the user image may be processed based on the adjustment parameter corresponding to at least one cosmetic product to generate an image with a makeup effect. The at least one cosmetic may be a cosmetic for which the user specifies a makeup test, or a cosmetic for which a makeup test is recommended.
Specifically, the at least one cosmetic may be determined based on the second user voice instruction by obtaining the second user voice instruction.
For example, the user may specify the cosmetic by voice instruction. The user may say "try a bean-paste-colored lipstick" (by way of example only); it may then be determined that the at least one cosmetic includes the bean-paste-colored lipstick, and the user image is adjusted based on the adjustment parameter corresponding to that lipstick to obtain an image showing the user wearing it. In the embodiment of the disclosure, an eye shadow that matches the bean-paste-colored lipstick may also be recommended, and the effects of both the lipstick and the eye shadow superimposed on the user image, giving the user a better experience.
The embodiment of the disclosure can also acquire current scene information, the current scene information includes the current environment information of the user, and at least one cosmetic is determined based on the current scene information.
For example, the smart device may acquire the ambient environment information through the image acquisition device and determine an appropriate cosmetic from it. If the surroundings are bright, a vivid-colored cosmetic may be chosen; if the surroundings are dim, a subdued-colored cosmetic may be chosen. Alternatively, the appropriate cosmetic may be determined from information about other users in the current scene: if their attire is relatively formal, a relatively formal cosmetic may be determined, and if it is relatively casual, a relatively casual cosmetic may be determined.
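One way the brightness-based branch above could work is sketched below, under stated assumptions: the luminance threshold and palette names are invented for illustration, and pixels stand in for a camera frame.

```python
# Illustrative sketch: estimate ambient brightness from an RGB frame and
# choose a cosmetic palette accordingly. Threshold value is an assumption.
def mean_luminance(pixels):
    """Average Rec. 601 luma over an iterable of (r, g, b) tuples in 0..255."""
    pixels = list(pixels)
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return total / len(pixels)

def palette_for_scene(pixels, threshold=128.0):
    """Bright surroundings -> vivid shades; dim surroundings -> subdued shades."""
    return "vivid" if mean_luminance(pixels) >= threshold else "subdued"
```

A production system would sample frames from the image acquisition device and likely use a learned scene classifier rather than a single luma threshold.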
The disclosed embodiments may also analyze the user image, determine a user characteristic of the user, and determine at least one cosmetic based on the user characteristic.
For example, the user's face or skin tone may be analyzed to determine the appropriate cosmetic product for the user. For example, a suitable eyebrow pencil or highlight may be determined based on the user's face, a suitable blush may be determined based on the user's skin tone, and so forth. The embodiment of the disclosure can recommend the appropriate cosmetics for the user by analyzing the facial features of the user.
The embodiment of the disclosure can also acquire browsing information of a user, and determine at least one cosmetic based on the browsing information. Wherein the browsing information includes information related to cosmetics that the user has historically browsed or is browsing.
For example, if the user is browsing the webpage content of a certain lipstick and the user instructs to start makeup trying by voice, the lipstick effect that the user is browsing can be displayed on the user image in an overlapping manner directly. Alternatively, the cosmetics that are frequently browsed or recently browsed among the cosmetics historically browsed by the user may be determined as the recommended cosmetics.
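The two selection rules just described (prefer the item currently being browsed, otherwise fall back to the most frequently browsed historical item) can be expressed as a small sketch; the function name and data shapes are assumptions for demonstration.

```python
from collections import Counter

# Hypothetical sketch of browsing-based cosmetic selection.
def select_cosmetic(history, currently_viewing=None):
    """Prefer the cosmetic the user is viewing now; otherwise pick the
    most frequently browsed item from the browsing history."""
    if currently_viewing is not None:
        return currently_viewing  # overlay what the user is browsing
    if not history:
        return None
    return Counter(history).most_common(1)[0][0]  # most often browsed
```

"Recently browsed" could be supported the same way by weighting entries by recency before counting.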
The embodiment of the disclosure can also acquire recommendation information and determine at least one cosmetic based on the recommendation information. The recommendation information comprises more than preset threshold value of cosmetic information recommended by the user and/or cosmetic information recommended by a specific user.
For example, cosmetics recommended by a large number of users may be recommended to the user; so may cosmetics recommended by an influencer, celebrity, or blogger the user follows, or by the user's friends.
According to an embodiment of the present disclosure, processing the user image based on the adjustment parameter corresponding to the at least one cosmetic may include separating the user's face from the environmental background so as to accurately locate the facial contour, performing face alignment on the located face region, and clearly marking the user's facial contour so that a moving face can be tracked.
In the embodiment of the disclosure, key parts of the user can be positioned, wherein the key parts comprise at least one of eyebrows, eyes, a nose bridge, a nose tip, cheekbones, a mouth corner, a middle part of a person, cheeks and a chin, so that makeup can be performed on the key parts. For example, user key portions corresponding to at least one cosmetic product may be determined, and the corresponding user key portions may be adjusted based on adjustment parameters corresponding to the at least one cosmetic product.
For example, if the determined cosmetic product is lipstick, the user's lips are adjusted based on the adjustment parameter corresponding to lipstick. And if the determined cosmetic is an eyebrow pencil, adjusting the eyebrow part of the user based on the adjustment parameter corresponding to the eyebrow pencil.
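A minimal sketch of this per-part adjustment is given below, assuming a cosmetic is modeled as a target part plus an RGB tint and blend strength. The mapping, the color representation, and the use of a single mean color per region (in place of real per-pixel processing over the located landmarks) are all simplifying assumptions.

```python
# Minimal sketch of adjusting the key part corresponding to a cosmetic.
COSMETIC_TARGETS = {"lipstick": "lips", "eyebrow pencil": "eyebrows"}  # assumed mapping

def blend(base, tint, alpha):
    """Alpha-blend a tint color onto a base color; both are (r, g, b)."""
    return tuple(round((1 - alpha) * b + alpha * t) for b, t in zip(base, tint))

def apply_cosmetic(regions, cosmetic, tint, alpha=0.5):
    """regions maps part name -> mean color of that region (a stand-in for
    real per-pixel processing); only the part matching the cosmetic changes."""
    part = COSMETIC_TARGETS[cosmetic]
    out = dict(regions)
    out[part] = blend(regions[part], tint, alpha)
    return out
```

Here `alpha` plays the role of the patent's "adjustment parameter": a stronger lipstick shade corresponds to a larger blend weight on the tint.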
When the user needs to try makeup, the smart device of the embodiments of the present disclosure can enter a makeup-trial mode upon receiving the user's voice instruction: it starts the image acquisition device to capture the user image, adjusts the image according to the effect of the cosmetics specified by or recommended to the user, and displays the image with the makeup effect on its display device, so that the user can view the virtual makeup-trial effect. This is convenient for the user and improves the user experience. It can be understood that when shopping in a store, a user usually talks with counter staff to determine a suitable cosmetic and then tries it on; the smart device of the embodiments of the present disclosure can likewise select a suitable cosmetic via the user's voice instruction or via recommendation and then realize the virtual makeup trial, which is both convenient and closer to the real in-store experience.
Fig. 3 schematically shows a flowchart of an information processing method of a smart device according to another embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S201 to S204 and operations S301 to S303. Operations S201 to S204 are the same as or similar to the method described above with reference to fig. 2, and are not repeated herein.
In operation S301, the processed user image is shared with a specific user in real time.
In operation S302, feedback information of a specific user is received.
In operation S303, at least one cosmetic is newly determined based on the feedback information of the user.
It will be appreciated that users often wish to have a friend's opinion when trying makeup. The embodiment of the disclosure can share the image with the makeup effect with the user's friends in real time, so that the user can promptly discuss with friends whether the cosmetic effect is suitable, providing a convenient reference for purchase decisions.
According to the embodiment of the disclosure, a video call between the user and a friend can be established, and the image with the user's makeup effect is sent to the friend in real time, so that the two can discuss the cosmetic effect as it is shown. This better fits real-life usage and provides a better experience for the user.
In the embodiment of the disclosure, after the user establishes the video call with the friend, the makeup trial interface and the video interface can be simultaneously displayed on the display device of the intelligent device. For example, a makeup trial interface of the user may be displayed on the main interface of the display device, and a video interface may be displayed on the sub-interface, where the makeup trial interface may display, for example, processed images of the user (i.e., images of the user with a makeup effect), the video interface may display images of friends of the user, or the video interface may display images of the user without processing (i.e., images of the user without a makeup effect).
According to the embodiment of the disclosure, the main interface of the display device may be located in a first display area and the video interface in a second display area, where the two areas may or may not overlap. In some embodiments, when the first display area overlaps the second display area, the area displayed on top may or may not have a certain transparency. For example, the first display area may occupy the whole display and show the user image with the makeup effect, while the second display area may sit in the upper-left/lower-left/upper-right/lower-right corner (just an example) of the display, be placed above the first display area with or without transparency, and show the video image of the user's friend or the unprocessed video image of the user (i.e., the image without the makeup effect).
The smart device in the embodiment of the present disclosure may further receive feedback information from the user and the friend, for example, voice or text fed back by the friend. If the friend responds, for example, "the lipstick is too heavy," a lighter lipstick may be re-determined for the user's virtual makeup based on that feedback. Alternatively, the friend's feedback may recommend a certain cosmetic, and the smart device can determine the corresponding cosmetic and apply it virtually according to the friend's recommendation, which can further improve the user experience.
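The re-determination step just described could be sketched as a keyword-driven shade adjustment. The shade ladder, keyword mapping, and function name below are pure assumptions for demonstration; a real system would use intent recognition over the friend's voice or text feedback.

```python
# Illustrative sketch of re-determining the cosmetic from friend feedback.
SHADES = ["nude", "light pink", "bean paste", "classic red"]  # light -> heavy (assumed)

def redetermine(current, feedback):
    """Step to a lighter or heavier shade based on simple keyword feedback."""
    i = SHADES.index(current)
    if "too heavy" in feedback or "too dark" in feedback:
        return SHADES[max(i - 1, 0)]      # move one step lighter
    if "too light" in feedback:
        return SHADES[min(i + 1, len(SHADES) - 1)]  # move one step heavier
    return current                        # no actionable keyword: keep shade
```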
According to the embodiment of the disclosure, after a video call is established between a user and a friend, the friend of the user can receive the image with the makeup effect of the user in real time, so that the friend can provide feedback information according to the makeup effect, the intelligent device of the user can replace corresponding cosmetics according to the feedback information of the friend, the makeup effect is adjusted, the adjusted makeup effect image is also sent to the friend in real time, and the user and the friend can communicate and watch in real time.
The embodiment of the disclosure can share the user's makeup effect with the user's friends in real time, and the two can promptly exchange suggestions about it, which better fits real life and improves the user experience.
The embodiment of the disclosure can also determine a suitable cosmetic based on the feedback or opinion of friends of the user to make up the user virtually, thereby further improving the user experience.
Figs. 4A and 4B schematically illustrate a smart device 400 according to an embodiment of the disclosure.
As shown in fig. 4A, the smart device 400 includes a microphone 410, a camera 420, a processor 430, and a display screen 440.
The microphone 410 is used to capture a first user voice instruction.
The camera 420 is configured to obtain a user image when the first user voice instruction satisfies a preset condition.
Processor 430 is configured to process the user image based on the adjustment parameter corresponding to the at least one cosmetic product.
The display screen 440 is used to display the processed user image.
According to an embodiment of the present disclosure, the microphone 410 is further configured to acquire a second user voice instruction, and the processor 430 is further configured to determine the at least one cosmetic based on the second user voice instruction.
According to an embodiment of the present disclosure, the camera 420 is further configured to obtain current scene information, where the current scene information includes information about the environment in which the user is currently located. The processor 430 is further configured to determine the at least one cosmetic based on the current scene information.
According to an embodiment of the present disclosure, processor 430 is further configured to analyze the user image, determine a user characteristic of the user, and determine at least one cosmetic product based on the user characteristic.
According to the embodiment of the disclosure, the processor 430 is further configured to obtain browsing information of the user, wherein the browsing information includes information related to cosmetics historically browsed or being browsed by the user, and determine at least one cosmetic based on the browsing information.
According to an embodiment of the present disclosure, the processor 430 is further configured to obtain recommendation information, where the recommendation information includes cosmetic information recommended by more than a preset threshold number of users and/or cosmetic information recommended by a specific user, and to determine the at least one cosmetic based on the recommendation information.
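The several signals described above (voice instruction, scene, user features, browsing history, recommendations) can be combined in a simple priority chain. The sketch below is purely illustrative: the scene-to-product mappings, thresholds already applied upstream, and all product names are hypothetical, not taken from the disclosure.

```python
def determine_cosmetics(voice=None, scene=None, features=None,
                        browsing=None, recommendations=None):
    """Return a list of candidate cosmetics, trying each signal in priority order."""
    if voice:  # an explicit voice request wins, e.g. "try the red lipstick"
        return [voice]
    candidates = []
    # Scene information: match the cosmetic to the user's current environment.
    if scene == "office":
        candidates.append("natural foundation")
    elif scene == "party":
        candidates.append("shimmer eyeshadow")
    # User features extracted from the image, e.g. skin tone.
    if features and features.get("skin_tone") == "fair":
        candidates.append("light-pink blush")
    # Browsing information: items the user viewed historically or is viewing.
    if browsing:
        candidates.extend(browsing[:2])
    # Recommendation information (assumed pre-filtered by the popularity threshold).
    if recommendations:
        candidates.extend(recommendations[:1])
    return candidates or ["default lip balm"]
```

An explicit instruction short-circuits the chain; otherwise the weaker signals accumulate into a candidate list for the user to pick from.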
According to an embodiment of the present disclosure, processing the user image based on the adjustment parameter corresponding to the cosmetic includes: locating key parts of the user, where the key parts include at least one of the eyebrows, eyes, nose bridge, nose tip, cheekbones, mouth corners, philtrum, cheeks, and chin; determining the key part of the user corresponding to the at least one cosmetic; and adjusting the corresponding key part based on the adjustment parameter corresponding to the at least one cosmetic.
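The three steps above (locate key parts, map the cosmetic to a part, adjust that part) can be sketched as follows. This assumes a NumPy image array and landmark bounding boxes from some external face detector; the mapping table, parameter format, and color-shift adjustment are hypothetical illustrations, not the patent's actual rendering method.

```python
# Hypothetical mapping from cosmetic type to the facial key part it modifies.
COSMETIC_TO_PART = {
    "lipstick": "mouth",
    "eyeshadow": "eyes",
    "blush": "cheeks",
    "eyebrow pencil": "eyebrows",
}

def apply_cosmetic(image, landmarks, cosmetic, params):
    """Adjust the key part of the user that corresponds to `cosmetic`.

    image: uint8 NumPy array of shape (H, W, 3).
    landmarks: {part_name: (x, y, w, h)} boxes from a face detector (assumed given).
    params: adjustment parameters, e.g. {"color_shift": (dr, dg, db)} -- illustrative.
    """
    part = COSMETIC_TO_PART[cosmetic]      # 1. determine the corresponding key part
    x, y, w, h = landmarks[part]           # 2. locate it in the user image
    region = image[y:y + h, x:x + w]
    dr, dg, db = params.get("color_shift", (0, 0, 0))
    shifted = region.astype(int) + [dr, dg, db]
    image[y:y + h, x:x + w] = shifted.clip(0, 255).astype(image.dtype)  # 3. adjust
    return image
```

A production system would blend along a mask of the part's contour rather than a rectangle, but the locate-map-adjust structure is the same.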
According to the embodiment of the present disclosure, the smart device shown in fig. 4A may implement the method described above with reference to fig. 2, for example, and is not described herein again.
When a user wants to try on makeup, the smart device of the embodiment of the present disclosure can enter a makeup trial mode upon receiving the user's voice instruction: it starts the image acquisition device to capture the user's image, adjusts the image according to the effect of the cosmetic specified by or recommended to the user, and displays the image with the makeup effect on the display device, so that the user can view the virtual try-on effect. This is convenient for the user and improves the user experience. Understandably, when shopping in person, a user typically talks with counter staff to pick a suitable cosmetic and then tries it on physically; the smart device of the embodiment of the present disclosure instead selects a suitable cosmetic from the user's voice instruction or from recommendations and then realizes a virtual try-on, which is both convenient for the user and feels close to reality.
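The voice-triggered flow above can be sketched as a small dispatcher: only when the recognized text satisfies the preset condition does the device start the camera and render the try-on. The trigger phrases and callback names are hypothetical; a real device would feed in its speech recognizer's output.

```python
# Hypothetical preset condition: the recognized text contains a trigger phrase.
TRIGGER_PHRASES = ("start makeup trial", "try on makeup")

def handle_voice_instruction(text, start_camera, process_and_display):
    """Enter try-on mode only when the recognized text matches a trigger phrase.

    start_camera: callable returning a captured user image.
    process_and_display: callable that renders the virtual makeup effect.
    Returns True if the try-on mode was entered, False otherwise.
    """
    if any(phrase in text.lower() for phrase in TRIGGER_PHRASES):
        frame = start_camera()          # acquire the user image
        process_and_display(frame)      # adjust and display the makeup effect
        return True
    return False  # instruction did not satisfy the preset condition
```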
As shown in fig. 4B, the smart device 400 may further include a transmitting device 450.
The transmission device 450 is used for real-time sharing the processed user image with a specific user.
According to an embodiment of the present disclosure, the transmission device 450 is further configured to receive feedback information from the specific user, and the processor 430 is further configured to re-determine the at least one cosmetic based on that feedback information.
According to the embodiment of the present disclosure, the smart device shown in fig. 4B may implement the method described above with reference to fig. 3, for example, and is not described herein again.
Embodiments of the present disclosure allow the user's makeup effect to be shared with friends in real time, and the user and friends can exchange suggestions promptly, which better matches real-life shopping and improves the user experience.
Embodiments of the present disclosure can also determine a suitable cosmetic based on the feedback or opinions of the user's friends and virtually apply it to the user, further improving the user experience.
FIG. 5 schematically shows a block diagram of an information handling system suitable for implementing the above described method according to an embodiment of the present disclosure. The information processing system shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, an information processing system 500 according to an embodiment of the present disclosure includes a processor 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. Processor 501 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 503, various programs and data necessary for the operation of the system 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 500 may also include an input/output (I/O) interface 505, which is likewise connected to the bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage portion 508 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program, when executed by the processor 501, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer readable medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, a computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
For example, according to embodiments of the present disclosure, a computer-readable medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure can be combined and/or sub-combined in multiple ways, even if such combinations are not expressly recited in the present disclosure. In particular, such combinations and/or sub-combinations may be made without departing from the spirit or teaching of the present disclosure, and all of them fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (20)

1. An information processing method of an intelligent device, wherein the intelligent device comprises a display device, an image acquisition device and a voice acquisition device, and the method comprises the following steps:
acquiring a first user voice instruction through the voice acquisition device;
acquiring a user image through the image acquisition device under the condition that the first user voice instruction meets a preset condition;
processing the user image based on an adjustment parameter corresponding to at least one cosmetic; and
displaying the processed user image through the display device.
2. The method of claim 1, further comprising:
sharing the processed user image with a specific user in real time.
3. The method of claim 1, further comprising:
acquiring a second user voice instruction;
determining the at least one cosmetic product based on the second user voice instruction.
4. The method of claim 1, further comprising:
acquiring current scene information, wherein the current scene information comprises the current environment information of a user;
determining the at least one cosmetic based on the current context information.
5. The method of claim 1, further comprising:
analyzing the user image and determining the user characteristics of the user;
determining the at least one cosmetic product based on the user characteristic.
6. The method of claim 1, further comprising:
acquiring browsing information of a user, wherein the browsing information comprises information related to cosmetics the user has historically browsed or is currently browsing;
determining the at least one cosmetic based on the browsing information.
7. The method of claim 1, further comprising:
acquiring recommendation information, wherein the recommendation information comprises cosmetic information recommended by more than a preset threshold number of users and/or cosmetic information recommended by a specific user;
determining the at least one cosmetic based on the recommendation information.
8. The method of claim 1, wherein the processing the user image based on the adjustment parameter corresponding to the at least one cosmetic product comprises:
positioning key parts of a user, wherein the key parts comprise at least one of eyebrows, eyes, nose bridge, nose tip, cheekbones, mouth corners, philtrum, cheeks, and chin;
determining a user key part corresponding to the at least one cosmetic;
and adjusting corresponding key parts of the user based on the corresponding adjustment parameters of the at least one cosmetic.
9. The method of claim 2, further comprising:
receiving feedback information of the specific user;
re-determining the at least one cosmetic based on the feedback information of the specific user.
10. A smart device, comprising:
a microphone configured to acquire a first user voice instruction;
a camera configured to acquire a user image when the first user voice instruction satisfies a preset condition;
a processor configured to process the user image based on an adjustment parameter corresponding to at least one cosmetic; and
a display screen configured to display the processed user image.
11. The smart device of claim 10, further comprising:
a transmission device configured to share the processed user image with a specific user in real time.
12. The smart device of claim 10, wherein:
the microphone is also used for acquiring a second user voice instruction;
the processor is further configured to determine the at least one cosmetic product based on the second user voice instruction.
13. The smart device of claim 10, wherein:
the camera is further used for acquiring current scene information, and the current scene information comprises the current environment information of the user;
the processor is further configured to determine the at least one cosmetic based on the current context information.
14. The smart device of claim 10, wherein the processor is further configured to:
analyzing the user image and determining the user characteristics of the user;
determining the at least one cosmetic product based on the user characteristic.
15. The smart device of claim 10, wherein the processor is further configured to:
acquiring browsing information of a user, wherein the browsing information comprises information related to cosmetics the user has historically browsed or is currently browsing;
determining the at least one cosmetic based on the browsing information.
16. The smart device of claim 10, wherein the processor is further configured to:
acquiring recommendation information, wherein the recommendation information comprises cosmetic information recommended by more than a preset threshold number of users and/or cosmetic information recommended by a specific user;
determining the at least one cosmetic based on the recommendation information.
17. The smart device of claim 10, wherein the processing the user image based on the adjustment parameter corresponding to the at least one cosmetic product comprises:
positioning key parts of a user, wherein the key parts comprise at least one of eyebrows, eyes, nose bridge, nose tip, cheekbones, mouth corners, philtrum, cheeks, and chin;
determining a user key part corresponding to the at least one cosmetic;
and adjusting corresponding key parts of the user based on the corresponding adjustment parameters of the at least one cosmetic.
18. The smart device of claim 11, wherein:
the transmission device is further used for receiving feedback information of the specific user;
the processor is further configured to re-determine the at least one cosmetic based on the feedback information of the specific user.
19. An information processing system of a smart device, comprising:
one or more memories storing executable instructions; and
one or more processors executing the executable instructions to implement the method of any one of claims 1-9.
20. A computer readable medium having stored thereon executable instructions which, when executed by a processor, implement a method according to any one of claims 1 to 9.
CN201811201107.7A 2018-10-15 2018-10-15 Information processing method of intelligent device and intelligent device Pending CN111047384A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811201107.7A CN111047384A (en) 2018-10-15 2018-10-15 Information processing method of intelligent device and intelligent device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811201107.7A CN111047384A (en) 2018-10-15 2018-10-15 Information processing method of intelligent device and intelligent device

Publications (1)

Publication Number Publication Date
CN111047384A true CN111047384A (en) 2020-04-21

Family

ID=70230452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811201107.7A Pending CN111047384A (en) 2018-10-15 2018-10-15 Information processing method of intelligent device and intelligent device

Country Status (1)

Country Link
CN (1) CN111047384A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115617A (en) * 2020-08-27 2022-03-01 华为技术有限公司 Display method applied to electronic equipment and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682958A (en) * 2016-11-21 2017-05-17 汕头市智美科技有限公司 Method and device for trying on makeup virtually
US20170178220A1 (en) * 2015-12-21 2017-06-22 International Business Machines Corporation Personalized expert cosmetics recommendation system using hyperspectral imaging
CN107392713A (en) * 2017-07-21 2017-11-24 汕头市智美科技有限公司 A kind of virtual examination adornment equipment
CN107765858A (en) * 2017-11-06 2018-03-06 广东欧珀移动通信有限公司 Determine the method, apparatus, terminal and storage medium of facial angle
CN108062400A (en) * 2017-12-25 2018-05-22 深圳市美丽控电子商务有限公司 Examination cosmetic method, smart mirror and storage medium based on smart mirror
CN108171143A (en) * 2017-12-25 2018-06-15 深圳市美丽控电子商务有限公司 Makeups method, smart mirror and storage medium
CN108519816A (en) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 Information processing method, device, storage medium and electronic equipment


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114115617A (en) * 2020-08-27 2022-03-01 华为技术有限公司 Display method applied to electronic equipment and electronic equipment
WO2022042163A1 (en) * 2020-08-27 2022-03-03 华为技术有限公司 Display method applied to electronic device, and electronic device
CN114115617B (en) * 2020-08-27 2024-04-12 华为技术有限公司 Display method applied to electronic equipment and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination