WO2022017270A1 - Appearance analysis method and electronic device - Google Patents

Appearance analysis method and electronic device

Info

Publication number
WO2022017270A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
region
appearance
interest
Prior art date
Application number
PCT/CN2021/106703
Other languages
English (en)
French (fr)
Inventor
何小祥
胡宏伟
卢曰万
郜文美
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21846466.7A (published as EP4181014A4)
Priority to US18/006,312 (published as US20230298300A1)
Publication of WO2022017270A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the present disclosure relates to the field of electronic technology, and in particular, to an appearance analysis method and electronic device.
  • some smart terminal devices can collect images of objects (e.g., people) and provide an appearance evaluation of the objects through image analysis.
  • some smart phone applications can use the camera of the smart phone to capture an image of a human face and provide an evaluation of the skin state of the human face.
  • some smart mirrors can collect images of human faces through cameras, and provide scores or age estimates for the appearance of objects.
  • various existing smart terminal devices usually use a front-facing camera to collect images of users, which limits the image area that these devices can capture.
  • the collected image is usually a frontal photo of the user's face, and specific regions of the left and right cheeks are difficult to capture. This seriously affects the accuracy of the appearance evaluation.
  • the present disclosure provides an appearance analysis method and electronic device.
  • an embodiment of the present disclosure provides an appearance analysis method, which can be applied to an electronic device including a first camera and a second camera.
  • the method includes: the electronic device acquires a first image captured by the first camera and a second image captured by the second camera, wherein the first image is an image of a first area of a user, and the second image is an image of a second, different area of the user.
  • the electronic device then provides an appearance evaluation of the subject, which is determined based on the first image and the second image.
  • the appearance evaluation may be determined by the electronic device.
  • alternatively, the appearance evaluation may be determined by a device other than the electronic device, e.g., a server.
  • embodiments of the present disclosure can improve the accuracy of the appearance evaluation provided.
  • embodiments of the present disclosure can simultaneously acquire multiple images of a subject, e.g., left-face, front-face, and right-face images, so that the appearance evaluation of the subject's face can be determined more accurately.
  • the electronic device may also determine whether the position or posture of the subject meets the image capture conditions of the first camera or the second camera. If not, the electronic device may prompt the subject to adjust its position or posture, for example by voice (e.g., asking the subject to move closer to or farther from the camera), or visually, by presenting an animation on the device, such as one showing the head tilting to the right.
  • in this way, the embodiments of the present disclosure can place the subject in a position or posture more suitable for image capture, so that better images of the subject can be obtained and the accuracy of the appearance evaluation further improved.
  • the electronic device may further adjust the acquisition parameters of at least one of the first camera and the second camera based on the characteristics of the object.
  • the acquisition parameters include at least one of a shooting angle and a focal length.
  • the embodiments of the present disclosure can dynamically adjust the camera acquisition parameters for different subjects, avoiding the risk that fixed acquisition parameters may not suit a specific subject, thereby improving the universality of the solution.
  • the present disclosure can also improve the quality of the acquired object image, thereby improving the accuracy of appearance evaluation.
  • the first camera and the second camera are arranged symmetrically on opposite sides of the image capture device.
  • the first camera may be arranged on the far left of the image capturing device, and the second camera may be arranged on the corresponding far right.
  • the combination of the first camera and the second camera can capture a more comprehensive image of the object, thereby improving the accuracy of appearance evaluation. Since the first camera and the second camera are arranged symmetrically, the first image and the second image captured by the camera are also symmetrical.
  • because of this symmetry, an implementation of the present disclosure can horizontally flip the second image and use the same image analysis engine to process both the first image and the flipped second image.
  • the implementation of the present disclosure can utilize the left-face analysis engine to process the left-face image and the flipped right-face image simultaneously, thereby reducing development costs.
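  • As an illustrative sketch of this flipping technique (assuming an OpenCV-style pipeline; `analyze_left_face` is a hypothetical stand-in for whatever single-side analysis engine is used, not an API named in this disclosure):

```python
# Hedged sketch: reuse one left-face engine for both cheeks by mirroring
# the right-face image. `analyze_left_face` is a hypothetical placeholder.
import cv2

def analyze_both_cheeks(left_img, right_img, analyze_left_face):
    left_result = analyze_left_face(left_img)
    # flipCode=1 mirrors about the vertical axis, so the right-face image
    # takes on the geometry the left-face engine expects.
    mirrored = cv2.flip(right_img, 1)
    right_result = analyze_left_face(mirrored)
    return left_result, right_result
```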
  • the electronic device further includes a third camera, wherein the third camera is disposed on the image capture device at the same distance from the first camera and the second camera.
  • the electronic device may also acquire a third image captured by the third camera, where the third image may be an image of a third area of the subject.
  • the first camera and the second camera can be symmetrically arranged on the left and right sides of the image capture device, and the third camera can be arranged on the central axis of the image capture device, for example, at its top or bottom.
  • the appearance evaluation can be determined by the electronic device based on the first image and the second image. Specifically, the electronic device may determine a first region of interest from the first image and a second region of interest from the second image, where the first region of interest represents a first set of appearance features of the subject and the second region of interest represents a second set of appearance features of the subject. Subsequently, the electronic device may determine an appearance evaluation for an appearance feature of the subject based on at least the first region of interest and the second region of interest.
  • the electronic device may determine the corresponding region of interest by detecting multiple feature points in the image. For example, the electronic device may pre-store a correspondence between each appearance feature and a set of feature points, detect the corresponding feature points in the corresponding image, and take the area enclosed by these feature points as the corresponding region of interest.
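  • A minimal sketch of this region-of-interest step, assuming a landmark detector that returns (x, y) feature points; the feature-to-point table and the index values below are illustrative assumptions, not values from this disclosure:

```python
import cv2
import numpy as np

# Hypothetical pre-stored correspondence: appearance feature -> feature-point indices.
FEATURE_POINTS = {
    "left_cheek_pores": [1, 2, 3, 4, 5],    # illustrative indices only
    "forehead_wrinkles": [10, 11, 12, 13],
}

def region_of_interest(image, landmarks, feature):
    """Keep only the area enclosed by the feature's detected points."""
    pts = np.array([landmarks[i] for i in FEATURE_POINTS[feature]], dtype=np.int32)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [pts], 255)          # polygon enclosed by the points
    return cv2.bitwise_and(image, image, mask=mask)
```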
  • appearance features may include, but are not limited to, pore features, spot features, wrinkle features, red-zone features, pigmentation features, acne features, dark-circle features, blackhead features, or any combination thereof.
  • if the first region of interest and the second region of interest include an overlapping region, the electronic device may determine a first appearance evaluation corresponding to the overlapping region based on the first region of interest, and a second appearance evaluation corresponding to the overlapping region based on the second region of interest. Subsequently, the electronic device may determine the appearance evaluation for the appearance feature of the subject based on the first appearance evaluation and the second appearance evaluation.
  • the electronic device may, for example, calculate the average value of the first appearance evaluation and the second appearance evaluation, and use the average value as the appearance evaluation for the appearance feature.
  • the first appearance evaluation may be the number of pores in the overlapping area determined based on the first image
  • the second appearance evaluation may be the number of pores in the overlapping area determined based on the second image.
  • the electronic device may determine the average of the two pore numbers as the number of pores in the overlapping area.
  • the implementation of the present disclosure can determine the appearance evaluation for the overlapping area based on multiple images collected from different angles, thereby avoiding the problem of inaccurate appearance evaluation results caused by incomplete image collection.
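  • A minimal sketch of this fusion rule, with a hypothetical `count_pores` function standing in for the per-image pore analysis:

```python
def fused_pore_count(first_overlap_roi, second_overlap_roi, count_pores):
    n1 = count_pores(first_overlap_roi)   # pores seen in the first image
    n2 = count_pores(second_overlap_roi)  # pores seen in the second image
    return (n1 + n2) / 2                  # average taken as the evaluation
```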
  • the electronic device can present at least a three-dimensional model of the object.
  • the three-dimensional model may be generated by the electronic device based on at least the first image and the second image. The electronic device may then present the corresponding content of the appearance evaluation at different locations on the three-dimensional model.
  • implementations of the present disclosure can present the appearance evaluation of an object more intuitively.
  • the presentation angle of the three-dimensional model can also be changed in response to a user operation, making it easier for the user to view the appearance evaluation of a specific area.
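  • A minimal sketch of anchoring evaluation content to positions on a three-dimensional model (the mesh rendering itself is omitted; the region anchor coordinates and region names are illustrative assumptions):

```python
import matplotlib.pyplot as plt

# Hypothetical mapping: face region -> (x, y, z) anchor on the 3D model.
REGION_ANCHORS = {
    "left cheek": (-0.5, 0.0, 0.2),
    "right cheek": (0.5, 0.0, 0.2),
    "forehead": (0.0, 0.6, 0.3),
}

def present_evaluation_3d(evaluations):
    """Render each evaluation string at its region's model position."""
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    for region, text in evaluations.items():
        x, y, z = REGION_ANCHORS[region]
        ax.text(x, y, z, f"{region}: {text}")
    plt.show()

present_evaluation_3d({"left cheek": "pores 72/100", "forehead": "wrinkles 85/100"})
```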
  • the appearance evaluation includes at least one of a skin evaluation and an appearance score.
  • skin assessments to which the present disclosure is applicable may also include: pore assessments, spot assessments, wrinkle assessments, red-zone assessments, pigmentation assessments, acne assessments, dark-circle assessments, blackhead assessments, other skin assessments that may be determined using image analysis, or a combination of any of the above.
  • an embodiment of the present disclosure provides a terminal device.
  • the terminal device comprises at least one computing unit, and at least one memory coupled to the at least one computing unit and storing instructions for execution by the at least one computing unit; the instructions, when executed by the at least one computing unit, cause the terminal device to perform actions.
  • the terminal device acquires the first image captured by the first camera and the second image captured by the second camera, wherein the first image is an image of a first area of the user, and the second image is an image of a second, different area of the user.
  • the terminal device then provides an appearance evaluation of the subject, which is determined based on the first image and the second image.
  • the appearance evaluation may be determined by the terminal device.
  • the appearance evaluation may be determined, for example, by a device other than the terminal device (e.g., a server).
  • the terminal device may be a smart terminal with computing capabilities, examples of which include but are not limited to: a desktop computer, a laptop computer, a tablet computer, a smart phone, a smart watch, smart glasses, or an e-book reader.
  • the first camera and the second camera may be front-facing or rear-facing cameras built into the terminal device.
  • one of the first camera and the second camera may be a built-in camera, and the other may be an external camera communicatively connected with the terminal device.
  • both the first camera and the second camera may be external cameras that are communicatively connected to the terminal device.
  • embodiments of the present disclosure can improve the accuracy of the appearance evaluation provided.
  • embodiments of the present disclosure can simultaneously acquire multiple images of a subject, e.g., left-face, front-face, and right-face images, so that the appearance evaluation of the subject's face can be determined more accurately.
  • the terminal device may also determine whether the position or posture of the subject meets the image capture conditions of the first camera or the second camera. If not, the terminal device can prompt the subject to adjust its position or posture, for example by voice (e.g., asking the subject to move closer to or farther from the camera), or visually, by presenting an animation on the device, such as one showing the head tilting to the right.
  • in this way, the embodiments of the present disclosure can place the subject in a position or posture more suitable for image capture, so that better images of the subject can be obtained and the accuracy of the appearance evaluation further improved.
  • the terminal device may further adjust the acquisition parameters of at least one of the first camera and the second camera based on the characteristics of the object.
  • the acquisition parameters include at least one of a shooting angle and a focal length.
  • the embodiments of the present disclosure can dynamically adjust the camera acquisition parameters for different subjects, avoiding the risk that fixed acquisition parameters may not suit a specific subject, thereby improving the universality of the solution.
  • the present disclosure can also improve the quality of the acquired object image, thereby improving the accuracy of appearance evaluation.
  • the terminal device may also acquire a third image captured by a third camera, where the third image may be an image of a third area of the subject.
  • the third camera may be a built-in camera of the terminal device or an external camera communicatively connected with the terminal device, and the third camera may be arranged at the same distance from the first camera and the second camera.
  • the appearance evaluation may be determined by the terminal device based on the first image and the second image. Specifically, the terminal device may determine a first region of interest from the first image and a second region of interest from the second image, where the first region of interest represents a first set of appearance features of the subject and the second region of interest represents a second set of appearance features of the subject. Subsequently, the terminal device may determine an appearance evaluation for the appearance features of the subject based on at least the first region of interest and the second region of interest.
  • the terminal device may determine the corresponding region of interest by detecting multiple feature points in the image. For example, the terminal device may pre-store the correspondence between appearance features and a set of feature points, detect the corresponding feature points from the corresponding image, and determine the area surrounded by these feature points as the corresponding area of interest.
  • appearance features may include, but are not limited to, pore features, spot features, wrinkle features, red-zone features, pigmentation features, acne features, dark-circle features, blackhead features, or any combination thereof.
  • if the first region of interest and the second region of interest include an overlapping region, the terminal device may determine a first appearance evaluation corresponding to the overlapping region based on the first region of interest, and a second appearance evaluation corresponding to the overlapping region based on the second region of interest. Then, the terminal device may determine the appearance evaluation for the appearance feature of the subject based on the first appearance evaluation and the second appearance evaluation.
  • the terminal device may, for example, calculate the average value of the first appearance evaluation and the second appearance evaluation, and use the average value as the appearance evaluation for the appearance feature.
  • the first appearance evaluation may be the number of pores in the overlapping area determined based on the first image
  • the second appearance evaluation may be the number of pores in the overlapping area determined based on the second image.
  • the terminal device may determine the average of the two pore numbers as the number of pores in the overlapping area.
  • the implementation of the present disclosure can determine the appearance evaluation for the overlapping area based on multiple images collected from different angles, thereby avoiding the problem of inaccurate appearance evaluation results caused by incomplete image collection.
  • the terminal device may present at least a three-dimensional model of the object.
  • the three-dimensional model may be generated by the terminal device based on at least the first image and the second image. Subsequently, the terminal device may present the corresponding content of the appearance evaluation at different positions on the three-dimensional model.
  • implementations of the present disclosure can present the appearance evaluation of an object more intuitively.
  • the presentation angle of the three-dimensional model can also be changed in response to a user operation, making it easier for the user to view the appearance evaluation of a specific area.
  • the appearance evaluation includes at least one of a skin evaluation and an appearance score.
  • skin assessments to which the present disclosure is applicable may also include: pore assessments, spot assessments, wrinkle assessments, red-zone assessments, pigmentation assessments, acne assessments, dark-circle assessments, blackhead assessments, other skin assessments that may be determined using image analysis, or a combination of any of the above.
  • an implementation of the present disclosure provides an image acquisition device.
  • the image acquisition device includes a first camera, a second camera and a communication component.
  • the first camera is configured to capture a first image associated with a first region of the subject, and the second camera is configured to capture a second image associated with a second region of the subject, wherein the first region is different from the second region.
  • the communication component is configured to provide the first image and the second image to the terminal device for use in determining the appearance evaluation of the subject.
  • the image capturing device provided by the present disclosure can capture the image of the object more comprehensively, thereby improving the accuracy of the determined appearance evaluation.
  • the first camera and the second camera are arranged symmetrically on opposite sides of the image capture device.
  • the first camera may be arranged on the far left of the image capturing device, and the second camera may be arranged on the corresponding far right.
  • the combination of the first camera and the second camera can capture a more comprehensive image of the object, thereby improving the accuracy of appearance evaluation. Since the first camera and the second camera are arranged symmetrically, the first image and the second image captured by the camera are also symmetrical.
  • because of this symmetry, an implementation of the present disclosure can horizontally flip the second image and use the same image analysis engine to process both the first image and the flipped second image.
  • the implementation of the present disclosure can utilize the left-face analysis engine to process the left-face image and the flipped right-face image simultaneously, thereby reducing development costs.
  • the image capture device further includes a third camera, wherein the third camera is positioned at the same distance from the first camera and the second camera.
  • the image capture device may also provide the third image captured by the third camera through the communication component.
  • the first camera and the second camera can be symmetrically arranged on the left and right sides of the image capture device, and the third camera can be arranged on the central axis of the image capture device, for example, at its top or bottom.
  • the appearance evaluation includes at least one of a skin evaluation and an appearance score.
  • skin assessments to which the present disclosure is applicable may also include: pore assessments, spot assessments, wrinkle assessments, red-zone assessments, pigmentation assessments, acne assessments, dark-circle assessments, blackhead assessments, other skin assessments that may be determined using image analysis, or a combination of any of the above.
  • embodiments of the present disclosure provide an appearance analysis system, including: a terminal device according to the second aspect and an image acquisition device according to the third aspect.
  • a fifth aspect provides an appearance analysis apparatus, which may include a first image acquisition unit, a second image acquisition unit, and an evaluation providing unit.
  • the first image acquisition unit is configured to acquire a first image associated with the first region of the object, wherein the first image is acquired by the first camera.
  • the second image acquisition unit is configured to acquire a second image associated with a second area of the object, the second image being acquired by the second camera, wherein the first area is different from the second area.
  • the evaluation providing unit is configured to provide an appearance evaluation of the subject, wherein the appearance evaluation is determined based on the first image and the second image.
  • the appearance analysis apparatus further includes an object prompting unit configured to prompt the subject to adjust its position or posture if the position or posture of the subject does not meet the image capture conditions of the first camera or the second camera.
  • the appearance analysis apparatus further includes a camera adjustment unit configured to adjust, based on characteristics of the subject, acquisition parameters of at least one of the first camera and the second camera, where the acquisition parameters include at least one of a shooting angle and a focal length.
  • the first camera and the second camera are arranged symmetrically on opposite sides of the image capture device.
  • the electronic device further includes a third camera
  • the appearance analysis apparatus further includes a third image acquisition unit configured to acquire a third image associated with a third area of the subject, the third image being captured by a third camera, wherein the third camera is arranged on the image capture device at the same distance from the first camera and the second camera.
  • the appearance analysis apparatus further includes an evaluation determination unit configured to: determine a first region of interest from the first image, the first region of interest representing a first set of appearance features of the subject; determine a second region of interest from the second image, the second region of interest representing a second set of appearance features of the subject; and determine an appearance evaluation for the appearance features of the subject based on at least the first region of interest and the second region of interest.
  • the evaluation determination unit is further configured to: if the first region of interest and the second region of interest include an overlapping region, determine a first appearance evaluation corresponding to the overlapping region based on the first region of interest; determine a second appearance evaluation corresponding to the overlapping region based on the second region of interest; and determine an appearance evaluation for the appearance feature of the subject based on the first appearance evaluation and the second appearance evaluation.
  • the evaluation providing unit is further configured to: present a three-dimensional model of the subject, wherein the three-dimensional model is generated based on at least the first image and the second image; and present the corresponding content of the appearance evaluation at different locations on the three-dimensional model.
  • the appearance evaluation includes at least one of a skin evaluation and an appearance score.
  • skin assessments to which the present disclosure is applicable may also include: pore assessments, spot assessments, wrinkle assessments, red-zone assessments, pigmentation assessments, acne assessments, dark-circle assessments, blackhead assessments, other skin assessments that may be determined using image analysis, or a combination of any of the above.
  • a sixth aspect provides a computer-readable storage medium on which one or more computer instructions are stored, wherein the one or more computer instructions are executed by a processor to implement the method in the first aspect or any implementation of the first aspect.
  • a seventh aspect provides a computer program product that, when run on a computer, causes the computer to execute some or all of the steps of the method in the first aspect or any implementation of the first aspect.
  • the appearance analysis apparatus of the fifth aspect, the computer storage medium of the sixth aspect, and the computer program product of the seventh aspect provided above are all used to execute the method provided in the first aspect. Therefore, the explanations and descriptions of the first aspect also apply to the fifth, sixth and seventh aspects.
  • for the beneficial effects achievable by the fifth, sixth and seventh aspects, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
  • a smart mirror comprising: a first camera; a second camera, the first camera and the second camera being symmetrically arranged on opposite sides of the smart mirror; a third camera at the same distance from the first camera and the second camera; at least one computing unit; and at least one memory coupled to the at least one computing unit and storing instructions for execution by the at least one computing unit, the instructions, when executed by the at least one computing unit, causing the smart mirror to perform actions including: acquiring a first image associated with a first area of the subject, the first image being captured by the first camera; acquiring a second image associated with a second area of the subject, the second image being captured by the second camera, wherein the first area is different from the second area; acquiring a third image associated with a third area of the subject, the third image being captured by the third camera; determining an appearance evaluation of the subject based on the first image, the second image, and the third image; and providing the appearance evaluation of the subject.
  • by acquiring the images respectively captured by the first camera, the second camera and the third camera, the smart mirror provided by the present disclosure can determine the appearance evaluation of the subject more accurately, and can present the determined evaluation to the user intuitively through the smart mirror.
  • FIG. 1A shows a schematic diagram of an example environment in which various embodiments of the present disclosure can be implemented
  • FIG. 1B illustrates an example electronic device according to one embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of an example electronic device according to some embodiments of the present disclosure
  • FIGS. 3A-3C illustrate schematic diagrams of example electronic devices according to further embodiments of the present disclosure.
  • FIGS. 4A-4D illustrate schematic diagrams of example electronic devices according to further embodiments of the present disclosure.
  • FIG. 5 shows a schematic diagram of the arrangement of cameras according to some embodiments of the present disclosure
  • FIGS. 6A-6F illustrate schematic diagrams of a process of determining an appearance evaluation according to some embodiments of the present disclosure
  • FIG. 8 illustrates a flowchart of an example process for evaluating a subject's appearance, according to some embodiments of the present disclosure
  • FIG. 9 illustrates a flowchart of an example process for evaluating human facial skin, according to some embodiments of the present disclosure.
  • FIG. 10 is a schematic block diagram of an appearance analysis device according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 12 is a block diagram of a software structure of an electronic device according to an embodiment of the present application.
  • the term “comprising” and its variants should be understood as open-ended inclusion, i.e., “including but not limited to”.
  • the term “based on” should be understood as “based at least in part on”.
  • the terms “one embodiment” or “the embodiment” should be understood to mean “at least one embodiment”.
  • the terms “first”, “second”, etc. may refer to different or the same objects. Other explicit and implicit definitions may also be included below.
  • some smart terminal devices are able to capture images of objects (e.g., people) and provide an evaluation of the appearance of the objects through image analysis.
  • some smart phone applications can use the camera of the smart phone to capture an image of a human face and provide an evaluation of the skin state of the human face.
  • some smart mirrors can collect images of human faces through cameras, and provide scores or age estimates for the appearance of objects.
  • various existing smart terminal devices usually use a front-facing camera to collect images of users, which limits the image area that these devices can capture.
  • the collected image is usually a frontal photo of the user's face, and specific regions of the left and right cheeks are difficult to capture. This seriously affects the accuracy of the appearance evaluation.
  • the electronic device acquires a first image associated with a first region of the subject and a second image associated with a second region of the subject, wherein the first image is captured by a first camera, the second image is captured by a second camera, and the first region is different from the second region. Subsequently, the electronic device provides an appearance evaluation of the subject, wherein the appearance evaluation is determined based on the first image and the second image. In this manner, the embodiments of the present disclosure can capture a more comprehensive image of the subject without imposing an additional burden on the user, so as to provide a more accurate appearance evaluation.
  • FIG. 1A shows a schematic diagram of an example environment 100 in which various embodiments of the present disclosure can be implemented.
  • Environment 100 may include electronic device 120 .
  • the electronic device 120 includes a plurality of cameras 140-1, 140-2 and 140-3 (referred to individually or collectively as cameras 140) and a presentation device 130.
  • the electronic device 120 can acquire an image of the object 110 through the camera 140 and provide an appearance evaluation about the object 110 .
  • the electronic device 120 may be implemented as a device in the form of a mirror; e.g., the electronic device 120 may be a smart mirror.
  • the plurality of cameras 140 may be arranged at different positions on the periphery of the electronic device 120, and the plurality of cameras 140 are arranged to have at least a predetermined distance therebetween. With this arrangement, the area of the object 110 that can be captured by the camera 140 will not be exactly the same as the area of the object 110 captured by other cameras.
  • the electronic device 120 is implemented, for example, as a nearly circular device. It should be understood that the shape of the electronic device 120 shown in FIG. 1A is only illustrative; the electronic device 120 can also be implemented in any other suitable shape, such as a square, triangle, rounded rectangle, or oval.
  • the plurality of cameras 140 may be embedded in the housing of the electronic device 120, for example. It should be understood that the plurality of cameras 140 may also be integrated into the electronic device 120 in any other suitable form, and the present disclosure is not intended to limit the manner in which the cameras 140 are integrated.
  • the electronic device 120 may include fewer cameras, eg, only cameras disposed on the left and right sides of the electronic device 120 .
  • the electronic device 120 may further include more cameras to capture the image of the user 110 more comprehensively.
  • At least two of the plurality of cameras 140 may be arranged symmetrically on opposite sides of the electronic device 120.
  • the camera 140-1 may be arranged on the leftmost side of the electronic device 120, for example, and the camera 140-3 symmetrically on the rightmost side of the electronic device 120.
  • the camera 140-2 may be arranged at an equal distance from the camera 140-1 and the camera 140-3.
  • the camera 140-2 may be arranged at the top of the electronic device 120, or alternatively at the bottom of the electronic device 120.
  • electronic device 120 may also include presentation device 130 .
  • the presentation device 130 may be implemented, for example, as a suitable form of electronic display screen and used to present the graphical interface 150 .
  • the electronic device 120 may, for example, visually provide an appearance evaluation 160, such as "skin score: 80 points".
  • graphical interface 150 may also present a visual representation of object 110 .
  • Various presentation manners of the graphical interface 150 will be discussed in detail below, and will not be described in detail here.
  • presentation device 130 may include mirror area 170 and electronic display area 180 .
  • Electronic display area 180 may be integrated into presentation device 130 in a suitable manner.
  • the electronic display area 180 may be designed separately from the mirror area 170, such that the mirror area 170 presents an image of the object 110 through mirror reflection while the electronic display area 180 simultaneously displays corresponding digital content, e.g., the appearance evaluation 160.
  • the electronic display area 180 may also be disposed on the back of the mirror surface area 170, so that the electronic display area 180 will only present the corresponding digital content when the display is powered on.
  • in this case, the area of the mirror area 170 corresponding to the electronic display area 180 has no mirror reflection or only weak mirror reflection, so that the electronic display area 180 can present the digital content more clearly.
  • when the display is powered off, the presentation device 130 appears as the full mirror area 170.
  • the mirror surface 170 can display the object 110 through mirror reflection in real time, and the electronic display area 180 can present the appearance evaluation 160 of the object 110 .
  • the size of the electronic display area 180 can be reduced, thereby reducing the cost of the electronic device 120 and reducing the power consumption of the electronic device 120 .
  • the appearance evaluation 160 provided by the electronic device 120 may be, for example, an evaluation of the subject's skin, including but not limited to: overall skin evaluation, spot evaluation, pore evaluation, wrinkle evaluation, red-zone evaluation, pigmentation evaluation, acne evaluation, dark-circle evaluation, blackhead evaluation, etc.
  • the appearance evaluation may also be an appearance score for the subject.
  • the appearance rating may be a "look" rating for the subject's face.
  • the appearance evaluation may also include a "face value" score for the facial features of the subject.
  • the electronic device 120 may play the appearance evaluation 160 through voice, or the electronic device 120 may also send the appearance evaluation 160 to the subject 110 through email, text message or other communication methods.
  • the generation process of the appearance evaluation 160 will be discussed in detail below, and will not be described in detail here.
  • subject 110 refers to a user using electronic device 120 for appearance evaluation.
  • the electronic device 120 can use the plurality of cameras 140 to capture photos of a specific area of the user.
  • the electronic device 120 can use the camera 140 to collect face photos of the user from different angles, and generate the appearance evaluation 160 by performing corresponding analysis on these photos.
  • object 110 may also be other suitable creatures, such as pets such as cats or dogs.
  • the electronic device 120 can collect photos of the pet from different angles, and provide the pet's appearance evaluation. It should be understood that the process of providing the appearance evaluation of the pet is similar to the process of providing the user's appearance evaluation. For the convenience of description, the solution of the present disclosure will be described below by taking the user as an example.
  • it should be understood that FIG. 1A and FIG. 1B are both schematic and are not intended to limit the present disclosure.
  • the environment 100 in which embodiments of the present disclosure can be implemented has been described above in conjunction with FIGS. 1A and 1B, and an example electronic device 120 has been presented.
  • the electronic device 120 in FIGS. 1A and 1B is implemented in the form of a mirror, and other variations of the electronic device 120 will be described below in conjunction with FIGS. 2 , 3A-3C, and 4A-4D.
  • FIG. 2 illustrates an example electronic device 200 according to further embodiments of the present disclosure.
  • the electronic device 200 includes a physically separate presentation device 210 and an image capture device 220, wherein the image capture device 220 may include a plurality of cameras 230-1, 230-2 and 230-3 (referred to individually or collectively as camera 230).
  • the image capture device 220 is physically separate from the presentation device 210 and is connected to it via a wired or wireless connection 240.
  • the electronic device 200 may acquire an image of the subject via the image capture device 220 and may provide an appearance evaluation 250 via the presentation device 210, for example.
  • a processing unit for controlling image capture and providing the appearance evaluation may be provided in either the presentation device 210 or the image capture device 220.
  • a processing unit may be provided in the presentation device 210 (e.g., a terminal device with processing capability), and the electronic device 200 may use the processing unit to send to the image capture device 220 an instruction to capture images using the camera 230.
  • the image capture device 220 may then provide the captured images to the presentation device 210 using its communication component; the processing unit performs analysis on the captured images to determine an appearance evaluation, which is provided via the presentation device 210.
  • the electronic device 200 may also, for example, transmit the captured images to a remote computing device (e.g., a server) for appearance evaluation.
  • alternatively, a processing unit may be provided in the image capture device 220, and the electronic device 200 may use this processing unit to instruct the camera 230 to capture images and to analyze the captured images to determine the appearance evaluation, which is then provided to the presentation device 210 via the communication component for presentation to the user.
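  • A minimal sketch of this division of labor, with the processing unit on the presentation-device side; all class and method names are illustrative assumptions rather than structures named in this disclosure:

```python
class ImageCaptureDevice:
    """Holds the cameras (e.g., cameras 230-1, 230-2, 230-3)."""
    def __init__(self, cameras):
        self.cameras = cameras

    def capture_all(self):
        # On instruction, return one frame per camera.
        return [cam.capture() for cam in self.cameras]

class PresentationDevice:
    """Holds the processing unit and presents the resulting evaluation."""
    def __init__(self, capture_device, analyze):
        self.capture_device = capture_device
        self.analyze = analyze                      # hypothetical analysis engine

    def evaluate_appearance(self):
        images = self.capture_device.capture_all()  # send capture instruction
        return self.analyze(images)                 # determine the evaluation
```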
  • the image capture device 220 may be arranged as a set of cameras 230 arranged in specific locations.
  • a set of cameras 230 may be integrated within the same housing to form an integrated image capture device 220 .
  • alternatively, the image capture device 220 may refer collectively to a group of separate cameras 230, which may communicate with the presentation device 210 and/or the processing unit individually or collectively.
  • the number and arrangement of the cameras 230 shown in FIG. 2 are only illustrative.
  • the arrangement of the camera 230 may be similar to the camera 140 as discussed above in connection with FIG. 1 , and the description will not be repeated here.
  • the solution of the present disclosure can, for example, build the image capture device 220 as an accessory available to the mobile terminal, thereby fully utilizing the computing and presentation capabilities of the existing mobile terminal.
  • FIGS. 3A-3C illustrate example electronic devices according to further embodiments of the present disclosure.
  • FIG. 3A illustrates an example electronic device 300A according to further embodiments of the present disclosure.
  • electronic device 300A may be implemented as a mobile device 310 having a plurality of front-facing cameras 305-1, 305-2, and 305-3 (referred to individually or collectively as front-facing cameras 305).
  • the front cameras 305 are arranged to have at least a predetermined distance from each other, so that different front cameras 305 can capture images of different areas of the object.
  • the electronic device 310 may perform a corresponding analysis using the captured images to generate an appearance evaluation of the subject.
  • the electronic device 310 may also utilize the display screen to present the appearance evaluation.
  • the user can hold the mobile device 310 and run an application installed on it; the application causes the mobile device 310 to capture images of the user using the plurality of front-facing cameras 305 and to perform corresponding image analysis on the captured images to generate an appearance evaluation. Subsequently, the application can present the generated appearance evaluation through a graphical interface displayed on the display screen of the mobile device 310. In this way, the solution of the present disclosure can be performed on existing mobile devices whose front-facing cameras satisfy such an arrangement.
  • FIG. 3B illustrates an example electronic device 300B according to further embodiments of the present disclosure.
  • the electronic device 300B includes a mobile device 320 having a front-facing camera 315-3, and a camera 315-1 and a camera 315-2 attached to the mobile device 320.
  • attaching refers to securing a camera that is independent of the mobile device to the mobile device by suitable means (e.g., detachably or non-detachably).
  • the camera 315-1 and the camera 315-2 may be symmetrically arranged on both sides of the mobile device 320 to capture the image of the object more comprehensively. Additionally, camera 315-1 and camera 315-2 may be communicatively coupled with mobile device 320 through a wired or wireless connection to enable mobile device 320 to acquire images captured by camera 315-1 and camera 315-2. Further, the electronic device 320 may perform corresponding analysis using the images captured by the front camera 315-3 and the communicatively coupled cameras 315-1 and 315-2 to generate an appearance evaluation of the subject. The electronic device 320 may also utilize the display screen to present the appearance evaluation.
  • the user can fix the camera 315-1 and the camera 315-2, which are independent accessories, to the user's mobile device 320 by means of, for example, buckles, and establish a relationship between the cameras 315-1 and 315-2 and the mobile device 320. USB connection or Bluetooth connection between.
  • the user can hold the mobile device 320 and run an application installed on it; the application can detect the front-facing camera 315-3 built into the mobile device 320 as well as the cameras 315-1 and 315-2 communicatively coupled with the mobile device 320.
  • the application enables the mobile device 320 to issue an instruction to capture images of the user using the cameras 315-1, 315-2, and 315-3, and perform corresponding image analysis on the captured images to generate an appearance evaluation. Subsequently, the application can also present the generated appearance evaluation through an image interface displayed on the display screen of the electronic device 320 .
  • the solution of the present disclosure can improve the portability of the device, so that the user can quickly and conveniently obtain the appearance evaluation.
  • FIG. 3C illustrates an example electronic device 300C according to further embodiments of the present disclosure.
  • the electronic device 300C includes a mobile device 330, and cameras 325-1, 325-2, and 325-3 (individually or collectively referred to as cameras 325) attached to the mobile device 330.
  • the cameras 325-1 and 325-2 may be symmetrically arranged on the left and right sides of the mobile device 330, and the camera 325-3 may be arranged above the mobile device 330.
  • Such an arrangement enables the camera 325 to capture an image of the subject more comprehensively.
  • the camera 325 may be communicatively coupled with the mobile device 330 through a wired or wireless connection to enable the mobile device 330 to acquire images captured by the camera 325 .
  • the mobile device 330 may perform corresponding analysis using the images captured by the communicatively coupled cameras 325 to generate an appearance evaluation of the subject.
  • the electronic device 330 may also utilize the display screen to present the appearance evaluation.
  • the user can fix the cameras 325-1, 325-2 and 325-3, which are independent accessories, to the user's mobile device 330 by means of, for example, buckles, and establish a USB or Bluetooth connection between the plurality of cameras 325 and the mobile device 330.
  • a user may hold mobile device 330 , for example, and may run an application installed on mobile device 330 that is capable of detecting a plurality of cameras 325 communicatively coupled to mobile device 330 .
  • the application program enables the mobile device 330 to issue an instruction to capture an image of the user using the camera 325, and perform corresponding image analysis on the captured image to generate an appearance evaluation. Subsequently, the application can also present the generated appearance evaluation through an image interface displayed on the display screen of the electronic device 330 .
  • the solution of the present disclosure can further improve the compatibility of the mobile device by providing multiple cameras as independent accessories.
  • FIGS. 4A-4D illustrate example electronic devices according to further embodiments of the present disclosure.
  • FIG. 4A illustrates an example electronic device 410 according to further embodiments of the present disclosure.
  • electronic device 410 may be implemented as a mobile device having multiple rear cameras 415-1, 415-2, and 415-3 (referred to individually or collectively as rear cameras 415).
  • the rear cameras 415 are arranged to have at least a predetermined distance from each other, so that different rear cameras 415 can capture images of different regions of the object.
  • the electronic device 410 may perform a corresponding analysis using the captured images to generate an appearance evaluation of the subject.
  • the electronic device 410 may also utilize the display screen 425 on the front of the mobile device to present the appearance evaluation.
  • FIG. 4C illustrates an example electronic device 430 according to further embodiments of the present disclosure.
  • the electronic device 430 includes a mobile device with a rear-facing camera 435, and a camera 440-1 and a camera 440-2 attached to the mobile device.
  • the cameras 440 - 1 and 440 - 2 may be symmetrically arranged on both sides of the mobile device to capture the image of the object more comprehensively.
  • camera 440-1 and camera 440-2 may be communicatively coupled with the mobile device through a wired or wireless connection to enable the mobile device to acquire images captured by camera 440-1 and camera 440-2.
  • the electronic device 430 may perform corresponding analysis using the images captured by the rear camera 435 and the communicatively coupled cameras 440-1 and 440-2 to generate an appearance evaluation of the subject.
  • the electronic device 430 may also utilize a display screen on the front of the mobile device to present an appearance rating (not shown).
  • FIG. 4D illustrates an example electronic device 450 according to further embodiments of the present disclosure.
  • electronic device 450 includes a mobile device, and cameras 455-1, 455-2, and 455-3 (individually or collectively referred to as cameras 455) attached to the mobile device.
  • the camera 455-1 and the camera 455-2 may be symmetrically arranged on the left and right sides of the mobile device, and the camera 455-3 may be arranged above the mobile device.
  • the camera 455 may be communicatively coupled with the mobile device through a wired or wireless connection to enable the mobile device to acquire images captured by the camera 455 .
  • the mobile device may perform corresponding analysis using images captured by the communicatively coupled camera 455 to generate an appearance evaluation of the subject.
  • the electronic device 450 may also utilize the display screen on the front of the mobile device to present the appearance evaluation.
  • the solution of the present disclosure can further improve the accuracy of appearance evaluation.
  • the user to be evaluated can hold the mobile device by himself.
  • since the display screen is located on the side opposite the cameras, such an arrangement also enables another user to conveniently hold the mobile device to capture images of the user to be evaluated.
  • the process of generating the appearance evaluation of the object will be described below with reference to FIG. 5 and FIGS. 6A to 6F .
  • the following processes may be implemented by any of the electronic devices described in conjunction with FIGS. 1-4 .
  • the following describes the specific process of generating the appearance evaluation by taking the environment 100 shown in FIG. 1 as an example.
  • the electronic device 120 may determine whether the position or posture of the object 110 conforms to the image capturing conditions of the camera 140 .
  • the image acquisition condition may be, for example, an angle range or a distance range of the object 110 relative to the camera 140 .
  • the electronic device 120 may determine whether the angle or distance of the subject 110 relative to each camera 140 meets preset image capture conditions, and prompt the subject 110 to adjust its position or posture when the conditions are not met. For example, after image capture starts, if the subject 110 is determined to be too far from the camera 140, the electronic device 120 may remind the subject 110 by voice to adjust its position, e.g., "please come closer". Alternatively, when the face of the subject 110 is turned too far toward the camera 140-1 on one side, the electronic device 120 can use a corresponding visual effect on the presentation device 130 to remind the subject 110 to adjust its posture.
  • the electronic device 120 may present an avatar in the display area of the presentation device 130, and remind the subject 110 that the face orientation should be adjusted in a certain direction through the animation of the avatar.
  • the electronic device 120 may, for example, also provide a voice reminder while presenting the animation, so that the object 110 can be more clear about how to adjust the posture or position.
  • the electronic device 120 can also remind the object 110 to maintain its position or posture. For example, the electronic device 120 may remind the subject to maintain a position or posture for image capture through voice. Alternatively or additionally, the electronic device 120 may also set a timer to start capturing images after a predetermined time when it is determined that the image capturing conditions are satisfied. For example, the electronic device 120 may remind the user to keep the gesture through voice information, and inform the subject 110 that the image acquisition will start in 3 seconds.
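For illustration only, the following is a minimal Python sketch of the capture-condition check and prompting flow described above. The thresholds, the `prompt` callback, and the countdown value are assumptions made for the sketch, not parameters fixed by the present disclosure.

```python
import time

# Assumed thresholds; the disclosure only requires "an angle range or a
# distance range of the object relative to the camera".
MAX_DISTANCE_M = 0.6   # hypothetical maximum subject-to-camera distance
MAX_YAW_DEG = 20.0     # hypothetical maximum face rotation toward one side
COUNTDOWN_S = 3        # "image acquisition will start in 3 seconds"

def check_and_prompt(distance_m: float, yaw_deg: float, prompt) -> bool:
    """Return True once the subject satisfies the image capturing conditions,
    prompting a position or posture adjustment otherwise."""
    if distance_m > MAX_DISTANCE_M:
        prompt("Please come closer")  # voice reminder, as in the text
        return False
    if abs(yaw_deg) > MAX_YAW_DEG:
        side = "left" if yaw_deg > 0 else "right"
        prompt(f"Please turn your face slightly to the {side}")
        return False
    prompt(f"Please hold still; capturing starts in {COUNTDOWN_S} seconds")
    time.sleep(COUNTDOWN_S)  # timer before capture starts, as described above
    return True
```

For example, `check_and_prompt(0.8, 0.0, print)` would print "Please come closer" and return False.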
  • the electronic device 120 may further adjust the focus of one or more cameras 140 according to the characteristics of the object 110 .
  • the electronic device 120 may dynamically adjust the focal length of the camera 140 according to the curvature of the face of the subject 110 to make the captured image clearer. It should be understood that any suitable dynamic focusing technology in the art can be used to realize the adjustment of the focal length, and the specific details will not be described in detail here.
  • the orientation of one or more of cameras 140 may be arranged differently.
  • FIG. 5 shows a schematic diagram 500 of an arrangement of cameras according to some embodiments of the present disclosure.
  • the camera 140-1 is set to have an angle 510 with the surface of the electronic device 120 to enable better capture of the right face image of the subject 110, and the camera 140-2 is set to have an angle 520 with the surface of the electronic device 120 to enable better capture of the left face image of the subject 110.
  • angle 510 may be the same as angle 520 when cameras 140-1 and 140-2 are symmetrically arranged relative to electronic device 120.
  • angle 510 and angle 520 may be any angle in the range of 10° to 30°.
  • considering that facial curvature may vary greatly between different objects, a preset angle may not be suitable for some objects.
  • the camera 140 may also dynamically adjust the shooting angle relative to the electronic device 120 .
  • the electronic device 120 may adjust the shooting angle of one or more cameras 140 according to the characteristics of the object 110 .
  • the electronic device 120 may, for example, adjust the shooting angle 510 of the camera 140-1 according to the curvature of the right face of the subject 110, and adjust the shooting angle 520 of the camera 140-3 according to the curvature of the left face of the subject 110, to make the captured images clearer. It should be understood that any suitable driving structure may be used to adjust the angle of the camera, which is not intended to be limited by the present disclosure.
  • FIGS. 6A, 6B, and 6C show different images of objects captured by different cameras, respectively.
  • Image 610 (referred to as the first image for convenience of description) may be an image of the left cheek of subject 110 captured by camera 140-1; image 630 (referred to as the second image) may be an image of the right cheek captured by camera 140-3; and image 620 (referred to as the third image) may be a frontal face image captured by camera 140-2.
  • the electronic device 120 may perform appearance analysis using the acquired multiple images. Specifically, the electronic device 120 may first determine the corresponding interest area in the multiple images according to the type of appearance evaluation to be provided. In some implementations, the electronic device 120 may pre-establish an association relationship between the type of appearance evaluation, the location of the camera, and the corresponding area of interest. Exemplarily, the electronic device 120 may pre-store a map of "Facial Pore State Evaluation", "Camera 140-1" and the corresponding region of interest.
  • a region of interest can be represented as a region enclosed by multiple feature points.
  • the electronic device 120 may indicate the corresponding region of interest by storing the descriptions of these feature points.
  • Such a feature point description can enable the electronic device 120 to identify an image location corresponding to the feature point from the corresponding image.
  • when providing the "facial pore state" evaluation, the electronic device 120 can determine, according to the pre-stored map, the descriptions of the plurality of feature points corresponding to "facial pore state" and the camera 140-1, and detect the corresponding feature points 605 from the captured image using feature point recognition technology. Subsequently, the electronic device 120 may determine a corresponding region of interest 615 (referred to as the first region of interest for convenience of description) from the first image 610 according to the detected feature points 605.
  • similarly, the electronic device 120 may determine a corresponding region of interest 635 (referred to as the second region of interest for convenience of description) from the second image 630, and determine a corresponding region of interest 625 (referred to as the third region of interest) from the third image 620.
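As an illustration of the mapping described above, the sketch below shows one possible (assumed) way to store the association between evaluation type, camera, and feature-point description, and to turn detected feature points into a region of interest. `detect_feature_points` stands in for any feature point recognition technique; all names are hypothetical.

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[int, int]  # (x, y) pixel coordinates

# Pre-stored map: (evaluation type, camera) -> ids of the feature points
# that enclose the corresponding region of interest.
ROI_MAP: Dict[Tuple[str, str], List[int]] = {
    ("facial_pore_state", "camera_140_1"): [101, 102, 103, 104],
}

def region_of_interest(
    image,
    evaluation: str,
    camera: str,
    detect_feature_points: Callable[[object, List[int]], Dict[int, Point]],
) -> List[Point]:
    """Look up the stored feature-point description, locate those points in
    the captured image, and return the polygon enclosing the ROI."""
    point_ids = ROI_MAP[(evaluation, camera)]
    located = detect_feature_points(image, point_ids)
    return [located[i] for i in point_ids]
```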
  • the first region of interest 615 , the second region of interest 635 and the third region of interest 625 respectively represent a set of appearance features (in this example, pore features) corresponding to different parts of the object.
  • Appearance features discussed herein may include, but are not limited to: skin pigmentation characteristics, wrinkle characteristics, red zone characteristics, pigmentation characteristics, acne characteristics, dark circle characteristics, blackhead characteristics, and the like.
  • the corresponding region of interest of the blackhead mainly includes the nose of the subject.
  • the electronic device 120 may determine a final appearance evaluation based on the first area of interest 615 , the second area of interest 635 , and the third area of interest 625 .
  • the electronic device 120 may perform corresponding image analysis on the first region of interest 615, the second region of interest 635, and the third region of interest 625, respectively, to obtain regional appearance evaluation results corresponding to different regions of interest.
  • the electronic device 120 may then provide an overall appearance rating by fusing the multiple regional appearance ratings.
  • since images collected by different cameras may overlap, when fusing the appearance evaluations of multiple regions, the electronic device 120 also needs to consider whether multiple regions of interest overlap, so as to avoid repeated calculation leading to inaccurate results.
  • the correspondence between different interest regions may be pre-stored in the electronic device 120 .
  • correspondences may be pre-stored to indicate that: region 640-1 in the first region of interest 615 is a non-overlapping region, and region 640-2 is a region that overlaps region 645-1 in the third region of interest 625; region 650-2 in the second region of interest 635 is a non-overlapping region, and region 650-1 is a region that overlaps region 645-3 in the third region of interest 625; and region 645-2 in the third region of interest 625 is a non-overlapping region.
  • the region correspondences may be maintained by storing feature points of overlapping regions in different interest regions.
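For illustration, the pre-stored correspondences could take a form like the following sketch (an assumed data layout, not the patent's storage format): overlapping pairs link a region in one region of interest to the region it overlaps in another, so that fusion can avoid double counting.

```python
# Overlapping pairs: (region of interest, sub-region) -> overlapping counterpart.
OVERLAPPING_PAIRS = {
    ("first_roi_615", "640-2"): ("third_roi_625", "645-1"),
    ("second_roi_635", "650-1"): ("third_roi_625", "645-3"),
}

# Sub-regions counted exactly once.
NON_OVERLAPPING = {
    ("first_roi_615", "640-1"),
    ("second_roi_635", "650-2"),
    ("third_roi_625", "645-2"),
}
```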
  • the electronic device 120 may utilize different strategies to handle overlapping and non-overlapping regions. For example, when using the number of facial pores to represent the "facial pore state", the electronic device 120 may first determine, through image processing, the number of pores in each of regions 640-1, 640-2, 650-1, 650-2, 645-1, 645-2, and 645-3.
  • for the overlapping regions, the electronic device 120 may, for example, determine the average of the number of pores in region 640-2 determined based on the first image 610 and the number of pores in region 645-1 determined based on the third image 620 (referred to as the first average value for convenience of description), and likewise determine the average of the number of pores in region 650-1 determined based on the second image 630 and the number of pores in region 645-3 determined based on the third image 620 (referred to as the second average value). The total number of pores of the object 110 is then determined as the sum of the numbers of pores in the non-overlapping regions 640-1, 650-2, and 645-2, the first average value, and the second average value.
  • the total number of pores can thus be expressed as: Total = R1 + L1 + F1 + (R0 + FR)/2 + (L0 + FL)/2, where R1 represents the number of pores in region 640-1, L1 represents the number of pores in region 650-2, F1 represents the number of pores in region 645-2, R0 represents the number of pores in region 640-2, FR represents the number of pores in region 645-1, L0 represents the number of pores in region 650-1, and FL represents the number of pores in region 645-3.
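The fusion rule above can be transcribed directly; the following sketch assumes the per-region pore counts have already been obtained through image processing, and the variable names follow the text.

```python
def total_pores(R1: int, L1: int, F1: int,
                R0: int, FR: int, L0: int, FL: int) -> float:
    """Non-overlapping regions (R1: 640-1, L1: 650-2, F1: 645-2) are counted
    once; overlapping pairs (R0/FR: 640-2/645-1, L0/FL: 650-1/645-3) are
    averaged across the two images that cover them."""
    return R1 + L1 + F1 + (R0 + FR) / 2 + (L0 + FL) / 2
```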
  • the "number of facial pores” may be a piece of data used to determine the "state of facial pores” evaluation.
  • the electronic device 120 can also use a similar method to determine other data such as "pore area ratio", "pore size", and "pore color depth", and determine the final "facial pore state" based on these data.
  • the determination of some appearance evaluations may not involve overlapping regions.
  • the final crow's feet evaluation may be directly determined based on the evaluation results corresponding to the left-eye image and the right-eye image respectively collected by different cameras.
  • Examples of skin evaluations to which the present disclosure is applicable may also include: skin pigmentation evaluations, wrinkle evaluations, red zone evaluations, pigmentation evaluations, pimple evaluations, dark circle evaluations, blackhead evaluations, other skin evaluations that can be determined using image analysis, or any combination of the above. It should be understood that these evaluations may be determined in the manner discussed above and will not be described in detail here.
  • the electronic device 120 may also use the methods discussed above to determine appearance scores (e.g., facial attractiveness scores) corresponding to different regions, and use a fusion method based on overlapping regions to determine the final appearance score, which is not described in detail here.
  • although the process of generating the appearance evaluation has been described above by taking images captured by three cameras as an example, the present disclosure can analyze any number of images (e.g., 2 images, or more than 3 images) in a similar manner and fuse the appearance evaluations determined based on each image to determine the subject's overall appearance evaluation.
  • electronic device 120 may utilize presentation device 130 to provide appearance evaluation 160 .
  • electronic device 120 may utilize presentation device 130 to present appearance rating 160 through graphical interface 150 .
  • the electronic device 120 may also provide appearance evaluations through other media forms.
  • electronic device 120 may send the appearance evaluation to subject 110's mailbox via email, text message, or other means of communication.
  • the electronic device 120 may also generate a three-dimensional model of the object 110 according to multiple images captured by the camera, and use the presentation device 130 to simultaneously display the three-dimensional model and appearance evaluation.
  • FIG. 7 illustrates an example graphical interface 700 in accordance with some embodiments of the present disclosure. As shown in FIG. 7, the graphical interface 700 includes a three-dimensional model 710 and a plurality of appearance evaluations 720-1, 720-2, and 730. It should be understood that the three-dimensional model 710 shown in FIG. 7 is only schematic, and the presented three-dimensional model 710 can accurately present skin images of different regions of the subject.
  • the user can also control the display of the graphical interface 700 by performing certain operations.
  • the user can change the presentation angle of the three-dimensional model 710 by using a swipe or drag operation on the touch screen, so as to facilitate the user to view the skin state of different regions.
  • the electronic device 120 may also present corresponding appearance evaluations at different locations of the three-dimensional model 710.
  • the electronic device 120 may present the appearance evaluation 720-1 "Crow's feet score: 75" corresponding to the "crow's feet evaluation" at the corner of the eye of the three-dimensional model, and present the appearance evaluation 720-2 "Nasolabial folds score: 85" corresponding to the "nasolabial fold evaluation" near the nasolabial folds.
  • the electronic device 120 may further map the appearance evaluation as a texture at a corresponding position of the three-dimensional model 710 , so that the object 110 can view the appearance evaluation more intuitively.
  • FIG. 8 shows a flowchart of an example appearance analysis process 800 in accordance with an embodiment of the present disclosure.
  • Process 800 may be implemented by any of the electronic devices described with reference to FIGS. 1-4 , for example.
  • the electronic device acquires a first image associated with a first region of the object, the first image being captured by a first camera.
  • the electronic device acquires a second image associated with a second region of the object, the second image being captured by a second camera, wherein the first region is different from the second region.
  • the electronic device provides an appearance evaluation of the subject, wherein the appearance evaluation is determined based on the first image and the second image.
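For illustration only, process 800 can be sketched as follows; the camera objects and the `analyze`, `fuse`, and `present` helpers are placeholders assumed for the sketch, not APIs defined by the present disclosure.

```python
def appearance_analysis(first_camera, second_camera, analyze, fuse, present):
    first_image = first_camera.capture()    # image of the object's first region
    second_image = second_camera.capture()  # image of a different, second region
    # The appearance evaluation is determined from both images, e.g. using the
    # overlap-aware fusion strategy described earlier.
    evaluation = fuse(analyze(first_image), analyze(second_image))
    present(evaluation)                     # e.g., on a display screen
    return evaluation
```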
  • FIG. 9 shows a flowchart of yet another example skin analysis process 900 in accordance with embodiments of the present disclosure.
  • Process 900 may be implemented by any of the electronic devices described with reference to FIGS. 1-4 , for example.
  • the electronic device may prompt the user to adjust the posture or position.
  • the electronic device may send a photographing instruction to multiple cameras.
  • the first camera, the third camera, and the second camera perform photographing actions, respectively.
  • the electronic device may acquire left-face images, front-face images, and right-face images captured by the first camera, the third camera, and the second camera, respectively.
  • the electronic device may perform feature point detection and ROI extraction on the left face image, front face image, and right face image, respectively.
  • the electronic device may perform skin detection on the extracted left face ROI, front face ROI, and right face ROI, respectively, to obtain left face, front face, and right face skin detection results.
  • the electronic device may fuse the left face skin detection result, the front face skin detection result, and the right face skin detection result using the result fusion method described above to determine the user's final skin detection result.
  • the electronic device may present the skin detection results through the screen.
  • the electronic device may also output the skin evaluation by means of text message, email, or a printed written report.
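A condensed sketch of process 900 follows; every helper name here is an assumption introduced for illustration. Each face image passes through feature point detection and ROI extraction, then skin detection, before the per-view results are fused and output.

```python
def skin_analysis_process(cameras, extract_roi, detect_skin, fuse, outputs):
    """cameras: (first, third, second) camera, capturing the left face,
    front face, and right face respectively, as described above."""
    views = ("left_face", "front_face", "right_face")
    per_view = {}
    for view, camera in zip(views, cameras):
        image = camera.capture()
        roi = extract_roi(image, view)     # feature point detection + ROI
        per_view[view] = detect_skin(roi)  # per-view skin detection result
    final = fuse(per_view)                 # overlap-aware result fusion
    for output in outputs:                 # screen, text message, email, print
        output(final)
    return final
```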
  • FIG. 10 illustrates an example appearance analysis apparatus 1000 according to one embodiment of the present disclosure.
  • the example appearance analysis apparatus 1000 may be implemented as one or more software engines, hardware components or combinations thereof, etc., configured with logic for implementing the functions of the corresponding modules.
  • the appearance analysis apparatus 1000 may include a first image acquisition unit 1010 , a second image acquisition unit 1020 , and an evaluation providing unit 1030 .
  • the first image acquisition unit 1010 is configured to acquire a first image associated with the first area of the object, wherein the first image is acquired by the first camera.
  • the second image acquisition unit 1020 is configured to acquire a second image associated with a second area of the object, the second image being acquired by the second camera, wherein the first area is different from the second area.
  • the evaluation providing unit 1030 is configured to provide an appearance evaluation of the subject, wherein the appearance evaluation is determined based on the first image and the second image.
  • the appearance analysis apparatus 1000 further includes an object prompting unit configured to prompt the object to adjust the position or posture if the position or posture of the object does not meet the image capturing conditions of the first camera or the second camera.
  • the appearance analysis apparatus 1000 further includes a camera adjustment unit configured to adjust, based on the characteristics of the object, the acquisition parameters of at least one of the first camera and the second camera, the acquisition parameters including at least one of a shooting angle and a focal length.
  • the first camera and the second camera are arranged symmetrically on opposite sides of the image capture device.
  • the electronic device further includes a third camera
  • the appearance analysis apparatus 1000 further includes a third image acquisition unit configured to acquire a third image associated with the third area of the object, the third image being captured by the third camera, wherein the third camera is arranged on the image capturing device at the same distance from the first camera and the second camera.
  • the appearance analysis apparatus 1000 further includes an evaluation determination unit configured to: determine a first region of interest from the first image, the first region of interest representing a first set of appearance features of the object; determine from the second image A second region of interest, the second region of interest representing a second set of appearance features of the object; and determining an appearance evaluation for the appearance feature of the object based on at least the first region of interest and the second region of interest.
  • the evaluation determination unit is further configured to: if the first region of interest and the second region of interest include overlapping regions: determine a first appearance evaluation corresponding to the overlapping region based on the first region of interest; based on the second region of interest, determining a second appearance evaluation corresponding to the overlapping region; and determining an appearance evaluation for the appearance feature of the subject based on the first appearance evaluation and the second appearance evaluation.
  • the evaluation providing unit 1030 is further configured to: present a three-dimensional model of the object, wherein the three-dimensional model is generated based on at least the first image and the second image; and present corresponding content of the appearance evaluation at different locations of the three-dimensional model.
  • the appearance evaluation includes at least one of a skin evaluation and an appearance score.
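As a purely illustrative sketch, the units of apparatus 1000 described above could be wired together in software as follows; the class and attribute names are hypothetical, not part of the disclosure.

```python
class AppearanceAnalysisApparatus:
    """Mirrors units 1010, 1020, and 1030 of apparatus 1000 as callables."""

    def __init__(self, acquire_first, acquire_second, determine, provide):
        self.acquire_first = acquire_first    # first image acquisition unit 1010
        self.acquire_second = acquire_second  # second image acquisition unit 1020
        self.determine = determine            # evaluation determination unit
        self.provide = provide                # evaluation providing unit 1030

    def run(self):
        first = self.acquire_first()
        second = self.acquire_second()
        self.provide(self.determine(first, second))
```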
  • FIG. 11 is a schematic structural diagram of an electronic device 1100 provided by an embodiment of the present application.
  • the electronic device 1100 may be any of the electronic devices discussed above with reference to FIGS. 1-4 .
  • the electronic device 1100 may include a processor 1110, an external memory interface 1120, an internal memory 1121, a universal serial bus (USB) interface 1130, a charge management module 1140, a power management module 1141, a battery 1142, an antenna 1, an antenna 2, a mobile communication module 1150, a wireless communication module 1160, an audio module 1170, a speaker 1170A, a receiver 1170B, a microphone 1170C, a headphone jack 1170D, a sensor module 1180, buttons 1190, a motor 1191, an indicator 1192, a camera 1193, a display screen 1194, a subscriber identification module (SIM) card interface 1195, and the like.
  • the sensor module 1180 may include a pressure sensor 1180A, a gyroscope sensor 1180B, an air pressure sensor 1180C, a magnetic sensor 1180D, an acceleration sensor 1180E, a distance sensor 1180F, a proximity light sensor 1180G, a fingerprint sensor 1180H, a temperature sensor 1180J, a touch sensor 1180K, an ambient light sensor 1180L, a bone conduction sensor 1180M, and the like.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 1100 .
  • the electronic device 1100 may include more or fewer components than shown, or combine some components, or split some components, or have a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 1110 may include one or more processing units. For example, the processor 1110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the controller may be the nerve center and command center of the electronic device 1100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 1110 for storing instructions and data.
  • the memory in the processor 1110 is a cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 1110 . If the processor 1110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 1110 is reduced, thereby improving the efficiency of the system.
  • the processor 1110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 1110 may contain multiple sets of I2C buses.
  • the processor 1110 can be respectively coupled to the touch sensor 1180K, the charger, the flash, the camera 1193, etc. through different I2C bus interfaces.
  • the processor 1110 can couple the touch sensor 1180K through the I2C interface, so that the processor 1110 and the touch sensor 1180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 1100 .
  • the I2S interface can be used for audio communication.
  • the processor 1110 may contain multiple sets of I2S buses.
  • the processor 1110 may be coupled with the audio module 1170 through an I2S bus to implement communication between the processor 1110 and the audio module 1170 .
  • the audio module 1170 can transmit audio signals to the wireless communication module 1160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
  • the audio module 1170 and the wireless communication module 1160 may be coupled through a PCM bus interface.
  • the audio module 1170 can also transmit audio signals to the wireless communication module 1160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 1110 with the wireless communication module 1160 .
  • the processor 1110 communicates with the Bluetooth module in the wireless communication module 1160 through the UART interface to implement the Bluetooth function.
  • the audio module 1170 can transmit audio signals to the wireless communication module 1160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 1110 with the display screen 1194, the camera 1193 and other peripheral devices.
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 1110 communicates with the camera 1193 through the CSI interface, so as to realize the photographing function of the electronic device 1100 .
  • the processor 1110 communicates with the display screen 1194 through the DSI interface to implement the display function of the electronic device 1100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 1110 with the camera 1193, the display screen 1194, the wireless communication module 1160, the audio module 1170, the sensor module 1180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 1130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 1130 can be used to connect a charger to charge the electronic device 1100, and can also be used to transmit data between the electronic device 1100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 1100 .
  • the electronic device 1100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 1140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 1140 may receive charging input from the wired charger through the USB interface 1130 .
  • the charging management module 1140 may receive wireless charging input through the wireless charging coil of the electronic device 1100 . While the charging management module 1140 charges the battery 1142 , it can also supply power to the electronic device through the power management module 1141 .
  • the power management module 1141 is used to connect the battery 1142 , the charging management module 1140 and the processor 1110 .
  • the power management module 1141 receives input from the battery 1142 and/or the charging management module 1140, and supplies power to the processor 1110, the internal memory 1121, the external memory, the display screen 1194, the camera 1193, and the wireless communication module 1160.
  • the power management module 1141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 1141 may also be provided in the processor 1110 .
  • the power management module 1141 and the charging management module 1140 may also be provided in the same device.
  • the wireless communication function of the electronic device 1100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 1150, the wireless communication module 1160, the modem processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 1100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 1150 may provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the electronic device 1100 .
  • the mobile communication module 1150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 1150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 1150 can also amplify the signal modulated by the modulation and demodulation processor, and then convert it into electromagnetic waves and radiate it out through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 1150 may be provided in the processor 1110 .
  • at least part of the functional modules of the mobile communication module 1150 may be provided in the same device as at least part of the modules of the processor 1110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 1170A, the receiver 1170B, etc.), or displays an image or video through the display screen 1194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 1110, and be provided in the same device as the mobile communication module 1150 or other functional modules.
  • the wireless communication module 1160 can provide wireless communication solutions applied on the electronic device 1100, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 1160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 1160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 1110 .
  • the wireless communication module 1160 can also receive the signal to be sent from the processor 1110 , perform frequency modulation on it, amplify it, and then convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 1100 is coupled with the mobile communication module 1150, and the antenna 2 is coupled with the wireless communication module 1160, so that the electronic device 1100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and the like.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
  • the electronic device 1100 implements a display function through a GPU, a display screen 1194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, which is connected to the display screen 1194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 1110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 1194 is used to display images, videos, and the like.
  • Display screen 1194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
  • the electronic device 1100 may include 1 or N display screens 1194, where N is a positive integer greater than 1.
  • the electronic device 1100 may implement a shooting function through an ISP, a camera 1193, a video codec, a GPU, a display screen 1194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 1193. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin tone. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 1193.
  • Camera 1193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 1100 may include 1 or N cameras 1193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 1100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy, and the like.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 1100 may support one or more video codecs. In this way, the electronic device 1100 can play or record videos in various encoding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 1100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 1120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 1100.
  • the external memory card communicates with the processor 1110 through the external memory interface 1120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 1121 may be used to store computer executable program code, which includes instructions.
  • the processor 1110 executes various functional applications and data processing of the electronic device 1100 by executing the instructions stored in the internal memory 1121 .
  • the internal memory 1121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 1100 and the like.
  • the internal memory 1121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 1100 may implement audio functions through an audio module 1170, a speaker 1170A, a receiver 1170B, a microphone 1170C, an earphone interface 1170D, and an application processor. Such as music playback, recording, etc.
  • the audio module 1170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 1170 may also be used to encode and decode audio signals. In some embodiments, the audio module 1170 may be provided in the processor 1110 , or some functional modules of the audio module 1170 may be provided in the processor 1110 .
  • Speaker 1170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 1100 can listen to music through the speaker 1170A, or listen to a hands-free call.
  • the receiver 1170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 1100 answers a call or a voice message, the receiver 1170B can be placed close to the human ear to answer the voice.
  • Microphone 1170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 1170C to input a sound signal into the microphone 1170C.
  • the electronic device 1100 may be provided with at least one microphone 1170C. In other embodiments, the electronic device 1100 may be provided with two microphones 1170C, which may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 1100 may also be provided with three, four or more microphones 1170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 1170D is used to connect wired earphones.
  • the earphone interface 1170D may be a USB interface 1130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 1180A is used to sense pressure signals and can convert the pressure signals into electrical signals. In some embodiments, the pressure sensor 1180A may be provided on the display screen 1194. There are many types of pressure sensor 1180A; for example, a capacitive pressure sensor may comprise at least two parallel plates of conductive material.
  • the electronic device 1100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 1100 detects the intensity of the touch operation according to the pressure sensor 1180A.
  • the electronic device 1100 may also calculate the touched position according to the detection signal of the pressure sensor 1180A. In some embodiments, touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 1180B may be used to determine the motion attitude of the electronic device 1100. In some embodiments, the angular velocity of the electronic device 1100 about three axes (i.e., the x, y, and z axes) may be determined through the gyro sensor 1180B.
  • the gyro sensor 1180B can be used for image stabilization.
  • the gyroscope sensor 1180B detects the shaking angle of the electronic device 1100, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to offset the shaking of the electronic device 1100 through reverse motion to achieve anti-shake.
  • the gyro sensor 1180B can also be used for navigation and somatosensory game scenarios.
  • Air pressure sensor 1180C is used to measure air pressure. In some embodiments, the electronic device 1100 calculates the altitude through the air pressure value measured by the air pressure sensor 1180C to assist in positioning and navigation.
  • Magnetic sensor 1180D includes a Hall sensor.
  • the electronic device 1100 can detect the opening and closing of the flip holster using the magnetic sensor 1180D.
  • when the electronic device 1100 is a flip phone, the electronic device 1100 can detect the opening and closing of the flip cover according to the magnetic sensor 1180D, and further set features such as automatic unlocking upon opening according to the detected opening or closing state of the holster or the flip cover.
  • the acceleration sensor 1180E can detect the magnitude of the acceleration of the electronic device 1100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the electronic device 1100 is stationary. It can also be used to identify the posture of electronic devices, and can be used in applications such as horizontal and vertical screen switching, pedometers, etc.
  • the distance sensor 1180F is used to measure distance. The electronic device 1100 can measure distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 1100 can use the distance sensor 1180F to measure distance to achieve fast focusing.
  • Proximity light sensor 1180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 1100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 1100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 1100 . When insufficient reflected light is detected, the electronic device 1100 may determine that there is no object near the electronic device 1100 .
  • the electronic device 1100 can use the proximity light sensor 1180G to detect that the user holds the electronic device 1100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 1180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 1180L is used to sense ambient light brightness.
  • the electronic device 1100 can adaptively adjust the brightness of the display screen 1194 according to the perceived ambient light brightness.
  • the ambient light sensor 1180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 1180L can also cooperate with the proximity light sensor 1180G to detect whether the electronic device 1100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 1180H is used to collect fingerprints.
  • the electronic device 1100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 1180J is used to detect the temperature.
  • the electronic device 1100 uses the temperature detected by the temperature sensor 1180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 1180J exceeds a threshold, the electronic device 1100 reduces the performance of a processor located near the temperature sensor 1180J, in order to reduce power consumption and implement thermal protection.
  • in other embodiments, when the temperature is lower than another threshold, the electronic device 1100 heats the battery 1142 to avoid abnormal shutdown of the electronic device 1100 due to low temperature.
  • in some other embodiments, when the temperature is lower than still another threshold, the electronic device 1100 boosts the output voltage of the battery 1142 to avoid abnormal shutdown caused by low temperature.
  • Touch sensor 1180K is also called a "touch panel".
  • the touch sensor 1180K may be disposed on the display screen 1194, and the touch sensor 1180K and the display screen 1194 form a touch screen, also called a "touch screen”.
  • the touch sensor 1180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 1194 .
  • the touch sensor 1180K may also be disposed on the surface of the electronic device 1100 , which is different from the location where the display screen 1194 is located.
  • the bone conduction sensor 1180M can acquire vibration signals.
  • the bone conduction sensor 1180M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 1180M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 1180M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 1170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 1180M, so as to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 1180M, and realize the function of heart rate detection.
  • the keys 1190 include a power-on key, a volume key, and the like. The keys 1190 may be mechanical keys or touch keys.
  • the electronic device 1100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 1100 .
  • Motor 1191 can generate vibrating cues.
  • the motor 1191 can be used for incoming call vibration alerts, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • touch operations on different areas of the display screen 1194 can also correspond to different vibration feedback effects.
  • touch operations in different application scenarios (for example, time reminder, receiving information, alarm clock, and games) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 1192 can be an indicator light, which can be used to indicate the charging status, the change of power, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 1195 is used to connect a SIM card.
  • the SIM card can be contacted and separated from the electronic device 1100 by inserting into the SIM card interface 1195 or pulling out from the SIM card interface 1195 .
  • the electronic device 1100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 1195 can support Nano SIM card, Micro SIM card, SIM card and so on.
  • the same SIM card interface 1195 can insert multiple cards at the same time.
  • the types of the plurality of cards may be the same or different.
  • the SIM card interface 1195 can also be compatible with different types of SIM cards.
  • the SIM card interface 1195 is also compatible with external memory cards.
  • the electronic device 1100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the electronic device 1100 may employ an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 1100 and cannot be separated from the electronic device 1100 .
  • the software system of the electronic device 1100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiments of the present application take the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 1100.
  • FIG. 12 is a block diagram of a software structure of an electronic device 1100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 1100 .
  • the management of call status (including connecting, hanging up, and the like).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the utility functions that the Java language needs to call, and the other is the core libraries of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple functional modules, for example: a surface manager, media libraries (Media Libraries), a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

An appearance analysis method and an electronic device. The method may be applied to an electronic device that includes a first camera and a second camera, and includes: the electronic device obtains a first image associated with a first region of a subject and a second image associated with a second region of the subject, where the first image is captured by the first camera, the second image is captured by the second camera, and the first region is different from the second region. The electronic device then provides an appearance evaluation of the subject, where the appearance evaluation is determined based on the first image and the second image. In this way, the accuracy of the appearance evaluation can be improved.

Description

Appearance analysis method and electronic device
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to an appearance analysis method and an electronic device.
Background
With the development of technology, some intelligent terminal devices can capture images of a subject (for example, a person) and provide an appearance evaluation of the subject through image analysis. For example, some smartphone applications can use the camera of a smartphone to capture an image of a human face and provide an evaluation of the skin condition of the face. Some smart mirrors can, for example, capture an image of a human face through a camera and provide a facial-attractiveness score, an age estimate, or the like for the subject.
However, existing intelligent terminal devices usually capture images of a user with a front-facing camera, so the image region that can be captured is limited. For example, when a smartphone camera is used to capture a face image, the captured image is usually a frontal photograph of the user's face, and it is difficult to capture specific regions of the left and right cheeks. This severely affects the accuracy of the appearance evaluation.
Summary
The present disclosure provides an appearance analysis method and an electronic device.
According to a first aspect, embodiments of the present disclosure provide an appearance analysis method, which may be applied to an electronic device including a first camera and a second camera. The method includes: the electronic device obtains a first image captured by the first camera and a second image captured by the second camera, where the first image is an image of a first region of a user and the second image is an image of a different, second region of the user. The electronic device then provides an appearance evaluation of the subject, the appearance evaluation being determined based on the first image and the second image. In some implementations, the appearance evaluation may be determined by the electronic device itself. In other implementations, the appearance evaluation may be determined by another device (for example, a server) different from the electronic device.
By obtaining the appearance evaluation of a subject based on multiple images captured by different cameras, embodiments of the present disclosure can improve the accuracy of the provided appearance evaluation. In an example implementation, embodiments of the present disclosure can simultaneously obtain multiple images of the subject, for example, a left-face image, a frontal-face image, and a right-face image, so that the appearance evaluation of the subject's face can be determined more accurately.
In some implementations, the electronic device may further determine whether the position or posture of the subject meets an image capture condition of the first camera or the second camera. If the position or posture of the subject does not meet the image capture condition, the electronic device may prompt the subject to adjust the position or posture. For example, the electronic device may prompt the subject by voice to move closer to or farther from the camera. Alternatively, the electronic device may intuitively prompt the subject, for example to turn the head to the right, through an animation on the presentation device.
By prompting the subject about position or posture, embodiments of the present disclosure can place the subject in a position or posture better suited to image capture, so that better images of the subject can be obtained and the accuracy of the appearance evaluation can be further improved.
In some implementations, the electronic device may further adjust a capture parameter of at least one of the first camera and the second camera based on a characteristic of the subject. For example, the capture parameter includes at least one of a shooting angle and a focal length.
In this way, embodiments of the present disclosure can dynamically adjust the capture parameters of the cameras for different subjects, avoiding the case where fixed capture parameters are unsuitable for a particular subject, thereby improving the general applicability of the solution. In addition, by adjusting the capture parameters, the present disclosure can also improve the quality of the captured images of the subject and thus the accuracy of the appearance evaluation.
In some implementations, the first camera and the second camera are arranged symmetrically on opposite sides of an image capture device. For example, the first camera may be arranged at the far left of the image capture device, and the second camera at the corresponding far right.
With such an arrangement, the combination of the first camera and the second camera can capture more comprehensive images of the subject, improving the accuracy of the appearance evaluation. Because the first camera and the second camera are arranged symmetrically, the first image and the second image they capture are also symmetric. In some implementations, when an image analysis engine is used to analyze the images, the second image may be flipped horizontally so that the same image analysis engine can process both the first image and the flipped second image. For example, a left-face analysis engine may be used to process both the left-face image and the flipped right-face image, which reduces development cost (see the sketch below).
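As a minimal sketch of this flip-and-reuse idea, assuming a Python/OpenCV pipeline and a hypothetical `left_face_engine` callable (neither of which is prescribed by the present disclosure):

```python
import cv2

def analyze_both_cheeks(left_img, right_img, left_face_engine):
    # The left-face engine is applied to the left-cheek image directly.
    left_result = left_face_engine(left_img)
    # Because the two cameras are mirror-symmetric, horizontally flipping
    # the right-cheek image makes it geometrically equivalent to a
    # left-cheek image, so the same engine can process it.
    flipped_right = cv2.flip(right_img, 1)  # flipCode=1: flip around the vertical axis
    right_result = left_face_engine(flipped_right)
    return left_result, right_result
```

The only geometric requirement here is the mirror symmetry of the two cameras; any analysis engine that accepts a left-cheek crop could in principle be reused this way.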
In some implementations, the electronic device further includes a third camera, where the third camera is disposed on the image capture device at the same distance from the first camera and from the second camera. The electronic device may further obtain a third image captured by the third camera, where the third image may relate to a third region of the subject. For example, for a roughly circular image capture device, the first camera and the second camera may be arranged symmetrically on its left and right sides, and the third camera may be arranged on its central axis, for example at the very top or the very bottom of the image capture device. With such an arrangement, embodiments of the present disclosure can obtain more comprehensive images of the subject and thus improve the accuracy of the appearance evaluation.
In some implementations, the appearance evaluation may be determined by the electronic device based on the first image and the second image. Specifically, the electronic device may determine a first region of interest from the first image and a second region of interest from the second image, where the first region of interest represents a first set of appearance features of the subject and the second region of interest represents a second set of appearance features of the subject. The electronic device may then determine the appearance evaluation for the appearance features of the subject based on at least the first region of interest and the second region of interest.
In some implementations, the electronic device may determine a corresponding region of interest by detecting multiple feature points in an image. For example, the electronic device may pre-store a correspondence between an appearance feature and a set of feature points, detect the corresponding feature points in the corresponding image, and determine the region enclosed by these feature points as the corresponding region of interest. For example, the appearance features may include, but are not limited to: a pore feature, a pigmentation-spot feature, a wrinkle feature, a red-zone feature, an acne feature, a dark-circle feature, a blackhead feature, or any combination thereof. By setting corresponding regions of interest for different appearance features, implementations of the present disclosure can effectively fuse the detection results of different images and can also support multiple types of appearance analysis based on the obtained images.
In some implementations, in the process of determining the appearance evaluation, if the first region of interest and the second region of interest include an overlapping region, the electronic device may determine, based on the first region of interest, a first appearance evaluation corresponding to the overlapping region, and determine, based on the second region of interest, a second appearance evaluation corresponding to the overlapping region. The electronic device may then determine the appearance evaluation of the subject for the appearance feature based on the first appearance evaluation and the second appearance evaluation.
For example, for the overlapping region, the electronic device may compute the average of the first appearance evaluation and the second appearance evaluation and use the average as the appearance evaluation for the appearance feature. For example, the first appearance evaluation may be the number of pores in the overlapping region determined from the first image, and the second appearance evaluation may be the number of pores in the overlapping region determined from the second image. The electronic device may determine the mean of the two pore counts as the pore count of the overlapping region.
In this way, implementations of the present disclosure can determine the appearance evaluation for the overlapping region based on multiple images captured from different angles, avoiding inaccurate appearance evaluation results caused by incomplete image capture.
In some implementations, the electronic device may present at least a three-dimensional model of the subject. For example, the three-dimensional model may be generated by the electronic device based on at least the first image and the second image. The electronic device may then present corresponding parts of the appearance evaluation at different positions of the three-dimensional model.
By presenting a three-dimensional model rather than a simple two-dimensional image, implementations of the present disclosure can present the subject's appearance evaluation more intuitively. In some implementations, the presentation angle of the three-dimensional model may also change in response to a user operation, making it easier for the user to view the appearance evaluation of a particular region.
In some implementations, the appearance evaluation includes at least one of a skin evaluation and an appearance score. Examples of skin evaluations to which the present disclosure applies may further include: a pore evaluation, a pigmentation-spot evaluation, a wrinkle evaluation, a red-zone evaluation, an acne evaluation, a dark-circle evaluation, a blackhead evaluation, other skin evaluations that can be determined by image analysis, or any combination of the above.
According to a second aspect, embodiments of the present disclosure provide a terminal device. The terminal device includes at least one computing unit and at least one memory coupled to the at least one computing unit and storing instructions for execution by the at least one computing unit. The instructions, when executed by the at least one computing unit, cause the terminal device to: obtain a first image captured by a first camera and a second image captured by a second camera, where the first image is an image of a first region of a user and the second image is an image of a different, second region of the user; and then provide an appearance evaluation of the subject, the appearance evaluation being determined based on the first image and the second image. In some implementations, the appearance evaluation may be determined by the terminal device itself. In other implementations, the appearance evaluation may be determined by another device (for example, a server) different from the terminal device.
The terminal device may be an intelligent terminal with computing capability; examples include, but are not limited to, a desktop computer, a laptop computer, a tablet computer, a smartphone, a smart watch, smart glasses, or an e-book reader. In some implementations, the first camera and the second camera may be front-facing or rear-facing cameras built into the terminal device. Alternatively, one of the first camera and the second camera may be a built-in camera and the other an external camera communicatively connected to the terminal device; or both may be external cameras communicatively connected to the terminal device.
By obtaining the appearance evaluation of a subject based on multiple images captured by different cameras, embodiments of the present disclosure can improve the accuracy of the provided appearance evaluation. In an example implementation, embodiments of the present disclosure can simultaneously obtain multiple images of the subject, for example, a left-face image, a frontal-face image, and a right-face image, so that the appearance evaluation of the subject's face can be determined more accurately.
In some implementations, the terminal device may further determine whether the position or posture of the subject meets an image capture condition of the first camera or the second camera. If the position or posture of the subject does not meet the image capture condition, the terminal device may prompt the subject to adjust the position or posture. For example, the terminal device may prompt the subject by voice to move closer to or farther from the camera. Alternatively, the terminal device may intuitively prompt the subject, for example to turn the head to the right, through an animation on the presentation device.
By prompting the subject about position or posture, embodiments of the present disclosure can place the subject in a position or posture better suited to image capture, so that better images of the subject can be obtained and the accuracy of the appearance evaluation can be further improved.
In some implementations, the terminal device may further adjust a capture parameter of at least one of the first camera and the second camera based on a characteristic of the subject. For example, the capture parameter includes at least one of a shooting angle and a focal length.
In this way, embodiments of the present disclosure can dynamically adjust the capture parameters of the cameras for different subjects, avoiding the case where fixed capture parameters are unsuitable for a particular subject, thereby improving the general applicability of the solution. In addition, by adjusting the capture parameters, the present disclosure can also improve the quality of the captured images of the subject and thus the accuracy of the appearance evaluation.
In some implementations, the terminal device may further obtain a third image captured by a third camera, where the third image may relate to a third region of the subject. The third camera may be a built-in camera of the terminal device or an external camera communicatively connected to the terminal device, and the third camera may be arranged at the same distance from the first camera and from the second camera. With such an arrangement, embodiments of the present disclosure can obtain more comprehensive images of the subject and thus improve the accuracy of the appearance evaluation.
In some implementations, the appearance evaluation may be determined by the terminal device based on the first image and the second image. Specifically, the terminal device may determine a first region of interest from the first image and a second region of interest from the second image, where the first region of interest represents a first set of appearance features of the subject and the second region of interest represents a second set of appearance features of the subject. The terminal device may then determine the appearance evaluation for the appearance features of the subject based on at least the first region of interest and the second region of interest.
In some implementations, the terminal device may determine a corresponding region of interest by detecting multiple feature points in an image. For example, the terminal device may pre-store a correspondence between an appearance feature and a set of feature points, detect the corresponding feature points in the corresponding image, and determine the region enclosed by these feature points as the corresponding region of interest. For example, the appearance features may include, but are not limited to: a pore feature, a pigmentation-spot feature, a wrinkle feature, a red-zone feature, an acne feature, a dark-circle feature, a blackhead feature, or any combination thereof. By setting corresponding regions of interest for different appearance features, implementations of the present disclosure can effectively fuse the detection results of different images and can also support multiple types of appearance analysis based on the obtained images.
In some implementations, in the process of determining the appearance evaluation, if the first region of interest and the second region of interest include an overlapping region, the terminal device may determine, based on the first region of interest, a first appearance evaluation corresponding to the overlapping region, and determine, based on the second region of interest, a second appearance evaluation corresponding to the overlapping region. The terminal device may then determine the appearance evaluation of the subject for the appearance feature based on the first appearance evaluation and the second appearance evaluation.
For example, for the overlapping region, the terminal device may compute the average of the first appearance evaluation and the second appearance evaluation and use the average as the appearance evaluation for the appearance feature. For example, the first appearance evaluation may be the number of pores in the overlapping region determined from the first image, and the second appearance evaluation may be the number of pores in the overlapping region determined from the second image. The terminal device may determine the mean of the two pore counts as the pore count of the overlapping region.
In this way, implementations of the present disclosure can determine the appearance evaluation for the overlapping region based on multiple images captured from different angles, avoiding inaccurate appearance evaluation results caused by incomplete image capture.
In some implementations, the terminal device may present at least a three-dimensional model of the subject. For example, the three-dimensional model may be generated by the terminal device based on at least the first image and the second image. The terminal device may then present corresponding parts of the appearance evaluation at different positions of the three-dimensional model.
By presenting a three-dimensional model rather than a simple two-dimensional image, implementations of the present disclosure can present the subject's appearance evaluation more intuitively. In some implementations, the presentation angle of the three-dimensional model may also change in response to a user operation, making it easier for the user to view the appearance evaluation of a particular region.
In some implementations, the appearance evaluation includes at least one of a skin evaluation and an appearance score. Examples of skin evaluations to which the present disclosure applies may further include: a pore evaluation, a pigmentation-spot evaluation, a wrinkle evaluation, a red-zone evaluation, an acne evaluation, a dark-circle evaluation, a blackhead evaluation, other skin evaluations that can be determined by image analysis, or any combination of the above.
According to a third aspect, implementations of the present disclosure provide an image capture device. The image capture device includes a first camera, a second camera, and a communication component. The first camera is configured to capture a first image associated with a first region of a subject; the second camera is configured to capture a second image associated with a second region of the subject, where the first region is different from the second region; and the communication component is configured to provide the first image and the second image to a terminal device for determining an appearance evaluation of the subject.
With such an arrangement, the image capture device provided by the present disclosure can capture images of the subject more comprehensively, thereby improving the accuracy of the determined appearance evaluation.
In some implementations, the first camera and the second camera are arranged symmetrically on opposite sides of the image capture device. For example, the first camera may be arranged at the far left of the image capture device, and the second camera at the corresponding far right.
With such an arrangement, the combination of the first camera and the second camera can capture more comprehensive images of the subject, improving the accuracy of the appearance evaluation. Because the first camera and the second camera are arranged symmetrically, the first image and the second image they capture are also symmetric. In some implementations, when an image analysis engine is used to analyze the images, the second image may be flipped horizontally so that the same image analysis engine can process both the first image and the flipped second image. For example, a left-face analysis engine may be used to process both the left-face image and the flipped right-face image, which reduces development cost.
In some implementations, the image capture device further includes a third camera, where the third camera is arranged at the same distance from the first camera and from the second camera. The image capture device may also provide, through the communication component, a third image captured by the third camera. For example, for a roughly circular image capture device, the first camera and the second camera may be arranged symmetrically on its left and right sides, and the third camera may be arranged on its central axis, for example at the very top or the very bottom of the image capture device. With such an arrangement, embodiments of the present disclosure can obtain more comprehensive images of the subject and thus improve the accuracy of the appearance evaluation.
In some implementations, the appearance evaluation includes at least one of a skin evaluation and an appearance score. Examples of skin evaluations to which the present disclosure applies may further include: a pore evaluation, a pigmentation-spot evaluation, a wrinkle evaluation, a red-zone evaluation, an acne evaluation, a dark-circle evaluation, a blackhead evaluation, other skin evaluations that can be determined by image analysis, or any combination of the above.
According to a fourth aspect, embodiments of the present disclosure provide an appearance analysis system, including the terminal device according to the second aspect and the image capture device according to the third aspect.
According to a fifth aspect, an appearance analysis apparatus is provided. The appearance analysis apparatus may include a first image obtaining unit, a second image obtaining unit, and an evaluation providing unit. Specifically, the first image obtaining unit is configured to obtain a first image associated with a first region of a subject, where the first image is captured by a first camera. The second image obtaining unit is configured to obtain a second image associated with a second region of the subject, where the second image is captured by a second camera and the first region is different from the second region. The evaluation providing unit is configured to provide an appearance evaluation of the subject, where the appearance evaluation is determined based on the first image and the second image.
In some implementations, the appearance analysis apparatus further includes a subject prompting unit configured to: if the position or posture of the subject does not meet an image capture condition of the first camera or the second camera, cause the electronic device to prompt the subject to adjust the position or posture.
In some implementations, the appearance analysis apparatus further includes a camera adjusting unit configured to: cause the electronic device to adjust, based on a characteristic of the subject, a capture parameter of at least one of the first camera and the second camera, where the capture parameter includes at least one of a shooting angle and a focal length.
In some implementations, the first camera and the second camera are arranged symmetrically on opposite sides of an image capture device.
In some implementations, the electronic device further includes a third camera, and the appearance analysis apparatus further includes a third image obtaining unit configured to obtain a third image associated with a third region of the subject, where the third image is captured by the third camera, and the third camera is disposed on the image capture device at the same distance from the first camera and from the second camera.
In some implementations, the appearance analysis apparatus further includes an evaluation determining unit configured to: determine a first region of interest from the first image, the first region of interest representing a first set of appearance features of the subject; determine a second region of interest from the second image, the second region of interest representing a second set of appearance features of the subject; and determine, based on at least the first region of interest and the second region of interest, the appearance evaluation for the appearance features of the subject.
In some implementations, the evaluation determining unit is further configured to: if the first region of interest and the second region of interest include an overlapping region: determine, based on the first region of interest, a first appearance evaluation corresponding to the overlapping region; determine, based on the second region of interest, a second appearance evaluation corresponding to the overlapping region; and determine, based on the first appearance evaluation and the second appearance evaluation, the appearance evaluation of the subject for the appearance feature.
In some implementations, the evaluation providing unit is further configured to: present a three-dimensional model of the subject, where the three-dimensional model is generated based on at least the first image and the second image; and present corresponding parts of the appearance evaluation at different positions of the three-dimensional model.
In some implementations, the appearance evaluation includes at least one of a skin evaluation and an appearance score. Examples of skin evaluations to which the present disclosure applies may further include: a pore evaluation, a pigmentation-spot evaluation, a wrinkle evaluation, a red-zone evaluation, an acne evaluation, a dark-circle evaluation, a blackhead evaluation, other skin evaluations that can be determined by image analysis, or any combination of the above.
According to a sixth aspect, a computer-readable storage medium is provided, storing one or more computer instructions that, when executed by a processor, implement the method according to the first aspect or any implementation of the first aspect.
According to a seventh aspect, a computer program product is provided. When the computer program product runs on a computer, it causes the computer to execute instructions for some or all of the steps of the method according to the first aspect or any implementation of the first aspect.
It can be understood that the appearance analysis apparatus of the fifth aspect, the computer storage medium of the sixth aspect, and the computer program product of the seventh aspect are all used to execute the method provided by the first aspect. Therefore, the explanations of the first aspect apply equally to the fifth, sixth, and seventh aspects; the beneficial effects achievable by these aspects can be found in the corresponding method and are not repeated here.
According to an eighth aspect, a smart mirror is provided, including: a first camera; a second camera, the first camera and the second camera being arranged symmetrically on opposite sides of the smart mirror; a third camera at the same distance from the first camera and from the second camera; at least one computing unit; and at least one memory coupled to the at least one computing unit and storing instructions for execution by the at least one computing unit, the instructions, when executed by the at least one computing unit, causing the smart mirror to perform actions including: obtaining a first image associated with a first region of a subject, the first image being captured by the first camera; obtaining a second image associated with a second region of the subject, the second image being captured by the second camera, where the first region is different from the second region; obtaining a third image associated with a third region of the subject, the third image being captured by the third camera; determining an appearance evaluation of the subject based on the first image, the second image, and the third image; and providing the appearance evaluation of the subject. Determining the appearance evaluation based on the first, second, and third images includes: determining a first region of interest from the first image, the first region of interest representing a first set of appearance features of the subject; determining a second region of interest from the second image, the second region of interest representing a second set of appearance features of the subject; determining a third region of interest from the third image, the third region of interest representing a third set of appearance features of the subject; and determining, based on at least the first, second, and third regions of interest, the appearance evaluation for the appearance features of the subject.
By obtaining images separately captured by the first, second, and third cameras, the smart mirror provided by the present disclosure can determine the subject's appearance evaluation more accurately and can intuitively provide the determined appearance evaluation to the user through the smart mirror.
Brief Description of the Drawings
The accompanying drawings used in the embodiments of the present application are introduced below.
FIG. 1A is a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 1B shows an example electronic device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an example electronic device according to some embodiments of the present disclosure;
FIG. 3A to FIG. 3C are schematic diagrams of example electronic devices according to other embodiments of the present disclosure;
FIG. 4A to FIG. 4D are schematic diagrams of example electronic devices according to still other embodiments of the present disclosure;
FIG. 5 is a schematic diagram of a camera arrangement according to some embodiments of the present disclosure;
FIG. 6A to FIG. 6F are schematic diagrams of a process of determining an appearance evaluation according to some embodiments of the present disclosure;
FIG. 7 shows an example graphical interface according to some embodiments of the present disclosure;
FIG. 8 is a flowchart of an example process of evaluating a subject's appearance according to some embodiments of the present disclosure;
FIG. 9 is a flowchart of an example process of evaluating facial skin according to some embodiments of the present disclosure;
FIG. 10 is a schematic block diagram of an appearance analysis apparatus according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application; and
FIG. 12 is a block diagram of the software structure of an electronic device according to an embodiment of the present application.
Detailed Description of Embodiments
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit its scope of protection.
In the description of the embodiments of the present disclosure, the term "include" and similar terms should be understood as open-ended inclusion, that is, "including but not limited to". The term "based on" should be understood as "at least partially based on". The terms "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and so on may refer to different or identical objects. Other explicit and implicit definitions may also be included below.
As discussed above, with the development of technology, some intelligent terminal devices can capture images of a subject (for example, a person) and provide an evaluation of the subject's appearance through image analysis. For example, some smartphone applications can use the camera of a smartphone to capture an image of a human face and provide an evaluation of the skin condition of the face. Some smart mirrors can, for example, capture an image of a human face through a camera and provide a facial-attractiveness score, an age estimate, or the like for the subject.
However, existing intelligent terminal devices usually capture images of a user with a front-facing camera, so the image region that can be captured is limited. For example, when a smartphone camera is used to capture a face image, the captured image is usually a frontal photograph of the user's face, and it is difficult to capture specific regions of the left and right cheeks. This severely affects the accuracy of the appearance evaluation.
To at least solve the problem of low appearance-evaluation accuracy, various embodiments of the present disclosure provide an appearance evaluation solution. In embodiments of the present disclosure, an electronic device obtains a first image associated with a first region of a subject and a second image associated with a second region of the subject, where the first image is captured by a first camera, the second image is captured by a second camera, and the first region is different from the second region. The electronic device then provides an appearance evaluation of the subject, where the appearance evaluation is determined based on the first image and the second image. In this way, embodiments of the present disclosure can capture more comprehensive images of the subject without imposing an extra burden on the user, and can therefore provide a more accurate appearance evaluation.
Specific solutions of the present disclosure are described below with reference to the drawings.
Example Environment
FIG. 1A is a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. The environment 100 may include an electronic device 120. In the example of FIG. 1A, the electronic device 120 includes multiple cameras 140-1, 140-2, and 140-3 (individually or collectively referred to as cameras 140) and a presentation device 130. The electronic device 120 can obtain images of a subject 110 through the cameras 140 and provide an appearance evaluation of the subject 110.
For example, in FIG. 1A, the electronic device 120 may be implemented as a mirror-type device, for example a smart mirror. The multiple cameras 140 may be arranged at different positions on the periphery of the electronic device 120, with at least a predetermined distance between them. With such an arrangement, the region of the subject 110 that one camera 140 can capture is not exactly the same as the regions of the subject 110 captured by the other cameras.
As shown in FIG. 1A, the electronic device 120 is implemented, for example, as an approximately circular device. It should be understood that the shape of the electronic device 120 shown in FIG. 1A is only schematic; the device may also be implemented in any other suitable shape, such as a square, a triangle, a rounded rectangle, or an ellipse. In some implementations, the multiple cameras 140 may be embedded in the housing of the electronic device 120. It should be understood that the cameras 140 may also be integrated into the electronic device 120 in any other suitable form; the present disclosure is not intended to limit how the cameras 140 are integrated.
In some example implementations, the electronic device 120 may include fewer cameras, for example only cameras arranged on the left and right sides of the electronic device 120. Alternatively, the electronic device 120 may include more cameras to capture images of the user 110 more comprehensively.
In some example implementations, at least two of the multiple cameras 140 may be arranged symmetrically on opposite sides of the electronic device 120. For example, in the example of FIG. 1B, camera 140-1 may be arranged at the far left of the electronic device 120, and camera 140-3 symmetrically at the far right. In addition, to capture images of the subject 110 more comprehensively, camera 140-2 may be arranged at equal distances from camera 140-1 and camera 140-3; for example, camera 140-2 may be arranged at the very top of the electronic device 120, or at the very bottom.
In some implementations, the electronic device 120 may further include the presentation device 130. The presentation device 130 may, for example, be implemented as an electronic display of a suitable form and used to present a graphical interface 150. In the graphical interface 150, for example, the electronic device 120 may visually provide the appearance evaluation 160, such as "Skin score: 80". Additionally, the graphical interface 150 may present a visual image of the subject 110. Various presentation forms of the graphical interface 150 are discussed in detail below.
In other implementations, as shown in FIG. 1B, the presentation device 130 may include a mirror region 170 and an electronic display region 180. The electronic display region 180 may be integrated into the presentation device 130 in a suitable manner. In some implementations, the electronic display region 180 may be designed separately from the mirror region 170, so that the mirror region 170 presents the image of the subject 110 through specular reflection while the electronic display region 180 simultaneously displays corresponding digital content, for example the appearance evaluation 160. Alternatively, the electronic display region 180 may be arranged behind the mirror region 170, so that it presents digital content only when powered on for display. In this case, the part of the mirror region 170 corresponding to the electronic display region 180 produces no or only weak specular reflection, so that the electronic display region 180 can present the digital content more clearly. When the electronic display region 180 is not powered, the presentation device 130 appears as a complete mirror region 170.
With this arrangement, the mirror 170 can display the subject 110 in real time through specular reflection, and the electronic display region 180 can present the appearance evaluation 160 of the subject 110. In this way, the size of the electronic display region 180 can be reduced, lowering the cost and power consumption of the electronic device 120.
In some implementations, the appearance evaluation 160 provided by the electronic device 120 may be an evaluation of the subject's skin, including but not limited to: an overall skin evaluation, a pigmentation-spot evaluation, a pore evaluation, a wrinkle evaluation, a red-zone evaluation, an acne evaluation, a dark-circle evaluation, a blackhead evaluation, and the like. Alternatively, the appearance evaluation may be an appearance score of the subject. For example, the appearance evaluation may be a facial-attractiveness score for the subject's face, or may include attractiveness scores for the subject's individual facial features.
It should be understood that the appearance evaluation 160 may be provided in any other suitable way. For example, the electronic device 120 may play the appearance evaluation 160 by voice, or may send the appearance evaluation 160 to the subject 110 by email, text message, or other communication means. The process of generating the appearance evaluation 160 is discussed in detail below.
In some implementations, the subject 110 is a user who uses the electronic device 120 for appearance evaluation. When the user faces the electronic device 120, the electronic device 120 can use the multiple cameras 140 to capture photographs of specific regions of the user. For example, when the appearance evaluation concerns facial skin, the electronic device 120 may use the cameras 140 to capture photographs of the user's face from different angles and generate the appearance evaluation 160 by performing corresponding analysis on these photographs.
In other implementations, the subject 110 may also be another suitable living being, for example a pet such as a cat or a dog. When the pet is placed in front of the electronic device 120, the electronic device 120 can capture photographs of the pet from different angles and provide an appearance evaluation of the pet. It should be understood that providing an appearance evaluation of a pet is similar to providing one of a user; for ease of description, the solution of the present disclosure is described below using a user as an example.
It should be understood that the specific number of cameras, their installation positions, the shape of the electronic device, and other arrangements shown in FIG. 1A and FIG. 1B are schematic and are not intended to limit the present disclosure.
Variants of the Electronic Device
The environment 100 in which embodiments of the present disclosure can be implemented has been described above with reference to FIG. 1A and FIG. 1B, together with an exemplary electronic device 120. The electronic device 120 in FIG. 1A and FIG. 1B is implemented in the form of a mirror; other variants of the electronic device 120 are introduced below with reference to FIG. 2, FIG. 3A to FIG. 3C, and FIG. 4A to FIG. 4D.
FIG. 2 shows an example electronic device 200 according to other embodiments of the present disclosure. As shown in FIG. 2, the electronic device 200 includes a physically separate presentation device 210 and image capture device 220, where the image capture device 220 may include multiple cameras 230-1, 230-2, and 230-3 (individually or collectively referred to as cameras 230). In some implementations, the image capture device 220 is physically separate from the presentation device 210 and has a wired or wireless connection 240 to it. During use, the electronic device 200 may obtain images of the subject through the image capture device 220 and may, for example, provide an appearance evaluation 250 through the presentation device 210.
In some implementations, a processing unit for controlling image capture and providing the appearance evaluation may be disposed in either the presentation device 210 or the image capture device 220. For example, the processing unit may be disposed in the presentation device 210 (for example, a terminal device with processing capability), and the electronic device 200 may use this processing unit to send the image capture device 220 an instruction to capture images with the cameras 230. The image capture device 220 may then use its communication component to provide the captured images to the presentation device 210; the processing unit analyzes the captured images to determine the appearance evaluation, which is provided via the presentation device 210. Alternatively, the electronic device 200 may, for example, send the captured images to a remote computing device (for example, a server) to obtain the appearance evaluation.
Alternatively, the processing unit may be disposed in the image capture device 220. The electronic device 200 may use this processing unit to send the cameras 230 an instruction to capture images, analyze the captured images with the processing unit to determine the appearance evaluation, and then provide the determined appearance evaluation to the presentation device 210 through the communication component for presentation to the user.
In some implementations, the image capture device 220 may be arranged as a set of cameras 230 placed at specific positions. For example, the set of cameras 230 may be integrated in the same housing to form an integral image capture device 220. Alternatively, the image capture device 220 may be a collective term for a set of separate cameras 230, and this set of separate cameras 230 may communicate with the presentation device 210 and/or the processing unit individually or collectively.
It should be understood that the number and arrangement of the cameras 230 shown in FIG. 2 are only schematic. The cameras 230 may be arranged similarly to the cameras 140 discussed above with reference to FIG. 1, which is not repeated here. By providing the image capture device 220 as an independent device separate from the presentation device 210, the solution of the present disclosure can, for example, build the image capture device 220 as an accessory usable with a mobile terminal, making full use of the computing and presentation capabilities of existing mobile terminals.
FIG. 3A to FIG. 3C show example electronic devices according to other embodiments of the present disclosure.
FIG. 3A shows an example electronic device 300A according to other embodiments of the present disclosure. As shown in FIG. 3A, the electronic device 300A may be implemented as a mobile device 310 having multiple front-facing cameras 305-1, 305-2, and 305-3 (individually or collectively referred to as front-facing cameras 305). As shown in FIG. 3A, these front-facing cameras 305 are arranged with at least a predetermined distance between them, so that different front-facing cameras 305 can capture images of different regions of the subject. Further, the electronic device 310 may perform corresponding analysis using the captured images to generate the subject's appearance evaluation, and may present the appearance evaluation on its display screen.
During use, the user may, for example, hold the mobile device 310 and run an application installed on it. The application can cause the mobile device 310 to issue an instruction to capture images of the user with the multiple front-facing cameras 305 and perform corresponding image analysis on the captured images to generate the appearance evaluation. The application may then present the generated appearance evaluation through a graphical interface displayed on the display screen of the mobile device 310. In this way, the solution of the present disclosure can execute the appearance evaluation method of the present disclosure on existing mobile devices whose front-facing cameras satisfy the required distribution.
FIG. 3B shows an example electronic device 300B according to other embodiments of the present disclosure. As shown in FIG. 3B, the electronic device 300B includes a mobile device 320 having a front-facing camera 315-3, and cameras 315-1 and 315-2 attached to the mobile device 320. It should be understood that "attached" means that a camera independent of the mobile device is fixed to it in a suitable manner (for example, detachably or non-detachably).
As shown in FIG. 3B, cameras 315-1 and 315-2 may be arranged symmetrically on the two sides of the mobile device 320 to capture images of the subject more comprehensively. In addition, cameras 315-1 and 315-2 may be communicatively coupled to the mobile device 320 through a wired or wireless connection, so that the mobile device 320 can obtain the images captured by cameras 315-1 and 315-2. Further, the electronic device 320 may perform corresponding analysis using the images captured by front-facing camera 315-3 and the communicatively coupled cameras 315-1 and 315-2 to generate the subject's appearance evaluation, and may present the appearance evaluation on the display screen.
During use, the user may fix cameras 315-1 and 315-2, as independent accessories, to the mobile device 320 by, for example, a clip-on mechanism, and establish a USB or Bluetooth connection between cameras 315-1 and 315-2 and the mobile device 320. Further, the user may, for example, hold the mobile device 320 and run an application installed on it; the application can detect the front-facing camera 315-3 included in the mobile device 320 and the cameras 315-1 and 315-2 communicatively coupled to the mobile device 320. Further, the application can cause the mobile device 320 to issue an instruction to capture images of the user with cameras 315-1, 315-2, and 315-3, and perform corresponding image analysis on the captured images to generate the appearance evaluation. The application may then present the generated appearance evaluation through a graphical interface displayed on the display screen of the mobile device 320.
Considering that many current mobile devices include a centrally positioned front-facing camera, providing the additional cameras as accessories improves the portability of the device and allows users to obtain an appearance evaluation quickly and conveniently.
FIG. 3C shows an example electronic device 300C according to other embodiments of the present disclosure. As shown in FIG. 3C, the electronic device 300C includes a mobile device 330 and cameras 325-1, 325-2, and 325-3 (individually or collectively referred to as cameras 325) attached to the mobile device 330.
As shown in FIG. 3C, cameras 325-1 and 325-2 may be arranged symmetrically on the left and right sides of the mobile device 330, and camera 325-3 may be arranged at the top of the mobile device 330. Such an arrangement enables the cameras 325 to capture images of the subject more comprehensively. In addition, the cameras 325 may be communicatively coupled to the mobile device 330 through a wired or wireless connection, so that the mobile device 330 can obtain the images captured by the cameras 325. Further, the mobile device 330 may perform corresponding analysis using the images captured by the communicatively coupled cameras 325 to generate the subject's appearance evaluation, and may present the appearance evaluation on its display screen.
During use, the user may fix cameras 325-1, 325-2, and 325-3, as independent accessories, to the mobile device 330 by, for example, a clip-on mechanism, and establish USB or Bluetooth connections between the multiple cameras 325 and the mobile device 330. Further, the user may, for example, hold the mobile device 330 and run an application installed on it; the application can detect the multiple cameras 325 communicatively coupled to the mobile device 330. Further, the application can cause the mobile device 330 to issue an instruction to capture images of the user with the cameras 325, and perform corresponding image analysis on the captured images to generate the appearance evaluation. The application may then present the generated appearance evaluation through a graphical interface displayed on the display screen of the mobile device 330.
Because some mobile devices have no front-facing camera, or have a front-facing camera in a less than ideal position, providing multiple cameras as independent accessories can further improve the solution's compatibility with mobile devices.
FIG. 4A to FIG. 4D show example electronic devices according to still other embodiments of the present disclosure.
FIG. 4A shows an example electronic device 410 according to other embodiments of the present disclosure. As shown in FIG. 4A, the electronic device 410 may be implemented as a mobile device having multiple rear-facing cameras 415-1, 415-2, and 415-3 (individually or collectively referred to as rear-facing cameras 415). As shown in FIG. 4A, these rear-facing cameras 415 are arranged with at least a predetermined distance between them, so that different rear-facing cameras 415 can capture images of different regions of the subject. Further, the electronic device 410 may perform corresponding analysis using the captured images to generate the subject's appearance evaluation. After the evaluation is completed, as shown in FIG. 4B, the electronic device 410 may present the appearance evaluation on the display screen 425 on the front of the mobile device.
FIG. 4C shows an example electronic device 430 according to other embodiments of the present disclosure. As shown in FIG. 4C, the electronic device 430 includes a mobile device having a rear-facing camera 435, and cameras 440-1 and 440-2 attached to the mobile device.
As shown in FIG. 4C, cameras 440-1 and 440-2 may be arranged symmetrically on the two sides of the mobile device to capture images of the subject more comprehensively. In addition, cameras 440-1 and 440-2 may be communicatively coupled to the mobile device through a wired or wireless connection, so that the mobile device can obtain the images captured by cameras 440-1 and 440-2. Further, the electronic device 430 may perform corresponding analysis using the images captured by the rear-facing camera 435 and the communicatively coupled cameras 440-1 and 440-2 to generate the subject's appearance evaluation, and may present the appearance evaluation on the display screen on the front of the mobile device (not shown in the figure).
FIG. 4D shows an example electronic device 450 according to other embodiments of the present disclosure. As shown in FIG. 4D, the electronic device 450 includes a mobile device and cameras 455-1, 455-2, and 455-3 (individually or collectively referred to as cameras 455) attached to the mobile device.
As shown in FIG. 4D, cameras 455-1 and 455-2 may be arranged symmetrically on the left and right sides of the mobile device, and camera 455-3 may be arranged at the top of the mobile device. Such an arrangement enables the cameras 455 to capture images of the subject more comprehensively. In addition, the cameras 455 may be communicatively coupled to the mobile device through a wired or wireless connection, so that the mobile device can obtain the images captured by the cameras 455. Further, the mobile device may perform corresponding analysis using the images captured by the communicatively coupled cameras 455 to generate the subject's appearance evaluation, and the electronic device 450 may present the appearance evaluation on the display screen on the front of the mobile device.
Considering that, in the current market, rear-facing cameras can obtain images with higher resolution than front-facing cameras, using rear-facing cameras can further improve the accuracy of the appearance evaluation. During use, the user to be evaluated may hold the mobile device themselves. In addition, because the display screen is on the opposite side from the cameras, this arrangement also allows another user to conveniently hold the mobile device to capture images of the user to be evaluated.
Several variants of the electronic device of the present disclosure have been introduced above. It should be understood that other suitable electronic devices may also be used without departing from the spirit of the present disclosure.
Generation of the Appearance Evaluation
The process of generating a subject's appearance evaluation is described below with reference to FIG. 5 and FIG. 6A to FIG. 6F. The following process may be implemented by any of the electronic devices described with reference to FIG. 1 to FIG. 4. Merely for ease of description, the environment 100 shown in FIG. 1 is used as an example to describe the specific process of generating the appearance evaluation.
In some implementations, before controlling the cameras 140 to capture the corresponding images, the electronic device 120 may determine whether the position or posture of the subject 110 meets an image capture condition of the cameras 140. For example, the image capture condition may be a range of angles or a range of distances of the subject 110 relative to a camera 140.
The electronic device 120 may determine whether the angle or distance of the subject 110 relative to each camera 140 meets the preset image capture condition, and when the image capture condition is not met, provide the subject 110 with a prompt to adjust position or posture. For example, after image capture starts, if it is determined that the subject 110 is too far from the cameras 140, the electronic device 120 may remind the subject 110 by voice to adjust position, for example, "Please come closer". Or, when the face of the subject 110 is, for example, turned too far toward the camera 140-1 on one side, the electronic device 120 may use a corresponding visual effect on the presentation device 130 to remind the subject 110 to adjust posture. For example, the electronic device 120 may present a virtual character in the display region of the presentation device 130 and use an animation of the virtual character to remind the subject 110 in which direction the face should be turned. Additionally, the electronic device 120 may, for example, provide a voice reminder while presenting the animation, so that the subject 110 understands more clearly how to adjust posture or position.
In some implementations, if it is determined that the position or posture of the subject 110 meets the image capture condition of the cameras 140, the electronic device 120 may also remind the subject 110 to hold the position or posture. For example, the electronic device 120 may remind the subject by voice to hold the position or posture for image capture. Alternatively or additionally, when the image capture condition is met, the electronic device 120 may set a timer to start capturing images after a predetermined time. For example, the electronic device 120 may remind the user by voice to hold the posture and inform the subject 110 that image capture will start in 3 seconds. (A minimal sketch of such a condition check appears below.)
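The sketch below illustrates one way such a capture-condition check could look. The thresholds and the sign convention for the head-rotation angle are hypothetical; the disclosure leaves the concrete values of the angle and distance ranges open:

```python
# Hypothetical thresholds; the disclosure does not fix concrete values.
MIN_DIST_CM, MAX_DIST_CM = 25.0, 45.0   # acceptable subject-to-camera distance
MAX_YAW_DEG = 15.0                      # acceptable head rotation to either side

def capture_prompt(distance_cm, yaw_deg):
    """Return a prompt for the subject, or None if capture may start."""
    if distance_cm > MAX_DIST_CM:
        return "Please come closer"
    if distance_cm < MIN_DIST_CM:
        return "Please move back a little"
    if abs(yaw_deg) > MAX_YAW_DEG:
        # Assumed convention: positive yaw means the face is turned left,
        # so the subject is asked to turn back to the right.
        side = "right" if yaw_deg > 0 else "left"
        return "Please turn your head to the " + side
    return None  # condition met: remind the subject to hold still, start a 3 s timer
```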
In some implementations, to improve the quality of the captured images, before controlling the cameras 140 to capture the corresponding images, the electronic device 120 may also adjust the focal length of one or more cameras 140 according to a characteristic of the subject 110. Taking the capture of a face image of the subject 110 as an example, since each person's face may have a different curvature, the focal length at which a camera can obtain a high-quality captured image also differs. The electronic device 120 may, for example, dynamically adjust the focal length of the cameras 140 according to the curvature of the face of the subject 110, so that the captured image is sharper. It should be understood that any suitable dynamic focusing technique in the art may be used to adjust the focal length; the details are not described here.
In addition, again taking the capture of a face image of the subject 110 as an example, since a human face has a certain curvature, with fixed camera positions, different shooting angles also lead to differences in the quality of the captured images. To obtain higher-quality images, in some implementations, the orientation of one or more of the cameras 140 may be arranged differently.
For example, FIG. 5 is a schematic diagram 500 of a camera arrangement according to some embodiments of the present disclosure. As shown in FIG. 5, camera 140-1 is set at an angle 510 to the surface of the electronic device 120 so as to better capture the right-face image of the subject 110, and camera 140-3 is set at an angle 520 to the surface of the electronic device 120 so as to better capture the left-face image of the subject 110. In some implementations, when cameras 140-1 and 140-3 are arranged symmetrically relative to the electronic device 120, angle 510 may be the same as angle 520. For example, angles 510 and 520 may be any angle within the range of 10° to 30°.
In some implementations, considering that face curvature may differ significantly among subjects, a preset angle may be unsuitable for certain subjects. In some implementations, the cameras 140 may also dynamically adjust their shooting angle relative to the electronic device 120. For example, the electronic device 120 may adjust the shooting angle of one or more cameras 140 according to a characteristic of the subject 110. Continuing the example of FIG. 5, the electronic device 120 may, for example, adjust shooting angle 510 of camera 140-1 according to the curvature of the right face of the subject 110 and adjust shooting angle 520 of camera 140-3 according to the curvature of the left face, so that the captured images are sharper. It should be understood that any suitable drive structure may be used to adjust the camera angle; the present disclosure is not intended to limit this. (A hypothetical mapping from curvature to angle is sketched below.)
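The disclosure does not specify how a measured face curvature maps to a shooting angle. The sketch below is one hypothetical choice: it assumes a normalized curvature estimate in [0, 1] and simply interpolates within the 10° to 30° example range mentioned above:

```python
def shooting_angle(curvature, min_deg=10.0, max_deg=30.0):
    """Map a hypothetical normalized cheek-curvature estimate to a camera angle.

    A flatter cheek (curvature near 0) keeps the camera closer to the device
    surface; a strongly curved cheek (near 1) tilts it further, staying within
    the 10-30 degree example range given above.
    """
    curvature = max(0.0, min(1.0, curvature))  # clamp to the assumed range
    return min_deg + curvature * (max_deg - min_deg)
```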
In some implementations, when it is determined that the subject's position or posture meets the image capture condition, and after the electronic device has adjusted the camera orientations according to the subject's characteristics, the electronic device may issue an instruction causing the cameras to capture images of the subject. FIG. 6A, FIG. 6B, and FIG. 6C respectively show different images of the subject captured by different cameras. Image 610 (for ease of description, the first image) may be a left-cheek image of the subject 110 captured by camera 140-1; image 630 (for ease of description, the second image) may be a right-cheek image captured by camera 140-3; and image 620 (for ease of description, the third image) may be a frontal-face image captured by camera 140-2.
After obtaining the multiple images captured by different cameras, the electronic device 120 may perform appearance analysis using the obtained images. Specifically, the electronic device 120 may first determine the corresponding regions of interest in the multiple images according to the type of appearance evaluation to be provided. In some implementations, the electronic device 120 may pre-establish an association among the type of appearance evaluation, the camera position, and the corresponding region of interest. For example, the electronic device 120 may pre-store a mapping among "facial pore condition evaluation", "camera 140-1", and the corresponding region of interest.
In some implementations, a region of interest may be represented as a region enclosed by multiple feature points. The electronic device 120 may indicate the corresponding region of interest by storing descriptions of these feature points. Such feature-point descriptions enable the electronic device 120 to identify, in the corresponding image, the image positions corresponding to the feature points.
Taking FIG. 6A as an example, when the appearance evaluation to be provided concerns, for example, the "facial pore condition", the electronic device 120 may determine, from the pre-stored mapping, the descriptions of the multiple feature points corresponding to "facial pore condition" and camera 140-1, and use a feature-point recognition technique to detect the corresponding feature points 605 in the captured image. The electronic device 120 may then determine, from the detected feature points 605, the corresponding region of interest 615 in the first image 610 (for ease of description, the first region of interest). In a similar way, the electronic device 120 may determine the corresponding region of interest 635 in the second image 630 (for ease of description, the second region of interest) and the corresponding region of interest 625 in the third image 620 (for ease of description, the third region of interest). In the example of FIG. 6A, the first region of interest 615, the second region of interest 635, and the third region of interest 625 respectively represent a set of appearance features (in this example, pore features) corresponding to different parts of the subject. (A minimal ROI-extraction sketch appears below.)
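As a minimal sketch of this feature-point-based ROI extraction, assuming OpenCV/NumPy and an externally provided landmark detector (the disclosure does not mandate a particular detector; the `landmarks` mapping and `roi_point_ids` list are assumed interfaces):

```python
import cv2
import numpy as np

def extract_roi(image, landmarks, roi_point_ids):
    """Cut out the region enclosed by a pre-stored set of feature points.

    `landmarks` maps feature-point ids to (x, y) pixel coordinates, e.g. from
    any face-landmark detector; `roi_point_ids` is the pre-stored list of
    point ids associated with one appearance feature and one camera.
    """
    polygon = np.array([landmarks[i] for i in roi_point_ids], dtype=np.int32)
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [polygon], 255)             # region enclosed by the points
    roi = cv2.bitwise_and(image, image, mask=mask) # keep only pixels inside the ROI
    x, y, w, h = cv2.boundingRect(polygon)         # crop to the ROI bounding box
    return roi[y:y + h, x:x + w]
```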
The appearance features discussed herein may include, but are not limited to: pigmentation-spot features, wrinkle features, red-zone features, acne features, dark-circle features, blackhead features, and the like. When the analyzed appearance feature differs, the corresponding regions of interest in the different images also change accordingly. For example, the region of interest corresponding to blackheads mainly includes the subject's nose.
The electronic device 120 may then determine the final appearance evaluation based on the first region of interest 615, the second region of interest 635, and the third region of interest 625. For example, the electronic device 120 may perform corresponding image analysis on each of the first region of interest 615, the second region of interest 635, and the third region of interest 625 to obtain regional appearance evaluation results corresponding to the different regions of interest. The electronic device 120 may then provide an overall appearance evaluation by fusing the multiple regional appearance evaluations.
In some implementations, because the images captured by different cameras may overlap, when fusing the multiple regional appearance evaluations, the computing device 120 also needs to consider whether the multiple regions of interest overlap, to avoid double counting that would make the result inaccurate.
For example, the correspondence between different regions of interest may be pre-stored in the electronic device 120. For example, as shown in FIG. 6D, FIG. 6E, and FIG. 6F, a correspondence may be pre-stored to indicate that: in the first region of interest 615, region 640-1 is a non-overlapping region and region 640-2 overlaps region 645-1 of the third region of interest 625; in the second region of interest 635, region 650-2 is a non-overlapping region and region 650-1 overlaps region 645-3 of the third region of interest 625; and region 645-2 of the third region of interest 625 is a non-overlapping region. It should be understood that any suitable way of storing such region correspondences may be used; for example, the correspondence may be maintained by storing the feature points of an overlapping region in the different regions of interest.
Taking "facial pore condition" as an example, the electronic device 120 may use different strategies for overlapping and non-overlapping regions. For example, when the facial pore count is used to reflect the "facial pore condition", the electronic device 120 may first determine, through image processing, the numbers of pores in regions 640-1, 640-2, 650-1, 650-2, 645-1, 645-2, and 645-3. Then, when fusing the evaluations of the multiple regions of interest, the electronic device 120 may, for example, determine the average of the pore count of region 640-2 determined from the first image 610 and the pore count of region 645-1 determined from the third image 620 (for ease of description, the first average), determine the average of the pore count of region 650-1 determined from the second image 630 and the pore count of region 645-3 determined from the third image 620 (for ease of description, the second average), and determine the total pore count of the subject 110 as the sum of the pore count determined from region 640-1, the first average, the pore count determined from region 645-2, the second average, and the pore count determined from region 650-2. For example, the total pore count can be expressed as:
Total pore count = R1 + L1 + F1 + (R0 + FR)/2 + (L0 + FL)/2   (1)
where R1 is the pore count in region 640-1, L1 is the pore count in region 650-2, F1 is the pore count in region 645-2, R0 is the pore count in region 640-2, FR is the pore count in region 645-1, L0 is the pore count in region 650-1, and FL is the pore count in region 645-3. In this way, the electronic device 120 can accurately determine the pore count of the whole face of the subject 110, improving the accuracy of the appearance analysis.
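Equation (1) can be written directly as a small helper; only the parameter names are hypothetical, the arithmetic follows the equation:

```python
def total_pore_count(r1, l1, f1, r0, fr, l0, fl):
    """Fuse per-region pore counts per equation (1).

    r1, l1, f1: counts in the non-overlapping regions 640-1, 650-2, 645-2.
    (r0, fr) and (l0, fl): counts for the two overlapping regions, each
    measured in two different images and averaged to avoid double counting.
    """
    return r1 + l1 + f1 + (r0 + fr) / 2.0 + (l0 + fl) / 2.0
```

For instance, `total_pore_count(120, 130, 210, 40, 44, 38, 36)` averages the two measurements of each overlapping region (42 and 37 here) before summing them with the non-overlapping counts.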
It should be understood that the "facial pore count" may be one item of data used in determining the "facial pore condition" evaluation. When determining the "facial pore condition" evaluation, the electronic device 120 may also use similar methods to determine other data such as the "pore area ratio", "pore size", and "pore color depth", and determine the final "facial pore condition" based on these data.
The above describes the process of determining an appearance evaluation in which the regions of interest overlap. In some examples, determining certain appearance evaluations may not involve overlapping regions. For example, when determining a user's crow's-feet evaluation, the final crow's-feet evaluation may be determined directly from the evaluation results corresponding to the left-eye image and the right-eye image captured by different cameras.
Examples of skin evaluations to which the present disclosure applies may further include: pigmentation-spot evaluations, wrinkle evaluations, red-zone evaluations, acne evaluations, dark-circle evaluations, blackhead evaluations, other skin evaluations that can be determined by image analysis, or any combination of the above. It should be understood that these evaluations can be determined in the manner discussed above and are not detailed here.
In some implementations, the electronic device 120 may also use the manner discussed above to determine the appearance scores (for example, attractiveness scores) corresponding to different regions, and use the overlap-based fusion method to determine the final appearance score, which is not detailed here.
It should be understood that although the process of generating an appearance evaluation is described above using images captured by three cameras as an example, the present disclosure can analyze any number of images (for example, 2 images or more than 3 images) in a similar way and fuse the appearance evaluations determined from each image to determine the subject's overall appearance evaluation.
Provision of the Appearance Evaluation
As discussed with reference to FIG. 1, the electronic device 120 may use the presentation device 130 to provide the appearance evaluation 160. In some implementations, the electronic device 120 may use the presentation device 130 to present the appearance evaluation 160 through the graphical interface 150.
Alternatively, the electronic device 120 may provide the appearance evaluation through other media. For example, the electronic device 120 may send the appearance evaluation to the mailbox of the subject 110 by email, text message, or other communication means.
In some implementations, to let the subject 110 view the appearance evaluation 160 more intuitively, the electronic device 120 may also generate a three-dimensional model of the subject 110 from the multiple images captured by the cameras and use the presentation device 130 to display the three-dimensional model and the appearance evaluation simultaneously. FIG. 7 shows an example graphical interface 700 according to some embodiments of the present disclosure. As shown in FIG. 7, the graphical interface 700 includes a three-dimensional model 710 and multiple appearance evaluations 720-1, 720-2, and 730. It should be understood that the three-dimensional model 710 shown in FIG. 7 is only schematic; the presented three-dimensional model 710 can accurately present skin images of different regions of the subject.
In some implementations, the user may also control the display of the graphical interface 700 by performing specific operations. For example, the user may change the presentation angle of the three-dimensional model 710 by swiping or dragging on a touchscreen, making it convenient to view the skin condition of different regions.
In some implementations, in addition to displaying the overall skin appearance evaluation 730 "Skin score: 80" in an area outside the three-dimensional model 710, to help the subject 110 more intuitively see the skin region corresponding to each evaluation, the electronic device 120 may also present the corresponding appearance evaluations at different positions on the three-dimensional model 710. For example, the electronic device 120 may present, at the outer corner of the eye of the three-dimensional model, the appearance evaluation 720-1 "Crow's feet score: 75" corresponding to the crow's-feet evaluation, and present, near the nasolabial folds, the appearance evaluation 720-2 "Nasolabial fold score: 85" corresponding to the nasolabial-fold evaluation. For example, the electronic device 120 may also map the appearance evaluation as a texture onto the corresponding position of the three-dimensional model 710, so that the subject 110 can view the appearance evaluation more intuitively.
Example Processes
FIG. 8 shows a flowchart of an example appearance analysis process 800 according to embodiments of the present disclosure. The process 800 may be implemented by any of the electronic devices described with reference to FIG. 1 to FIG. 4.
At block 802, the electronic device obtains a first image associated with a first region of a subject, the first image being captured by a first camera. At block 804, the electronic device obtains a second image associated with a second region of the subject, the second image being captured by a second camera, where the first region is different from the second region. At block 806, the electronic device provides an appearance evaluation of the subject, where the appearance evaluation is determined based on the first image and the second image.
FIG. 9 shows a flowchart of another example appearance analysis process 900 according to embodiments of the present disclosure. The process 900 may be implemented by any of the electronic devices described with reference to FIG. 1 to FIG. 4.
As shown in FIG. 9, at block 902, when the user's posture or position does not meet the image capture condition, the electronic device may prompt the user to adjust posture or position. At block 904, upon determining that the user's posture and/or position meets the image capture condition, the electronic device may send a photographing instruction to the multiple cameras. At blocks 906, 914, and 922, the first camera, the third camera, and the second camera respectively perform the photographing action. Correspondingly, at blocks 908, 916, and 924, the electronic device may obtain the left-face image, the frontal-face image, and the right-face image captured by the first camera, the third camera, and the second camera, respectively.
At blocks 910, 918, and 926, according to the type of skin detection to be performed, the electronic device may perform feature-point detection and ROI extraction on the left-face image, the frontal-face image, and the right-face image, respectively. At blocks 912, 920, and 928, the electronic device may perform skin detection on the extracted left-face ROI, frontal-face ROI, and right-face ROI, respectively, to obtain the left-face, frontal-face, and right-face skin detection results.
At block 930, the electronic device may fuse the left-face, frontal-face, and right-face skin detection results using the result fusion method described above to determine the user's final skin detection result. At block 934, the electronic device may present the skin detection result on a screen. At block 936, the electronic device may also output the skin evaluation by text message, email, a printed written report, and the like. (The whole flow is condensed in the sketch below.)
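A condensed sketch of process 900, reusing the hypothetical `extract_roi` helper sketched earlier; the `cameras` and `detectors` objects and the `fuse_results` callable are assumed interfaces, not part of the disclosure, and the prompting of block 902 and all error handling are omitted:

```python
def skin_analysis_pipeline(cameras, detectors, fuse_results):
    """Blocks 904-930 of process 900 in miniature.

    Each camera yields one image (blocks 906/914/922); each detector carries
    the pre-stored ROI point ids, a landmark detector, and a skin-detection
    routine for its view; fuse_results implements the fusion of block 930.
    """
    results = []
    for cam, det in zip(cameras, detectors):
        image = cam.capture()                                    # photographing
        landmarks = det.detect_landmarks(image)                  # feature points
        roi = extract_roi(image, landmarks, det.roi_point_ids)   # ROI extraction
        results.append(det.detect_skin(roi))                     # per-view result
    return fuse_results(results)                                 # block 930
```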
Example Appearance Analysis Apparatus
FIG. 10 shows an example appearance analysis apparatus 1000 according to an embodiment of the present disclosure. The example appearance analysis apparatus 1000 may be implemented as one or more software engines, hardware components, or a combination thereof, configured with logic for implementing the functions of the corresponding modules.
As shown in FIG. 10, the appearance analysis apparatus 1000 may include a first image obtaining unit 1010, a second image obtaining unit 1020, and an evaluation providing unit 1030. Specifically, the first image obtaining unit 1010 is configured to obtain a first image associated with a first region of a subject, where the first image is captured by a first camera. The second image obtaining unit 1020 is configured to obtain a second image associated with a second region of the subject, where the second image is captured by a second camera and the first region is different from the second region. The evaluation providing unit 1030 is configured to provide an appearance evaluation of the subject, where the appearance evaluation is determined based on the first image and the second image.
In some implementations, the appearance analysis apparatus 1000 further includes a subject prompting unit configured to: if the position or posture of the subject does not meet an image capture condition of the first camera or the second camera, cause the electronic device to prompt the subject to adjust the position or posture.
In some implementations, the appearance analysis apparatus 1000 further includes a camera adjusting unit configured to: cause the electronic device to adjust, based on a characteristic of the subject, a capture parameter of at least one of the first camera and the second camera, where the capture parameter includes at least one of a shooting angle and a focal length.
In some implementations, the first camera and the second camera are arranged symmetrically on opposite sides of an image capture device.
In some implementations, the electronic device further includes a third camera, and the appearance analysis apparatus 1000 further includes a third image obtaining unit configured to obtain a third image associated with a third region of the subject, where the third image is captured by the third camera, and the third camera is disposed on the image capture device at the same distance from the first camera and from the second camera.
In some implementations, the appearance analysis apparatus 1000 further includes an evaluation determining unit configured to: determine a first region of interest from the first image, the first region of interest representing a first set of appearance features of the subject; determine a second region of interest from the second image, the second region of interest representing a second set of appearance features of the subject; and determine, based on at least the first region of interest and the second region of interest, the appearance evaluation for the appearance features of the subject.
In some implementations, the evaluation determining unit is further configured to: if the first region of interest and the second region of interest include an overlapping region: determine, based on the first region of interest, a first appearance evaluation corresponding to the overlapping region; determine, based on the second region of interest, a second appearance evaluation corresponding to the overlapping region; and determine, based on the first appearance evaluation and the second appearance evaluation, the appearance evaluation of the subject for the appearance feature.
In some implementations, the evaluation providing unit 1030 is further configured to: present a three-dimensional model of the subject, where the three-dimensional model is generated based on at least the first image and the second image; and present corresponding parts of the appearance evaluation at different positions of the three-dimensional model.
In some implementations, the appearance evaluation includes at least one of a skin evaluation and an appearance score.
Example Device
FIG. 11 is a schematic structural diagram of an electronic device 1100 according to an embodiment of the present application. The electronic device 1100 may be any of the electronic devices discussed above with reference to FIG. 1 to FIG. 4.
The electronic device 1100 may include a processor 1110, an external memory interface 1120, an internal memory 1121, a universal serial bus (USB) interface 1130, a charging management module 1140, a power management module 1141, a battery 1142, an antenna 1, an antenna 2, a mobile communication module 1150, a wireless communication module 1160, an audio module 1170, a speaker 1170A, a receiver 1170B, a microphone 1170C, a headset jack 1170D, a sensor module 1180, a button 1190, a motor 1191, an indicator 1192, a camera 1193, a display screen 1194, a subscriber identification module (SIM) card interface 1195, and the like. The sensor module 1180 may include a pressure sensor 1180A, a gyroscope sensor 1180B, a barometric pressure sensor 1180C, a magnetic sensor 1180D, an acceleration sensor 1180E, a distance sensor 1180F, a proximity light sensor 1180G, a fingerprint sensor 1180H, a temperature sensor 1180J, a touch sensor 1180K, an ambient light sensor 1180L, a bone conduction sensor 1180M, and the like.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 1100. In other embodiments of the present application, the electronic device 1100 may include more or fewer components than shown, combine some components, split some components, or use a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 1110 may include one or more processing units. For example, the processor 1110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like. Different processing units may be independent components or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 1100. The controller may generate operation control signals according to instruction opcodes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 1110 for storing instructions and data. In some embodiments, the memory in the processor 1110 is a cache. This memory may store instructions or data that the processor 1110 has just used or uses cyclically. If the processor 1110 needs to use the instruction or data again, it can be called directly from this memory, which avoids repeated access, reduces the waiting time of the processor 1110, and thus improves system efficiency.
In some embodiments, the processor 1110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 1110 may contain multiple sets of I2C buses. The processor 1110 may be separately coupled, through different I2C bus interfaces, to the touch sensor 1180K, a charger, a flash, the camera 1193, and the like. For example, the processor 1110 may be coupled to the touch sensor 1180K through an I2C interface, so that the processor 1110 and the touch sensor 1180K communicate through the I2C bus interface to implement the touch function of the electronic device 1100.
The I2S interface may be used for audio communication. In some embodiments, the processor 1110 may contain multiple sets of I2S buses. The processor 1110 may be coupled to the audio module 1170 through an I2S bus to implement communication between the processor 1110 and the audio module 1170. In some embodiments, the audio module 1170 may transmit audio signals to the wireless communication module 1160 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 1170 and the wireless communication module 1160 may be coupled through a PCM bus interface. In some embodiments, the audio module 1170 may also transmit audio signals to the wireless communication module 1160 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel communication. In some embodiments, the UART interface is usually used to connect the processor 1110 and the wireless communication module 1160. For example, the processor 1110 communicates with the Bluetooth module in the wireless communication module 1160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 1170 may transmit audio signals to the wireless communication module 1160 through the UART interface to implement the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 1110 with peripheral components such as the display screen 1194 and the camera 1193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 1110 and the camera 1193 communicate through the CSI interface to implement the shooting function of the electronic device 1100, and the processor 1110 and the display screen 1194 communicate through the DSI interface to implement the display function of the electronic device 1100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 1110 with the camera 1193, the display screen 1194, the wireless communication module 1160, the audio module 1170, the sensor module 1180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like.
The USB interface 1130 is an interface conforming to the USB standard specification and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 1130 may be used to connect a charger to charge the electronic device 1100, or to transfer data between the electronic device 1100 and peripheral devices. It may also be used to connect a headset and play audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships among the modules illustrated in this embodiment of the present invention are only schematic and do not constitute a structural limitation on the electronic device 1100. In other embodiments of the present application, the electronic device 1100 may also use interface connection manners different from those in the above embodiments, or a combination of multiple interface connection manners.
The charging management module 1140 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 1140 may receive the charging input of a wired charger through the USB interface 1130. In some wireless charging embodiments, the charging management module 1140 may receive wireless charging input through a wireless charging coil of the electronic device 1100. While charging the battery 1142, the charging management module 1140 may also supply power to the electronic device through the power management module 1141.
The power management module 1141 is used to connect the battery 1142, the charging management module 1140, and the processor 1110. The power management module 1141 receives input from the battery 1142 and/or the charging management module 1140 and supplies power to the processor 1110, the internal memory 1121, the external memory, the display screen 1194, the camera 1193, the wireless communication module 1160, and the like. The power management module 1141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some other embodiments, the power management module 1141 may also be disposed in the processor 1110. In other embodiments, the power management module 1141 and the charging management module 1140 may also be disposed in the same component.
The wireless communication function of the electronic device 1100 may be implemented through antenna 1, antenna 2, the mobile communication module 1150, the wireless communication module 1160, the modem processor, the baseband processor, and the like.
Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 1100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve antenna utilization; for example, antenna 1 may be multiplexed as a diversity antenna for a wireless local area network. In other embodiments, the antennas may be used in combination with a tuning switch.
The mobile communication module 1150 may provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 1100. The mobile communication module 1150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module 1150 may receive electromagnetic waves via antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 1150 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves radiated via antenna 1. In some embodiments, at least some functional modules of the mobile communication module 1150 may be disposed in the processor 1110. In some embodiments, at least some functional modules of the mobile communication module 1150 may be disposed in the same component as at least some modules of the processor 1110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal and then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After processing by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through audio devices (not limited to the speaker 1170A, the receiver 1170B, and the like) or displays an image or video through the display screen 1194. In some embodiments, the modem processor may be an independent component. In other embodiments, the modem processor may be independent of the processor 1110 and disposed in the same component as the mobile communication module 1150 or other functional modules.
The wireless communication module 1160 may provide wireless communication solutions applied to the electronic device 1100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 1160 may be one or more components integrating at least one communication processing module. The wireless communication module 1160 receives electromagnetic waves via antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 1110. The wireless communication module 1160 may also receive signals to be sent from the processor 1110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated via antenna 2.
In some embodiments, antenna 1 of the electronic device 1100 is coupled to the mobile communication module 1150 and antenna 2 is coupled to the wireless communication module 1160, so that the electronic device 1100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 1100 implements the display function through the GPU, the display screen 1194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 1194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 1110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 1194 is used to display images, videos, and the like. The display screen 1194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light emitting diodes (QLED), or the like. In some embodiments, the electronic device 1100 may include 1 or N display screens 1194, where N is a positive integer greater than 1.
The electronic device 1100 may implement the shooting function through the ISP, the camera 1193, the video codec, the GPU, the display screen 1194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 1193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP may also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and may optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 1193.
The camera 1193 is used to capture still images or video. An object generates an optical image through the lens that is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 1100 may include 1 or N cameras 1193, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 1100 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.
The video codec is used to compress or decompress digital video. The electronic device 1100 may support one or more video codecs, so that the electronic device 1100 can play or record videos in multiple encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transmission mode between neurons in the human brain, it processes input information quickly and can also learn continuously. Applications such as intelligent cognition of the electronic device 1100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 1120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the electronic device 1100. The external memory card communicates with the processor 1110 through the external memory interface 1120 to implement the data storage function, for example saving files such as music and videos in the external memory card.
The internal memory 1121 may be used to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 1121, the processor 1110 executes the various functional applications and data processing of the electronic device 1100. The internal memory 1121 may include a program storage area and a data storage area. The program storage area may store the operating system and applications required for at least one function (such as a sound playback function and an image playback function). The data storage area may store data created during use of the electronic device 1100 (such as audio data and a phone book). In addition, the internal memory 1121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The electronic device 1100 may implement audio functions, such as music playback and recording, through the audio module 1170, the speaker 1170A, the receiver 1170B, the microphone 1170C, the headset jack 1170D, the application processor, and the like.
The audio module 1170 is used to convert digital audio information into an analog audio signal output and also to convert analog audio input into a digital audio signal. The audio module 1170 may also be used to encode and decode audio signals. In some embodiments, the audio module 1170 may be disposed in the processor 1110, or some functional modules of the audio module 1170 may be disposed in the processor 1110.
The speaker 1170A, also called the "horn", is used to convert audio electrical signals into sound signals. The electronic device 1100 can listen to music or hands-free calls through the speaker 1170A.
The receiver 1170B, also called the "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 1100 answers a call or a voice message, the voice can be heard by placing the receiver 1170B close to the ear.
The microphone 1170C, also called the "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 1170C to input the sound signal into the microphone 1170C. The electronic device 1100 may be provided with at least one microphone 1170C. In other embodiments, the electronic device 1100 may be provided with two microphones 1170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 1100 may be provided with three, four, or more microphones 1170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and the like.
The headset jack 1170D is used to connect a wired headset. The headset jack 1170D may be the USB interface 1130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 1180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 1180A may be disposed on the display screen 1194. There are many kinds of pressure sensors 1180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 1180A, the capacitance between the electrodes changes, and the electronic device 1100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 1194, the electronic device 1100 detects the strength of the touch operation through the pressure sensor 1180A; the electronic device 1100 may also compute the touch position from the detection signal of the pressure sensor 1180A. In some embodiments, touch operations acting on the same touch position but with different touch-operation strengths may correspond to different operation instructions. For example, when a touch operation with a strength below a first pressure threshold acts on the Messages application icon, an instruction to view the message is executed; when a touch operation with a strength greater than or equal to the first pressure threshold acts on the Messages application icon, an instruction to create a new message is executed.
The gyroscope sensor 1180B may be used to determine the motion posture of the electronic device 1100. In some embodiments, the angular velocities of the electronic device 1100 around three axes (namely, the x, y, and z axes) may be determined through the gyroscope sensor 1180B. The gyroscope sensor 1180B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 1180B detects the shaking angle of the electronic device 1100, calculates from the angle the distance the lens module needs to compensate, and lets the lens counteract the shaking of the electronic device 1100 through reverse motion to achieve stabilization. The gyroscope sensor 1180B may also be used in navigation and motion-sensing game scenarios.
The barometric pressure sensor 1180C is used to measure air pressure. In some embodiments, the electronic device 1100 calculates altitude from the air pressure value measured by the barometric pressure sensor 1180C to assist positioning and navigation.
The magnetic sensor 1180D includes a Hall sensor. The electronic device 1100 may use the magnetic sensor 1180D to detect the opening and closing of a flip cover case. In some embodiments, when the electronic device 1100 is a flip phone, the electronic device 1100 may detect the opening and closing of the flip cover based on the magnetic sensor 1180D, and then set features such as automatic unlocking on flip-open according to the detected opening/closing state of the case or the flip cover.
The acceleration sensor 1180E can detect the magnitude of the acceleration of the electronic device 1100 in various directions (generally three axes). When the electronic device 1100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of the electronic device and is applied in landscape/portrait switching, pedometers, and other applications.
The distance sensor 1180F is used to measure distance. The electronic device 1100 may measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 1100 may use the distance sensor 1180F to measure distance to achieve fast focusing.
The proximity light sensor 1180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 1100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 1100; when insufficient reflected light is detected, the electronic device 1100 can determine that there is no object near the electronic device 1100. The electronic device 1100 may use the proximity light sensor 1180G to detect that the user is holding the electronic device 1100 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 1180G may also be used in case mode and pocket mode for automatic unlocking and screen locking.
The ambient light sensor 1180L is used to sense ambient light brightness. The electronic device 1100 can adaptively adjust the brightness of the display screen 1194 according to the perceived ambient light brightness. The ambient light sensor 1180L may also be used to automatically adjust the white balance when taking photos. The ambient light sensor 1180L may also cooperate with the proximity light sensor 1180G to detect whether the electronic device 1100 is in a pocket to prevent accidental touches.
The fingerprint sensor 1180H is used to collect fingerprints. The electronic device 1100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photography, fingerprint call answering, and the like.
The temperature sensor 1180J is used to detect temperature. In some embodiments, the electronic device 1100 executes a temperature processing policy using the temperature detected by the temperature sensor 1180J. For example, when the temperature reported by the temperature sensor 1180J exceeds a threshold, the electronic device 1100 reduces the performance of a processor located near the temperature sensor 1180J, so as to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 1100 heats the battery 1142 to avoid abnormal shutdown of the electronic device 1100 caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 1100 boosts the output voltage of the battery 1142 to avoid abnormal shutdown caused by low temperature.
The touch sensor 1180K is also called a "touch panel". The touch sensor 1180K may be disposed on the display screen 1194, and the touch sensor 1180K and the display screen 1194 form a touchscreen, also called a "touch screen". The touch sensor 1180K is used to detect touch operations on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 1194. In other embodiments, the touch sensor 1180K may also be disposed on the surface of the electronic device 1100 at a position different from that of the display screen 1194.
The bone conduction sensor 1180M can acquire vibration signals. In some embodiments, the bone conduction sensor 1180M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 1180M may also contact the human pulse and receive blood pressure beat signals. In some embodiments, the bone conduction sensor 1180M may also be disposed in a headset, combined into a bone conduction headset. The audio module 1170 may parse out a voice signal based on the vibration signal of the vocal-part vibrating bone acquired by the bone conduction sensor 1180M, implementing a voice function. The application processor may parse heart-rate information based on the blood pressure beat signals acquired by the bone conduction sensor 1180M, implementing a heart-rate detection function.
The button 1190 includes a power button, a volume button, and the like. The button 1190 may be a mechanical button or a touch button. The electronic device 1100 can receive button input and generate key signal input related to user settings and function control of the electronic device 1100.
The motor 1191 can generate vibration prompts. The motor 1191 may be used for incoming-call vibration prompts and also for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects. Touch operations acting on different regions of the display screen 1194 may also correspond to different vibration feedback effects of the motor 1191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, games) may also correspond to different vibration feedback effects. The touch vibration feedback effects may also be customized.
The indicator 1192 may be an indicator light and may be used to indicate the charging state and battery changes, and also to indicate messages, missed calls, notifications, and the like.
The SIM card interface 1195 is used to connect a SIM card. A SIM card can be inserted into or pulled out of the SIM card interface 1195 to contact or separate from the electronic device 1100. The electronic device 1100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 1195 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 1195 at the same time; the types of the multiple cards may be the same or different. The SIM card interface 1195 may also be compatible with different types of SIM cards and may also be compatible with external memory cards. The electronic device 1100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 1100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 1100 and cannot be separated from the electronic device 1100.
The software system of the electronic device 1100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiments of the present invention take the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 1100.
FIG. 12 is a block diagram of the software structure of the electronic device 1100 according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 12, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 12, the application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and so on.
Content providers are used to store and retrieve data and make these data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phone book, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may consist of one or more views. For example, a display interface including a short-message notification icon may include a view for displaying text and a view for displaying pictures.
The telephony manager is used to provide the communication functions of the electronic device 1100, for example the management of call status (including connected, hung up, and so on).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages and can disappear automatically after a brief pause without user interaction; for example, the notification manager is used to notify of download completion, message reminders, and so on. The notification manager may also display notifications in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or display notifications on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is issued, the electronic device vibrates, or the indicator light flashes.
Android Runtime includes core libraries and a virtual machine. Android Runtime is responsible for the scheduling and management of the Android system.
The core libraries consist of two parts: one part is the utility functions that the Java language needs to call, and the other is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include multiple functional modules, for example: a surface manager, media libraries (Media Libraries), a 3D graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.

Claims (24)

  1. An appearance analysis method, applied to an electronic device, the electronic device comprising a first camera and a second camera, wherein the method comprises:
    obtaining a first image associated with a first region of a subject, the first image being captured by the first camera;
    obtaining a second image associated with a second region of the subject, the second image being captured by the second camera, wherein the first region is different from the second region; and
    providing, by the electronic device, an appearance evaluation of the subject, wherein the appearance evaluation is determined based on the first image and the second image.
  2. The method according to claim 1, wherein the method further comprises:
    if a position or posture of the subject does not meet an image capture condition of the first camera or the second camera, prompting, by the electronic device, the subject to adjust the position or posture.
  3. The method according to claim 1, wherein the method further comprises:
    adjusting, by the electronic device based on a characteristic of the subject, a capture parameter of at least one of the first camera and the second camera, the capture parameter comprising at least one of a shooting angle and a focal length.
  4. The method according to claim 1, wherein the first camera and the second camera are arranged symmetrically on opposite sides of an image capture device.
  5. The method according to claim 4, wherein the electronic device further comprises a third camera, and the method further comprises:
    obtaining a third image associated with a third region of the subject, the third image being captured by the third camera, wherein the third camera is disposed on the image capture device at the same distance from the first camera and from the second camera.
  6. The method according to claim 1, wherein the method further comprises determining the appearance evaluation of the subject by:
    determining a first region of interest from the first image, the first region of interest representing a first set of appearance features of the subject;
    determining a second region of interest from the second image, the second region of interest representing a second set of appearance features of the subject; and
    determining, based on at least the first region of interest and the second region of interest, the appearance evaluation for the appearance features of the subject.
  7. The method according to claim 6, wherein determining the appearance evaluation for the appearance features of the subject based on at least the first region of interest and the second region of interest comprises:
    if the first region of interest and the second region of interest comprise an overlapping region:
    determining, based on the first region of interest, a first appearance evaluation corresponding to the overlapping region;
    determining, based on the second region of interest, a second appearance evaluation corresponding to the overlapping region; and
    determining, based on the first appearance evaluation and the second appearance evaluation, the appearance evaluation of the subject for the appearance feature.
  8. The method according to claim 1, wherein providing the appearance evaluation of the subject comprises:
    presenting a three-dimensional model of the subject, wherein the three-dimensional model is generated based on at least the first image and the second image; and
    presenting corresponding content of the appearance evaluation at different positions of the three-dimensional model.
  9. The method according to any one of claims 1-8, wherein the appearance evaluation comprises at least one of a skin evaluation and an appearance score.
  10. A terminal device, comprising:
    at least one computing unit; and
    at least one memory, the at least one memory being coupled to the at least one computing unit and storing instructions for execution by the at least one computing unit, the instructions, when executed by the at least one computing unit, causing the terminal device to perform actions comprising:
    obtaining a first image associated with a first region of a subject, the first image being captured by a first camera;
    obtaining a second image associated with a second region of the subject, the second image being captured by a second camera, wherein the first region is different from the second region; and
    providing an appearance evaluation of the subject, wherein the appearance evaluation is determined based on the first image and the second image.
  11. The terminal device according to claim 10, wherein the actions further comprise:
    if a position or posture of the subject does not meet an image capture condition of the first camera or the second camera, prompting, by the terminal device, the subject to adjust the position or posture.
  12. The terminal device according to claim 10, wherein the actions further comprise:
    adjusting, by the terminal device based on a characteristic of the subject, a capture parameter of at least one of the first camera and the second camera, the capture parameter comprising at least one of a shooting angle and a focal length.
  13. The terminal device according to claim 10, wherein the actions further comprise:
    obtaining a third image associated with a third region of the subject, the third image being captured by a third camera, wherein the third camera is disposed on an image capture device at the same distance from the first camera and from the second camera.
  14. The terminal device according to claim 10, wherein the actions further comprise determining the appearance evaluation of the subject by:
    determining a first region of interest from the first image, the first region of interest representing a first set of appearance features of the subject;
    determining a second region of interest from the second image, the second region of interest representing a second set of appearance features of the subject; and
    determining, based on at least the first region of interest and the second region of interest, the appearance evaluation for the appearance features of the subject.
  15. The terminal device according to claim 14, wherein determining the appearance evaluation for the appearance features of the subject based on at least the first region of interest and the second region of interest comprises:
    if the first region of interest and the second region of interest comprise an overlapping region:
    determining, based on the first region of interest, a first appearance evaluation corresponding to the overlapping region;
    determining, based on the second region of interest, a second appearance evaluation corresponding to the overlapping region; and
    determining, based on the first appearance evaluation and the second appearance evaluation, the appearance evaluation of the subject for the appearance feature.
  16. The terminal device according to claim 10, wherein providing the appearance evaluation of the subject comprises:
    presenting a three-dimensional model of the subject, wherein the three-dimensional model is generated based on at least the first image and the second image; and
    presenting corresponding content of the appearance evaluation at different positions of the three-dimensional model.
  17. The terminal device according to any one of claims 10-16, wherein the appearance evaluation comprises at least one of a skin evaluation and an appearance score.
  18. An image capture device, comprising:
    a first camera configured to capture a first image associated with a first region of a subject;
    a second camera configured to capture a second image associated with a second region of the subject, wherein the first region is different from the second region; and
    a communication component configured to provide the first image and the second image to a terminal device for determining an appearance evaluation of the subject.
  19. The image capture device according to claim 18, wherein the first camera and the second camera are arranged symmetrically on opposite sides of the image capture device.
  20. The image capture device according to claim 19, wherein the image capture device further comprises a third camera configured to capture a third image associated with a third region of the subject, the third camera being at the same distance from the first camera and from the second camera,
    wherein the communication component is further configured to provide the third image to the terminal device.
  21. The image capture device according to any one of claims 18-20, wherein the appearance evaluation comprises at least one of a skin evaluation and an appearance score.
  22. An appearance analysis system, comprising:
    the terminal device according to any one of claims 10-17; and
    the image capture device according to any one of claims 18-21.
  23. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-9.
  24. A smart mirror, comprising:
    a first camera;
    a second camera, the first camera and the second camera being arranged symmetrically on opposite sides of the smart mirror;
    a third camera, the third camera being at the same distance from the first camera and from the second camera;
    at least one computing unit; and
    at least one memory, the at least one memory being coupled to the at least one computing unit and storing instructions for execution by the at least one computing unit, the instructions, when executed by the at least one computing unit, causing the smart mirror to perform actions comprising:
    obtaining a first image associated with a first region of a subject, the first image being captured by the first camera;
    obtaining a second image associated with a second region of the subject, the second image being captured by the second camera, wherein the first region is different from the second region;
    obtaining a third image associated with a third region of the subject, the third image being captured by the third camera;
    determining an appearance evaluation of the subject based on the first image, the second image, and the third image; and
    providing the appearance evaluation of the subject,
    wherein determining the appearance evaluation of the subject based on the first image, the second image, and the third image comprises:
    determining a first region of interest from the first image, the first region of interest representing a first set of appearance features of the subject;
    determining a second region of interest from the second image, the second region of interest representing a second set of appearance features of the subject;
    determining a third region of interest from the third image, the third region of interest representing a third set of appearance features of the subject; and
    determining, based on at least the first region of interest, the second region of interest, and the third region of interest, the appearance evaluation for the appearance features of the subject.
PCT/CN2021/106703 2020-07-24 2021-07-16 Appearance analysis method and electronic device WO2022017270A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21846466.7A EP4181014A4 (en) 2020-07-24 2021-07-16 APPEARANCE ANALYSIS METHOD AND ELECTRONIC DEVICE
US18/006,312 US20230298300A1 (en) 2020-07-24 2021-07-16 Appearance Analysis Method and Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010725551.X 2020-07-24
CN202010725551.XA CN113971823A (zh) Appearance analysis method and electronic device

Publications (1)

Publication Number Publication Date
WO2022017270A1 true WO2022017270A1 (zh) 2022-01-27

Family

ID=79585859

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106703 WO2022017270A1 (zh) 2020-07-24 2021-07-16 外表分析的方法和电子设备

Country Status (4)

Country Link
US (1) US20230298300A1 (zh)
EP (1) EP4181014A4 (zh)
CN (1) CN113971823A (zh)
WO (1) WO2022017270A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11910103B2 (en) * 2020-07-28 2024-02-20 Eys3D Microelectronics, Co. Electronic system and image aggregation method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120133753A1 (en) * 2010-11-26 2012-05-31 Chuan-Yu Chang System, device, method, and computer program product for facial defect analysis using angular facial image
CN106126017A (zh) 2016-06-20 2016-11-16 北京小米移动软件有限公司 Intelligent recognition method and apparatus, and terminal device
US20170345146A1 (en) * 2016-05-30 2017-11-30 Beijing Kuangshi Technology Co., Ltd. Liveness detection method and liveness detection system
CN108399364A (zh) 2018-01-29 2018-08-14 杭州美界科技有限公司 Facial state evaluation method with primary and secondary camera arrangement
CN110045872A (zh) 2019-04-25 2019-07-23 廖其锋 Daily-use smart mirror and method of use

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104586364B (zh) 2015-01-19 2019-07-09 武汉理工大学 Skin quality detection system and method
CN107145217A (zh) 2016-03-01 2017-09-08 上海光巢信息技术有限公司 Intelligent interaction system and intelligent interaction method
CN107437073A (zh) 2017-07-19 2017-12-05 竹间智能科技(上海)有限公司 Facial skin analysis method and system based on deep learning and generative adversarial networks
CN109805673A (zh) 2019-03-27 2019-05-28 李睿 Smart mirror
CN110960036A (zh) 2019-10-31 2020-04-07 北京蓝海达信科技有限公司 Smart mirror system and method with skin beautification and makeup guidance functions
CN111374489A (zh) 2020-04-22 2020-07-07 深圳市锐吉电子科技有限公司 Method and apparatus for skin measurement using a smart mirror

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120133753A1 (en) * 2010-11-26 2012-05-31 Chuan-Yu Chang System, device, method, and computer program product for facial defect analysis using angular facial image
US20170345146A1 (en) * 2016-05-30 2017-11-30 Beijing Kuangshi Technology Co., Ltd. Liveness detection method and liveness detection system
CN106126017A (zh) 2016-06-20 2016-11-16 北京小米移动软件有限公司 Intelligent recognition method and apparatus, and terminal device
CN108399364A (zh) 2018-01-29 2018-08-14 杭州美界科技有限公司 Facial state evaluation method with primary and secondary camera arrangement
CN110045872A (zh) 2019-04-25 2019-07-23 廖其锋 Daily-use smart mirror and method of use

Also Published As

Publication number Publication date
CN113971823A (zh) 2022-01-25
EP4181014A4 (en) 2023-10-25
EP4181014A1 (en) 2023-05-17
US20230298300A1 (en) 2023-09-21

Similar Documents

Publication Publication Date Title
WO2021213120A1 (zh) Screen projection method and apparatus, and electronic device
WO2020259452A1 (zh) Full-screen display method and device for a mobile terminal
CN110114747B (zh) Notification processing method and electronic device
US20230276014A1 (en) Photographing method and electronic device
WO2020029306A1 (zh) Image capturing method and electronic device
WO2022127787A1 (zh) Image display method and electronic device
WO2022017261A1 (zh) Image synthesis method and electronic device
US20230041696A1 (en) Image Synthesis Method, Electronic Device, and Non-Transitory Computer-Readable Storage Medium
CN110248037B (zh) Identity document scanning method and apparatus
WO2022001258A1 (zh) Multi-screen display method and apparatus, terminal device, and storage medium
WO2022042766A1 (zh) Information display method, terminal device, and computer-readable storage medium
CN111103922A (zh) Camera, electronic device, and identity verification method
CN115589051B (zh) Charging method and terminal device
CN113452945A (zh) Method and apparatus for sharing an application interface, electronic device, and readable storage medium
CN114242037A (zh) Virtual character generation method and apparatus
CN110138999B (zh) Document scanning method and apparatus for a mobile terminal
CN115484380A (zh) Photographing method, graphical user interface, and electronic device
WO2022206494A1 (zh) Target tracking method and apparatus
WO2022105702A1 (zh) Method for saving an image and electronic device
CN112449101A (zh) Photographing method and electronic device
US20230162529A1 (en) Eye bag detection method and apparatus
CN114222020B (zh) Positional relationship recognition method, device, and readable storage medium
WO2022017270A1 (zh) Appearance analysis method and electronic device
WO2022078116A1 (zh) Brush effect picture generation method, image editing method, device, and storage medium
WO2022033344A1 (zh) Video anti-shake method, terminal device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21846466

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021846466

Country of ref document: EP

Effective date: 20230207