CN116529779A - Electronic device, method of controlling electronic device, and computer-readable storage medium - Google Patents

Electronic device, method of controlling electronic device, and computer-readable storage medium

Info

Publication number
CN116529779A
Authority
CN
China
Prior art keywords
camera image
processor
electronic device
acquisition process
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080106203.2A
Other languages
Chinese (zh)
Inventor
小林大真
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN116529779A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/255 — Detecting or recognising potential candidate objects based on visual cues, e.g. shapes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

An electronic device includes: an imaging module that takes a photograph of a subject and obtains a camera image; and a processor that controls the imaging module and acquires the camera image. The processor performs a first acquisition process for acquiring a reference camera image including an image of the subject, a detection process for detecting appearance information related to the appearance of the subject from the reference camera image, an analysis process for analyzing the detected appearance information and determining, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject, and a feedback process for feeding back the determined content to a user operating the electronic device.

Description

Electronic device, method of controlling electronic device, and computer-readable storage medium
Technical Field
The present invention relates to an electronic device, a method of controlling the same, and a computer-readable storage medium.
Background
Conventionally, there are electronic devices (e.g., smartphones) equipped with a digital camera that captures a subject (e.g., a person).
If the user and the subject are different people, the user has to ask the subject how to adjust the composition of the image to suit the subject's preferences. Otherwise, the photograph may not reflect those preferences.
On the other hand, if the user and the subject are the same person, manual adjustments to composition and image processing (color editing, etc.) are required to match the user's own preferences.
Therefore, it is desirable that an electronic device such as a smartphone be able to take a picture that matches the taste of the subject without giving special instructions to the subject.
Disclosure of Invention
The present disclosure is directed to solving at least one of the above-mentioned technical problems. Accordingly, there is a need for providing an electronic device and a method of controlling the same.
According to the present disclosure, an electronic device includes:
an imaging module that takes a photograph of a subject and obtains a camera image; and
a processor that controls the imaging module and acquires a camera image, wherein,
the processor performs a first acquisition process for acquiring a reference camera image that includes an image of the subject,
the processor performs a detection process for detecting appearance information related to the appearance of the subject from the reference camera image,
the processor performs an analysis process for analyzing the detected appearance information of the subject and determines, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject, and
the processor performs a feedback process for feeding back the content of the second acquisition process determined by the analysis process to a user operating the electronic device.
In the electronic device, the second acquisition process includes at least one of a framing process, a filtering process, and an image processing process,
wherein the framing process sets a recommended composition for photographing the subject to acquire the final camera image,
wherein the filtering process applies an image filter to the reference camera image, or to the camera image at the time of photographing, to acquire the final camera image, and
wherein the image processing process applies image processing to the reference camera image, or to the camera image at the time of photographing, to acquire the final camera image.
In the electronic device, the processor feeds back the content of the second acquisition process to the user through display output or voice output in the feedback process.
In the electronic device, in the detection process, the processor detects article information about an article overlapping the subject from the reference camera image, and
in the analysis process, the processor analyzes the appearance information including the article information and determines the content of the second acquisition process based on the analysis result.
In the electronic device, in the detection process, the processor further detects background information related to a background separated from the subject from the reference camera image, and
in the analysis process, the processor analyzes the appearance information and the background information and determines the content of the second acquisition process based on the analysis result.
In the electronic device, in the detection process, the processor further detects additional information from outside the electronic device, and
in the analysis process, the processor analyzes the appearance information and the additional information and determines the content of the second acquisition process based on the analysis result.
In the electronic device, in the first acquisition process, the processor acquires user intention information in response to a user operation input, and
in the analysis process, the processor determines the content of the second acquisition process based on the analysis result and the user intention information.
In the electronic device, the imaging module is designed to also acquire distance depth information by imaging the subject, and
in the first acquisition process, the processor acquires the three-dimensional shape of the clothing of the subject based on the distance depth information.
In the electronic device, in the analysis process, the processor performs a process for analyzing the taste of the subject based on the detection result of the appearance information of the subject, and
the processor performs a process of determining the content of the second acquisition process for acquiring the final camera image based on the analyzed taste of the subject.
In the electronic device, in the analysis process, the processor performs a process of determining the content of the second acquisition process for acquiring the final camera image from the detection result of the appearance information of the subject by using a reasoner trained in advance by machine learning.
In the electronic device, after the feedback process, the processor automatically photographs the subject by controlling the imaging module based on the content of the second acquisition process to acquire a final camera image including the subject.
In the electronic device, after the feedback process, the processor automatically performs image processing on the reference camera image based on the content of the second acquisition process to acquire a final camera image including the subject.
In the electronic device, after the feedback process, in response to an operation input of the user, the processor acquires a final camera image including the subject by controlling the imaging module to photograph the subject based on the content of the second acquisition process.
In the electronic device, after the feedback process, in response to an operation input of the user, the processor acquires a final camera image including the subject by performing image processing on the reference camera image based on the content of the second acquisition process.
In the electronic device, the imaging module includes:
a first camera module that captures the subject and acquires a first camera image; and
a distance sensor module that acquires distance depth information by using light, and
in the first acquisition process, the processor acquires the reference camera image based on the first camera image and the distance depth information by controlling the first camera module and the distance sensor module.
In the electronic device, the imaging module includes:
a first camera module that captures the subject and acquires a first camera image; and
a second camera module that captures the subject and acquires a second camera image, and
in the first acquisition process, the processor acquires the reference camera image based on the first camera image and the second camera image by controlling the first camera module and the second camera module.
In the electronic device, the imaging module includes a first camera module that captures the subject and acquires a first camera image, and
in the first acquisition process, the processor acquires the reference camera image based on the first camera image by controlling the first camera module.
The electronic device further includes an input module that receives an operation input of the user.
According to the present disclosure, a method of controlling an electronic device is provided; the electronic device includes: an imaging module that takes a photograph of a subject and obtains a camera image; and a processor that controls the imaging module and acquires the camera image, and the method includes:
the processor performing a first acquisition process for acquiring a reference camera image that includes an image of the subject,
the processor performing a detection process for detecting appearance information related to the appearance of the subject from the reference camera image,
the processor performing an analysis process for analyzing the detected appearance information of the subject and determining, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject, and
the processor performing a feedback process for feeding back the content of the second acquisition process determined by the analysis process to a user operating the electronic device.
According to the present disclosure, a computer-readable storage medium storing a computer program is provided, wherein the computer program, when executed by a processor, implements a method for controlling an electronic device, and the method includes:
the processor performing a first acquisition process for acquiring a reference camera image that includes an image of a subject,
the processor performing a detection process for detecting appearance information related to the appearance of the subject from the reference camera image,
the processor performing an analysis process for analyzing the detected appearance information of the subject and determining, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject, and
the processor performing a feedback process for feeding back the content of the second acquisition process determined by the analysis process to a user operating the electronic device.
Drawings
These and/or other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following description, taken in conjunction with the accompanying drawings, wherein:
fig. 1 is a diagram showing an example of an arrangement of an electronic device 100 and an object 101 according to an embodiment of the present invention;
fig. 2 is a diagram showing an example of the configuration of the electronic apparatus 100 shown in fig. 1;
fig. 3A is a diagram illustrating another example of the imaging module 102 of the electronic device 100 shown in fig. 1 and 2;
Fig. 3B is a diagram illustrating another example of the imaging module 102 of the electronic device 100 shown in fig. 1 and 2;
fig. 4 is a diagram showing an example of an overall process flow performed when the electronic apparatus 100 shown in fig. 1 and 2 takes a photograph;
fig. 5A is a diagram showing an example of a process of applying user intention information to an acquired camera image;
fig. 5B is a diagram showing another example of a process of applying user intention information to an acquired camera image;
fig. 5C is a diagram showing still another example of a process of applying user intention information to an acquired camera image;
fig. 6 is a diagram showing a specific example of the flow of the detection process (step S2) shown in fig. 4;
fig. 7 is a diagram showing an example of information detected from the camera image in the detection process (step S2) shown in fig. 4;
fig. 8A is a diagram showing a specific example of a flow of applying the rule-based analysis method to the analysis process (step S3) shown in fig. 4;
fig. 8B is a diagram showing a specific example of a flow of applying the analysis method by machine learning to the analysis process (step S3) shown in fig. 4;
fig. 9 is a diagram showing an example of a specific flow of the analysis process (step S31) of the taste of the object shown in fig. 8A;
Fig. 10A is a diagram showing an example of extracting taste of an object from appearance information of the object;
fig. 10B is a diagram showing another example of extracting taste of an object from appearance information of the object;
fig. 11 is a diagram showing an example of a specific flow of determination of the second acquisition process (step S32) shown in fig. 8A;
fig. 12A is a diagram showing an example of the output of the feedback process (step S4) shown in fig. 4; and
fig. 12B is a diagram showing another example of the output of the feedback process (step S4) shown in fig. 4.
Detailed Description
Embodiments of the present disclosure will be described in detail, and examples of the embodiments are illustrated in the accompanying drawings. Throughout the specification, identical or similar elements and elements having identical or similar functions are denoted by identical reference numerals. The embodiments described herein with reference to the drawings are illustrative, are intended to explain the present disclosure, and should not be construed as limiting the present disclosure.
Fig. 1 is a diagram showing an example of an arrangement of an electronic device 100 and an object 101 according to an embodiment of the present invention. Fig. 2 is a diagram showing an example of the structure of the electronic apparatus 100 shown in fig. 1. Fig. 3A is a diagram illustrating another example of the imaging module 102 of the electronic device 100 illustrated in fig. 1 and 2. Fig. 3B is a diagram illustrating another example of the imaging module 102 of the electronic device 100 illustrated in fig. 1 and 2.
As shown in fig. 1 and 2, the electronic device 100 includes, for example, a first camera module 10, a distance sensor module 20, and an image signal processor 30; the image signal processor 30 controls the first camera module 10 and the distance sensor module 20 and processes camera image data acquired from the first camera module 10. Reference numeral 101 in fig. 1 denotes the subject, which is one person or a plurality of persons.
In the example of fig. 1 and 2, the imaging module 102 is constituted by the first camera module 10 and the distance sensor module 20. The imaging module 102 is defined as a module that photographs at least one person (the subject 101) and acquires a camera image.
As shown in fig. 1, the first camera module 10 includes, for example, a main lens 10a capable of focusing on the subject, a main image sensor 10b that detects an image input via the main lens 10a, and a main image sensor driver 10c that drives the main image sensor 10b.
Further, as shown in fig. 2, the first camera module 10 includes, for example, a gyro sensor 10d that detects the angular velocity and acceleration of the first camera module 10, a focus & OIS actuator 10f that actuates the main lens 10a, and a focus & OIS driver 10e that drives the focus & OIS actuator 10f.
The first camera module 10 acquires, for example, a first camera image (main camera image) of the subject 101 in fig. 1.
As shown in fig. 2, the distance sensor module 20 includes, for example, a time of flight (ToF) lens 20a, a distance sensor 20b that detects reflected light input via the ToF lens 20a, a distance sensor driver 20c that drives the distance sensor 20b, and a projector 20d that outputs pulsed light.
The distance sensor module 20 acquires distance depth information of the subject 101 by using light. Specifically, the distance sensor module 20 is configured to acquire time-of-flight (ToF) depth information (ToF depth values) as the distance depth information by, for example, emitting pulsed light toward the subject 101 and detecting the light reflected from the subject 101.
Based on the main camera image obtained by the first camera module 10 and the ToF depth information (distance depth information) obtained by the distance sensor module 20, the image signal processor 30 controls, for example, the first camera module 10 and the distance sensor module 20 to acquire a camera image (the main camera image).
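For orientation, the relation that pulsed-light ToF sensing of this kind typically relies on can be written as follows (a general ToF principle added here for clarity; it is not recited in the original text):

```latex
d = \frac{c \cdot \Delta t}{2}
```

where d is the distance to the subject 101, c is the speed of light, and Δt is the measured delay between the pulse emitted by the projector 20d and the reflection detected by the distance sensor 20b.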
Further, as shown in fig. 2, for example, the electronic device 100 includes a global navigation satellite system (GNSS) module 40, a wireless communication module 41, a codec 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (IMU) 47, a main processor 48, and a memory 49.
For example, the GNSS module 40 measures the current location of the electronic device 100.
For example, the wireless communication module 41 performs wireless communication with the internet (server 200).
The wireless communication module 41 receives data of the machine-learned reasoner from the server 200. The reasoner data is then stored, for example, in memory 49.
For example, as shown in fig. 2, the codec 42 bidirectionally performs encoding and decoding using a predetermined encoding/decoding method.
For example, the speaker 43 outputs sound based on the sound data decoded by the codec 42.
For example, the microphone 44 outputs sound data to the codec 42 based on the input sound.
The display module 45 displays predefined information. The display module 45 is, for example, a touch panel.
The input module 46 receives an input of a user (operation of the user). The input module 46 is included in, for example, a touch panel.
The IMU 47 detects, for example, the angular velocity and acceleration of the electronic device 100.
The main processor 48 controls the GNSS module 40, the wireless communication module 41, the codec 42, the speaker 43, the microphone 44, the display module 45, the input module 46, and the IMU 47.
In the example of fig. 2, the processor 103 is composed of the image signal processor 30 and the main processor 48. The processor 103 is defined as a controller for controlling the imaging module 102 and acquiring a camera image.
For example, the processor 103 controls the first camera module 10 and the distance sensor module 20 in the first acquisition process to acquire a reference camera image based on the first camera image and the distance depth information.
The memory 49 stores the programs and data required by the image signal processor 30 to control the first camera module 10 and the distance sensor module 20, the acquired image data, and the programs (including the machine-learned reasoner data) and data required by the main processor 48 to control the electronic device 100.
For example, the memory 49 includes a computer-readable storage medium storing a computer program that, when executed by the main processor 48 (processor 103), implements a method of controlling the electronic device 100. For example, the method includes: the processor 103 performs a first acquisition process for acquiring a reference camera image including an image of the subject by controlling the imaging module 102; the processor 103 performs a detection process for detecting appearance information related to the appearance of the subject from the reference camera image; the processor 103 performs an analysis process for analyzing the detected appearance information of the subject and determines, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject; and the processor 103 performs a feedback process for feeding back the content of the second acquisition process determined by the analysis process to a user operating the electronic device.
In the present embodiment, the electronic device 100 having the above-described configuration is a mobile phone such as a smart phone, but may be other types of electronic devices (e.g., a tablet computer and a PDA) including the imaging module 102.
In addition to the example shown in fig. 1 and 2, for example, as shown in fig. 3A, the imaging module 102 may include a first camera module 10X that captures the subject 101 and acquires a first camera image, and a second camera module 10Y that captures the subject 101 and acquires a second camera image.
In the example shown in fig. 3A, the processor 103 controls the first camera module 10X and the second camera module 10Y in the first acquisition process, and acquires a reference camera image based on the first camera image and the second camera image. Distance depth information is then calculated from the parallax between the first camera module 10X and the second camera module 10Y.
Further, in addition to the example shown in fig. 1 and 2, for example, as shown in fig. 3B, the imaging module 102 may include only a first camera module 10 that captures the subject 101 and acquires a first camera image.
In the example shown in fig. 3B, the processor 103 controls the first camera module 10 in the first acquisition process, and acquires a reference camera image based on the first camera image. Distance depth information is then calculated based on the first camera image acquired by the first camera module 10 (for example, by monocular depth estimation).
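For the two-camera configuration of fig. 3A, the parallax-to-depth conversion is typically the standard stereo triangulation relation (a textbook relation added for clarity, not recited in the original text):

```latex
Z = \frac{f \cdot B}{d}
```

where Z is the depth of a scene point, f is the focal length in pixels, B is the baseline between the first camera module 10X and the second camera module 10Y, and d is the disparity of the point between the first and second camera images.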
Next, an example of a method for controlling the electronic apparatus 100 having the above-described configuration and function will be described.
Fig. 4 is a diagram showing an example of an overall process flow performed when the electronic apparatus 100 shown in fig. 1 and 2 takes a photograph. Fig. 5A is a diagram showing an example of a procedure for applying user intention information to an acquired camera image. Fig. 5B is a diagram showing another example of a process of applying user intention information to an acquired camera image. Fig. 5C is a diagram showing still another example of a process of applying user intention information to an acquired camera image.
For example, as shown in fig. 4, the processor 103 first controls the imaging module 102 to execute a first acquisition process for acquiring a reference camera image including an image of the subject (step S1).
Here, as described above, the imaging module 102 can also acquire distance depth information (a depth image) by photographing the subject. Thus, in the first acquisition process (step S1), the processor 103 may acquire the three-dimensional shape of the clothing of the subject based on, for example, the distance depth information.
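As an illustration of how a three-dimensional clothing shape can be recovered from the distance depth information, the following is a minimal sketch that back-projects depth pixels inside a clothing mask into a point cloud using a pinhole camera model. The function name, the clothing mask input, and the intrinsics are assumptions chosen for illustration; the disclosure does not specify this implementation.

```python
import numpy as np

def clothing_point_cloud(depth: np.ndarray, clothing_mask: np.ndarray,
                         fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project masked depth pixels into 3D points (pinhole model).

    depth: (H, W) distance depth map in metres (e.g., ToF depth values).
    clothing_mask: (H, W) boolean mask covering the subject's clothing.
    fx, fy, cx, cy: camera intrinsics (focal lengths and principal point).
    """
    v, u = np.nonzero(clothing_mask)     # pixel rows/columns inside the mask
    z = depth[v, u]                      # depth of each masked pixel
    x = (u - cx) * z / fx                # back-projection along the x axis
    y = (v - cy) * z / fy                # back-projection along the y axis
    return np.stack([x, y, z], axis=1)   # (N, 3) points approximating the clothing shape
```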
Further, for example, in the first acquisition process (step S1), in response to an operation input by the user, the processor 103 acquires user intention information for selecting a predetermined person or persons among the subjects included in the reference camera image (fig. 5A). In the example of fig. 5A, a plurality of persons are selected.
Similarly, in the first acquisition process (step S1), in response to an operation input by the user, the processor 103 acquires user intention information for selecting a specific article among those included in the reference camera image (fig. 5B).
Further, in the first acquisition process (step S1), in response to an operation input by the user, the processor 103 acquires user intention information for specifying a target to be excluded from the reference camera image (fig. 5C).
Thereafter, the processor 103 performs a detection process for detecting appearance information (fashion information) related to the appearance of the subject from the reference camera image (step S2).
The appearance information (fashion information) includes at least one of the coordination and color scheme of the clothing of the subject 101, and the hairstyle, skin color, and makeup of the subject 101.
Further, in the detection process (step S2), the processor 103 also detects additional information from outside the electronic device 100.
Further, in the detection process (step S2), the processor 103 detects article information about an article overlapping the subject from the reference camera image.
Further, in the detection process (step S2), the processor 103 also detects background information about the background separated from the subject from the reference camera image.
Thereafter, the processor 103 performs an analysis process of analyzing at least the detected appearance information of the subject and determining, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject (step S3).
In the analysis process (step S3), the processor 103 may determine the content of the second acquisition process based on the analysis result and the user intention information.
For example, the second acquisition process includes at least one of a framing process, a filtering process, and an image processing process.
The framing process sets a recommended composition for photographing the subject to acquire the final camera image.
The filtering process applies an image filter to the reference camera image, or to the camera image at the time of photographing, to acquire the final camera image.
The image processing process applies image processing to the reference camera image, or to the camera image at the time of photographing, to acquire the final camera image.
In addition, in the analysis process (step S3), for example, the processor 103 may analyze the appearance information including the article information and determine the content of the second acquisition process based on the analysis result.
Further, in the analysis process (step S3), for example, the processor 103 may analyze the appearance information and the background information and determine the content of the second acquisition process based on the analysis result.
After that, the processor 103 executes a feedback process for feeding back the content of the second acquisition process determined by the analysis process to the user operating the electronic device (step S4).
In the feedback process (step S4), the processor 103 may feed back the content of the second acquisition process to the user through a display output or a voice output.
For example, the processor 103 may display the content of the second acquisition process on the display module 45 shown in fig. 2 in the feedback process (step S4).
On the other hand, for example, in the feedback process (step S4), the processor 103 may output the content of the second acquisition process from the speaker 43 shown in fig. 2.
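A minimal sketch of this feedback dispatch follows; the callables `display` and `tts` are hypothetical stand-ins for the display module 45 and the codec 42/speaker 43 path, not APIs from the disclosure.

```python
def feedback(content: str, display=None, tts=None) -> None:
    # Step S4: present the decided content of the second acquisition process
    # to the user via display output and/or voice output.
    message = f"Proposed adjustment: {content}"
    if display is not None:
        display(message)   # e.g., render text or thumbnails on the touch panel
    if tts is not None:
        tts(message)       # e.g., synthesize speech through the codec and speaker
```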
After the feedback process (step S4), the processor 103 may automatically photograph the subject by controlling the imaging module 102 based on the content of the second acquisition process, to acquire a final camera image including the subject 101.
Further, for example, after the feedback process (step S4), the processor 103 may automatically perform image processing on the reference camera image based on the content of the second acquisition process to acquire a final camera image including the subject.
Further, for example, after the feedback process (step S4), in response to an operation input by the user, the processor 103 may take a photograph of the subject 101 based on the content of the second acquisition process to acquire a final camera image including the subject 101.
Further, after the feedback process (step S4), for example, in response to an operation input by the user, the processor 103 may acquire a final camera image including the subject 101 by performing image processing on the reference camera image based on the content of the second acquisition process.
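Putting steps S1 to S4 together, the overall flow of fig. 4 can be summarized in the following sketch. All names here (capture_reference, detect, analyze, feedback, capture_final) are hypothetical placeholders chosen for illustration, not APIs from the disclosure.

```python
def assisted_shot(imaging_module, detector, analyzer, ui):
    # Step S1: first acquisition process - reference camera image (and, if
    # available, distance depth information), plus optional user intention.
    reference_image, depth = imaging_module.capture_reference()
    intention = ui.collect_user_intention(reference_image)   # figs. 5A-5C

    # Step S2: detection process - appearance / article / background / additional info.
    detected = detector.detect(reference_image, depth)

    # Step S3: analysis process - decide the content of the second acquisition
    # process (framing, filtering, and/or image processing).
    content = analyzer.analyze(detected, intention)

    # Step S4: feedback process - present the decided content via display or voice.
    ui.feedback(content)

    # Afterwards: capture or post-process, automatically or on user input.
    return imaging_module.capture_final(content)
```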
Here, a specific example of the flow of the detection process (step S2) shown in fig. 4 described above will be described.
Fig. 6 is a diagram showing a specific example of the flow of the detection process (step S2) shown in fig. 4. Fig. 7 is a diagram showing an example of information detected from a camera image in the detection process (step S2) shown in fig. 4.
First, for example, as shown in fig. 6, in the detection process (step S2), the processor 103 detects all objects in the reference camera image (step S21).
Next, the processor 103 determines whether valid distance depth information is available (step S22).
When valid distance depth information is available, the processor 103 detects the three-dimensional shape of the subject (step S23) and proceeds to step S24. When valid distance depth information is not available, the processor 103 proceeds directly to step S24.
Next, the processor 103 detects the appearance information (fig. 7) from the reference camera image, and ends the detection process (step S24).
Further, in step S24, the processor 103 may also detect article information, background information, and additional information from the reference camera image.
The appearance information includes, for example, information about: (1) the hair, skin, lips, eyes, etc. of the subject; (2) the shape of the eyebrows, the hairstyle, etc.; (3) the facial expression of the subject; (4) the posture of the subject; and (5) the gestures of the subject.
The article information includes, for example, information about: (1) the colors of clothing and articles; (2) the shapes of clothing and articles; and (3) the materials estimated from the textures of clothing and articles.
The background information includes, for example, information about: (1) objects other than people reflected in the background; and (2) background edges and saliency maps.
The additional information includes, for example, the season, weather, area, location, and the like where the electronic device 100 is located.
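The fig. 6 flow can be condensed into the following sketch. The detector callables are supplied by the caller; their names and the returned dictionary layout are assumptions for illustration only.

```python
from typing import Any, Callable, Mapping, Optional

def detection_process(reference_image: Any,
                      detectors: Mapping[str, Callable[..., Any]],
                      depth: Optional[Any] = None) -> dict:
    # Step S21: detect all objects in the reference camera image.
    objects = detectors["objects"](reference_image)

    # Steps S22/S23: if valid distance depth information is available,
    # also detect the three-dimensional shape of the subject.
    shapes = detectors["shape"](objects, depth) if depth is not None else None

    # Step S24: detect appearance, article, background, and additional information.
    return {
        "appearance": detectors["appearance"](objects),   # hair, expression, posture, gestures
        "articles": detectors["articles"](objects),       # colors, shapes, estimated materials
        "background": detectors["background"](reference_image),
        "additional": detectors["additional"](),          # season, weather, area, location
        "shapes": shapes,
    }
```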
Next, a specific example of the flow of the analysis process (step S3) shown in fig. 4 will be described.
Fig. 8A is a diagram showing a specific example of a flow of applying the rule-based analysis method to the analysis process (step S3) shown in fig. 4. Fig. 8B is a diagram showing a specific example of a flow of applying the analysis method by machine learning to the analysis process (step S3) shown in fig. 4.
[ examples of applying rule-based analysis methods ]
First, as shown in fig. 8A, in the analysis process (step S3), the processor 103 performs a process of analyzing the taste (preferences, emotions, etc.) of the subject based on the detection result of the appearance information of the subject (step S31).
Next, the processor 103 determines the content of the second acquisition process for acquiring the final camera image based on the analyzed taste of the subject, and ends the analysis process (step S32).
In the analysis process (step S3), the processor 103 may analyze the appearance information and the additional information and determine the content of the second acquisition process based on the analysis result. Further, the processor 103 may analyze the appearance information and the background information and determine the content of the second acquisition process based on the analysis result.
[ examples of analysis methods applying machine learning ]
For example, as shown in fig. 8B, in the analysis process, the processor 103 determines the content of the second acquisition process for acquiring the final camera image by using a reasoner trained in advance by machine learning, based on the detection result of the appearance information of the subject, and ends the analysis process (step S33).
The reasoner may be stored in the system (the memory 49 in fig. 2) of the electronic device 100 from the beginning, or may be downloaded from the server 200 shown in fig. 2.
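In code form, the machine-learning path of fig. 8B reduces to a single inference call. The sketch below assumes the reasoner is any callable that maps appearance features to scores over candidate contents; the feature and score formats are illustrative assumptions, not the disclosed interface.

```python
def analyze_with_reasoner(reasoner, appearance_features) -> str:
    # Step S33: the pre-trained reasoner (loaded from memory 49 or downloaded
    # from server 200) maps the detection result of the appearance information
    # directly to the content of the second acquisition process.
    scores = reasoner(appearance_features)   # e.g., {"diagonal_framing": 0.7, ...}
    return max(scores, key=scores.get)       # highest-scoring candidate content
```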
Next, an example of a specific flow of the analysis process (step S31) of the taste of the object shown in fig. 8A will be described.
Fig. 9 is a diagram showing an example of a specific flow of the analysis process (step S31) of the taste of the object shown in fig. 8A. Fig. 10A is a diagram showing an example of extracting taste of an object from appearance information of the object. Fig. 10B is a diagram showing another example of extracting the taste of the object from the appearance information of the object.
First, as shown in fig. 9, in the analysis process of the taste (preferences/emotions) of the subject, the processor 103 determines whether valid user intention information is available (step S311).
When valid user intention information is not available, the processor 103 does not designate a main subject in the reference camera image (step S312).
On the other hand, when valid user intention information is available, the processor 103 sets the person or article specified by the user intention information as the main subject (step S313).
Next, the processor 103 analyzes the meaning of the subject/articles as in, for example, examples (1) to (6) below (step S314).
(1) The processor 103 analyzes personality based on the subject's hairstyle (e.g., an exposed forehead or short hair: active/masculine; long hair: gentle/feminine; etc.).
(2) The processor 103 analyzes the makeup tendency of the subject's face.
(3) The processor 103 analyzes the subject's personality/emotion based on the color and shape of the subject's clothing/fashion items (clothing with many ruffles: lovely; a bright red blouse: enthusiastic; etc.).
(4) The processor 103 analyzes the subject's facial expression (no expression: cool; smiling or winking: lovely).
(5) The processor 103 analyzes the posture of the subject (an upright standing posture: serious; a raised-fist posture: lively; etc.).
(6) The processor 103 analyzes the gestures of the subject, whose meaning can be read in part from the position, orientation, and shape of the hands.
Next, the processor 103 performs, for example, the combination analyses of examples (1) to (3) below (step S315).
(1) The processor 103 analyzes the combination of the color and shape of the subject's clothing with the color and shape of the subject's fashion items.
(2) The processor 103 determines a combination pattern of the background and the appearance information of the subject by determining whether the background reflected in the reference camera image is simple or complex.
(3) The processor 103 determines a combination pattern of the appearance information of the subject and information obtained from outside the image, for example, the season, weather, region, specific place, and the like.
Next, the processor 103 analyzes the taste of the subject based on the above analysis results (step S316).
For example, as shown in fig. 10A, in the rule-based analysis method, the processor 103 extracts the taste of the subject (a masculine, static impression) from the appearance information (the colors and shapes of the clothing and articles) of the subject 101.
As another example, as shown in fig. 10B, the processor 103 extracts the taste of the subject (a girlish/pop, dynamic impression) from the appearance information (the color and shape of the clothing) of the subject.
In the analysis method by machine learning shown in fig. 8B, the processor 103 may infer the taste of the subject with the above-described reasoner based on the appearance information of the subject.
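A compact sketch of steps S311 to S316 follows. The rule table is a loose transcription of the examples above; the feature names and the majority-vote aggregation are illustrative assumptions, not the disclosed algorithm.

```python
from collections import Counter

MEANING_RULES = {                      # step S314: meaning analysis (illustrative)
    "short_hair": "active", "long_hair": "gentle",
    "ruffled_clothes": "lovely", "bright_red_top": "enthusiastic",
    "smile": "lovely", "no_expression": "cool",
}

def analyze_taste(appearance_features, user_intention=None) -> dict:
    # Steps S311-S313: designate a main subject only if valid user intention exists.
    main_subject = user_intention if user_intention else None

    # Step S314: map individual appearance features to impressions.
    impressions = [MEANING_RULES[f] for f in appearance_features if f in MEANING_RULES]

    # Steps S315-S316: combine the impressions into a single taste (majority vote here).
    taste = Counter(impressions).most_common(1)[0][0] if impressions else "neutral"
    return {"main_subject": main_subject, "taste": taste}
```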
Next, an example of a specific flow of determination of the second acquisition process (step S32) shown in fig. 8A will be described.
Fig. 11 is a diagram showing an example of a specific flow of determination of the second acquisition process (step S32) shown in fig. 8A.
First, as shown in fig. 11, the processor 103 classifies the current situation into a case using a classifier, based on the analyzed taste and the appearance information of the subject (step S321).
Next, the processor 103 determines the second acquisition process corresponding to the classified case among cases 1 to n (steps S322-1 to S322-n).
Specific examples of rules based on the taste of the subject include the following.
[ case where the second acquisition procedure is a framing procedure ]
If the subject has a serious or very cool image, a composition in which the subject is arranged in an orderly manner according to a static composition rule, such as a bisection composition, is recommended.
If the subject has a dynamic or lively image, a composition according to a dynamic composition rule, such as a diagonal composition, is recommended.
[ case where the second acquisition procedure is a filtering procedure ]
If the subject looks lovely, a filter is applied to shift the image toward light pink.
If the subject looks pop, a filter is applied to shift the image toward vivid tones.
If the subject looks cool and has a calm image, a filter is applied to shift the colors toward calm tones.
[ case where the second acquisition procedure is an image processing procedure ]
If the subject's fashion has a lovely image, heart-shaped decorations are added in image processing.
If the subject is cool and has a calm image, a retro-style texture is superimposed on the image.
As additional rules, not only rules based on the taste of the subject but also the appearance information itself may be used, as in the following examples.
When an accent color is detected on the subject, the accented item is placed according to a composition rule such as a bisection composition.
The composition is arranged according to composition rules (e.g., placing color boundaries on the lines of a trisection composition).
A color filter is applied according to the color of the clothing worn by the subject.
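The case classification of fig. 11 plus the rule examples above can be sketched as a simple lookup. The table entries paraphrase the examples in the text; treat them as one possible reading, not the complete rule set.

```python
TASTE_RULES = {
    "serious": {"framing": "bisection composition"},
    "dynamic": {"framing": "diagonal composition"},
    "lovely":  {"filter": "light pink", "image_processing": "heart decorations"},
    "pop":     {"filter": "vivid tones"},
    "cool":    {"filter": "calm tones", "image_processing": "retro texture"},
}

def determine_second_acquisition(taste: str) -> dict:
    # Step S321: classify the situation by the analyzed taste.
    # Steps S322-1..S322-n: return the content of the matching case.
    return TASTE_RULES.get(taste, {})   # empty dict: no change proposed
```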
Next, an example of the output of the feedback process (step S4) shown in fig. 4 will be described.
Fig. 12A is a diagram showing an example of the output of the feedback process (step S4) shown in fig. 4. Fig. 12B is a diagram showing another example of the output of the feedback process (step S4) shown in fig. 4.
As described above, in the feedback process, the processor 103 may feed back the content of the second acquisition process to the user through the display output or the voice output.
For example, as shown in fig. 12A, in the feedback process, the processor 103 may display the content of the second acquisition process (a composition change G1) on the display module 45.
Further, for example, as shown in fig. 12B, in the feedback process, the processor 103 may display the content of the second acquisition process (thumbnails G2 of image processing/filter examples) on the display module 45.
After the feedback process (step S4), the processor 103 acquires a final camera image including the subject 101 based on the content of the second acquisition process.
As described above, according to the electronic device of the present disclosure, when the user and the subject are different, the taste of the subject can be reflected in the photograph without interviewing the subject. Thus, even if the user does not know the subject at all, a photograph that the subject likes can be obtained.
Further, according to the electronic device of the present disclosure, when the user and the subject are the same, the electronic device captures the user's own taste as the taste of the subject. Therefore, the user does not need to perform manual operations to reflect his/her own taste in the photograph.
In describing embodiments of the present disclosure, it should be understood that terms such as "central," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "front," "rear," "back," "left," "right," "vertical," "horizontal," "top," "bottom," "interior," "exterior," "clockwise," and "counterclockwise" should be interpreted as referring to the directions or locations depicted or shown in the drawings at the time of discussion. These related terms are only used to simplify the description of the present disclosure and do not indicate or imply that the devices or elements referred to must have a particular orientation or must be constructed or operated in a particular orientation. Accordingly, these terms should not be construed as limiting the present disclosure.
Furthermore, terms such as "first" and "second" are used herein for descriptive purposes and are not intended to indicate or imply relative importance or significance or the number of technical features indicated. Thus, features defined as "first" and "second" may include one or more of the features. In the specification of the present disclosure, unless otherwise specified, "a plurality" means "two or more than two".
In describing embodiments of the present disclosure, unless otherwise indicated or limited, terms "mounted," "connected," "coupled," and the like are used broadly and may be, for example, fixed, removable, or integral, or may be a mechanical or electrical connection, or may be a direct or indirect connection via an intermediate structure, or may be an internal communication of two elements as would be understood by one of ordinary skill in the art in view of the particular circumstances.
In embodiments of the present disclosure, unless specified or limited otherwise, a structure in which a first feature is "on" or "under" a second feature may include an embodiment in which the first feature is in direct contact with the second feature, as well as an embodiment in which the first feature and the second feature are not in direct contact but are in contact via an additional feature formed between them. Furthermore, a first feature "on", "above", or "on top of" a second feature may include an embodiment in which the first feature is exactly or obliquely "on", "above", or "on top of" the second feature, or may simply mean that the first feature is at a higher elevation than the second feature; while a first feature "under", "below", or "at the bottom of" a second feature may include an embodiment in which the first feature is exactly or obliquely "under", "below", or "at the bottom of" the second feature, or may simply mean that the first feature is at a lower elevation than the second feature.
The above illustration provides various embodiments and examples to implement different structures of the present disclosure. To simplify the present disclosure, certain elements and arrangements are described above. However, these elements and arrangements are merely examples and are not intended to limit the present disclosure. Further, reference numerals and/or drawing letters may be repeated in the various examples of the disclosure. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations. In addition, the present disclosure provides examples of different processes and materials. However, those skilled in the art will appreciate that other processes and/or materials may be used.
Reference throughout this specification to "an embodiment," "some embodiments," "an example embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic associated with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above-identified phrases in various places throughout this specification are not necessarily all referring to the same embodiment or example of the disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in the flow diagrams or otherwise described herein can be understood as comprising one or more modules, segments, or portions of code that comprise executable instructions for implementing specific logical functions or steps in the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions can be implemented in an order other than that shown or discussed, including in substantially the same order or in an opposite order, as would be understood by those skilled in the art.
The logic and/or steps described elsewhere herein or shown in a flowchart, for example, a particular sequence of executable instructions for implementing a logic function, may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system including a processor, or another system that can fetch and execute the instructions from the instruction execution system, apparatus, or device. For the purposes of this description, a "computer-readable medium" can be any apparatus adapted to contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples of the computer-readable medium include, but are not limited to: an electronic connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM). Furthermore, the computer-readable medium may even be paper or another suitable medium upon which the program can be printed, since the paper or other medium may be optically scanned, then compiled, decoded, or otherwise processed in a suitable manner when the program needs to be obtained electronically, and then stored in a computer memory.
It should be understood that each part of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps or methods may be implemented by one or a combination of the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitably combined logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.
Those skilled in the art will appreciate that all or part of the steps in the above-described exemplary methods of the present disclosure may be implemented by instructing the relevant hardware with a program. The program may be stored in a computer-readable storage medium, and the program, when run on a computer, performs one or a combination of the steps of the method embodiments of the present disclosure.
Furthermore, each functional unit of embodiments of the present disclosure may be integrated in a processing module, or the units may be physically separated, or two or more units are integrated in a processing module. The integrated module can be implemented in hardware or in the form of a software functional module. When the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may be stored in a computer-readable storage medium.
The storage medium may be a read-only memory, a magnetic disk, a CD, or the like.
Although embodiments of the present disclosure have been shown and described, it will be understood by those skilled in the art that these embodiments are illustrative and not to be construed as limiting the present disclosure, and that changes, modifications, substitutions, and alterations may be made to the embodiments without departing from the scope of the disclosure.

Claims (20)

1. An electronic device, comprising:
an imaging module that takes a picture of a subject and acquires a camera image; and
a processor that controls the imaging module and acquires the camera image, wherein,
the processor performs a first acquisition process for acquiring a reference camera image, the reference camera image comprising an image of the subject,
the processor performs a detection process for detecting appearance information related to the appearance of the subject from the reference camera image,
the processor performs an analysis process for analyzing the detected appearance information of the subject and determines the content of a second acquisition process for acquiring a final camera image including the subject based on the analysis result, and
the processor performs a feedback process for feeding back the content of the second acquisition process determined by the analysis process to a user operating the electronic device.
2. The electronic device according to claim 1,
wherein the second acquisition process includes at least one of a framing process, a filtering process, and an image processing process,
wherein the framing process sets a recommended composition for photographing the subject to acquire the final camera image,
wherein the filtering process applies an image filter to the reference camera image or the camera image at the time of photographing to acquire the final camera image, and
wherein the image processing procedure applies image processing to the reference camera image or the camera image at the time of photographing to acquire the final camera image.
3. The electronic device of claim 1, wherein the processor feeds back the content of the second acquisition process to the user in the feedback process via a display output or a voice output.
4. The electronic device according to claim 1,
wherein, in the detection process, the processor detects article information about an article overlapping the subject from the reference camera image, and
Wherein in the analysis process, the processor analyzes the appearance information including the article information, and determines the content of the second acquisition process based on the analysis result.
5. The electronic device according to claim 4,
wherein, in the detection process, the processor also detects background information related to a background separated from the subject from the reference camera image, and
and in the analysis process, the processor analyzes the appearance information and the background information and determines the content of the second acquisition process based on the analysis result.
6. An electronic device according to claim 5,
wherein, in the detection process, the processor also detects additional information from outside the electronic device, and
wherein in the analysis process, the processor analyzes the appearance information and the additional information, and determines the content of the second acquisition process based on the analysis result.
7. The electronic device according to claim 1,
wherein, in the first acquisition process, the processor acquires user intention information in response to a user operation input, and
wherein, in the analysis process, the processor determines the content of the second acquisition process based on the analysis result and the user intention information.
8. The electronic device according to claim 1,
wherein the imaging module is designed to also acquire distance depth information by imaging the subject, and
wherein, in the first acquisition process, the processor acquires a three-dimensional shape of the clothing of the subject based on the distance-depth information.
9. The electronic device according to claim 1,
wherein, in the analysis process, the processor performs a process for analyzing the taste of the subject based on the detection result of the appearance information of the subject, and
wherein the processor performs a process of determining the content of the second acquisition process for acquiring the final camera image based on the analyzed taste of the subject.
10. The electronic device according to claim 1,
wherein, in the analysis process, the processor performs a process of determining the content of the second acquisition process for acquiring the final camera image from the detection result of the appearance information of the subject by using a reasoner trained in advance by machine learning.
11. The electronic device according to claim 1,
wherein, after the feedback process, the processor automatically captures the subject by controlling the imaging module based on the content of the second acquisition process to acquire the final camera image including the subject.
12. The electronic device according to claim 1,
wherein, after the feedback process, the processor acquires the final camera image including the subject by automatically performing image processing on the reference camera image based on the content of the second acquisition process.
13. The electronic device according to claim 1,
wherein, after the feedback process, the processor acquires the final camera image including the subject by controlling the imaging module to capture the subject based on the content of the second acquisition process in response to an operation input of the user.
14. The electronic device according to claim 1,
wherein, after the feedback process, the processor acquires the final camera image including the subject by performing image processing on the reference camera image based on the content of the second acquisition process in response to an operation input of the user.
15. The electronic device according to claim 1,
wherein the imaging module includes:
a first camera module that captures the subject and acquires a first camera image; and
a distance sensor module that acquires distance depth information by using light, and
wherein, in the first acquisition process, the processor acquires the reference camera image based on the first camera image and the distance depth information by controlling the first camera module and the distance sensor module.
16. The electronic device according to claim 1,
wherein the imaging module includes:
a first camera module that captures the subject and acquires a first camera image; and
a second camera module that captures the subject and acquires a second camera image, and
Wherein, in the first acquisition process, the processor acquires the reference camera image based on the first camera image and the second camera image by controlling the first camera module and the second camera module.
17. The electronic device according to claim 1,
wherein the imaging module comprises a first camera module that captures the subject and acquires a first camera image, and
Wherein, in the first acquisition process, the processor acquires the reference camera image based on the first camera image by controlling the first camera module.
18. The electronic device of claim 1, further comprising an input module that receives an operational input of the user.
19. A method of controlling an electronic device, the electronic device comprising: an imaging module that takes a picture of a subject and acquires a camera image; and a processor that controls the imaging module and acquires the camera image, the method comprising:
the processor performing a first acquisition process for acquiring a reference camera image that includes an image of the subject,
the processor performing a detection process for detecting appearance information related to the appearance of the subject from the reference camera image,
the processor performing an analysis process for analyzing the detected appearance information of the subject and determining, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject, and
the processor performing a feedback process for feeding back the content of the second acquisition process determined by the analysis process to a user operating the electronic device.
20. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements a method for controlling an electronic device, and the method comprises:
the processor performing a first acquisition process for acquiring a reference camera image that includes an image of a subject,
the processor performing a detection process for detecting appearance information related to the appearance of the subject from the reference camera image,
the processor performing an analysis process for analyzing the detected appearance information of the subject and determining, based on the analysis result, the content of a second acquisition process for acquiring a final camera image including the subject, and
the processor performing a feedback process for feeding back the content of the second acquisition process determined by the analysis process to a user operating the electronic device.
CN202080106203.2A 2020-10-13 2020-10-13 Electronic device, method of controlling electronic device, and computer-readable storage medium Pending CN116529779A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/120677 WO2022077229A1 (en) 2020-10-13 2020-10-13 Electric device, method of controlling electric device, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116529779A true CN116529779A (en) 2023-08-01

Family

ID=81207484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080106203.2A Pending CN116529779A (en) 2020-10-13 2020-10-13 Electronic device, method of controlling electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN116529779A (en)
WO (1) WO2022077229A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103220466B (en) * 2013-03-27 2016-08-24 华为终端有限公司 The output intent of picture and device
CN110348419B (en) * 2019-07-18 2023-03-24 三星电子(中国)研发中心 Method and device for photographing
CN110458117A (en) * 2019-08-14 2019-11-15 极智视觉科技(深圳)有限公司 A kind of portraiture photography posture recommended method
CN111614897B (en) * 2020-05-13 2021-08-10 南京邮电大学 Intelligent photographing method based on multi-dimensional driving of user preference

Also Published As

Publication number Publication date
WO2022077229A1 (en) 2022-04-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination