CN116868577A - Method for suggesting shooting position of electronic equipment and electronic equipment - Google Patents


Info

Publication number
CN116868577A
Authority
CN
China
Prior art keywords
camera
image
guidance
electronic device
plane
Prior art date
Legal status
Pending
Application number
CN202180094209.7A
Other languages
Chinese (zh)
Inventor
三浦照恭
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN116868577A

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

A method of suggesting a photographing position for an electronic device is disclosed. The electronic device has a camera and a display for displaying images captured by the camera in real time. The method includes acquiring an image captured by the camera, acquiring a depth image corresponding to the image, detecting a plurality of objects in the depth image and selecting a main object and a secondary object from the plurality of objects, determining a guiding direction for moving the electronic device based on object information related to the main object and the secondary object, and displaying, on the display together with the image, a guidance UI indicating the guiding direction.

Description

Method for suggesting shooting position of electronic equipment and electronic equipment
Technical Field
The present disclosure relates to a method for suggesting a photographing position of an electronic device, and an electronic device implementing the method.
Background
In recent years, images captured by electronic devices with cameras, such as smartphones, are often uploaded to social networking services (SNS) and the like. When photographing an object such as a dish on a table with a smartphone, a user typically wants to capture an image whose composition has a sense of distance or depth, and also to emphasize that sense of distance by applying background processing (e.g., background blurring) to the image.
However, it is difficult for a user who is unfamiliar with photography to determine an appropriate shooting position for capturing an image having a sense of distance. A smartphone might be expected to process an image captured by the user so as to convert it into an image having a sense of distance, but it is not easy to change the composition of an already-captured image into one having a sense of distance.
Disclosure of Invention
The present disclosure is directed to solving at least one of the above technical problems. Accordingly, the present disclosure provides a method of suggesting a photographing position for an electronic device, and an electronic device implementing such a method.
The present disclosure provides a method of suggesting a photographing position for an electronic device having a camera and a display for displaying images captured by the camera in real time. The method may include:
acquiring an image captured by a camera;
acquiring a depth image corresponding to the image;
detecting a plurality of objects in the depth image, and selecting a main object and a secondary object from the plurality of objects;
determining a guiding direction for moving the electronic device based on object information related to the main object and the secondary object; and
displaying, on the display together with the image, a guidance UI indicating the guiding direction.
In some embodiments, the guidance UI may be overlaid on the image captured by the camera.
In some embodiments, determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object may include:
determining whether the camera is sufficiently close to the main object; and
if the camera is not sufficiently close to the main object, determining, as the guiding direction, a direction along a Z-axis perpendicular to an X-Y plane that is parallel to the surface of the display.
In some embodiments, displaying the guidance UI indicating the guiding direction on the display together with the image may include: displaying, as the guidance UI, a message requesting the user of the electronic device to bring the camera closer to the main object.
In some embodiments, determining whether the camera is sufficiently close to the main object may include:
calculating a ratio between a size of the main object in the image and a size of the image; and
if the ratio is greater than a predetermined value, determining that the camera is sufficiently close to the main object.
In some embodiments, determining whether the camera is sufficiently close to the main object may include:
acquiring a distance between the camera and the main object from the depth image, wherein the depth image is acquired by a distance sensor module; and
if the distance is shorter than a predetermined distance, determining that the camera is sufficiently close to the main object.
In some embodiments, determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object may include:
determining whether the main object is located at a suitable position in an X-Y plane parallel to the surface of the display; and
if the main object is not located at a suitable position in the X-Y plane, determining, as the guiding direction, a direction in the X-Y plane that guides the camera to the suitable position.
In some embodiments, displaying the guidance UI indicating the guiding direction on the display together with the image may include: displaying, as the guidance UI, an arrow indicating the determined direction.
In some embodiments, determining whether the main object is located at a suitable position in the X-Y plane parallel to the surface of the display may include:
calculating a reference point of the main object; and
if the reference point is located within a predetermined area, determining that the main object is located at a suitable position in the X-Y plane.
In some embodiments, determining whether the main object is located at a suitable position in the X-Y plane parallel to the surface of the display may include:
calculating a first reference point of the main object and a second reference point of the secondary object; and
if the first reference point of the main object is located within a predetermined area and the second reference point of the secondary object is located on the opposite side of the main object, determining that the main object is located at a suitable position in the X-Y plane.
In some embodiments, the predetermined area may be an area centered on the lower third grid point.
In some embodiments, determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object may include:
determining whether the main object and the secondary object are properly arranged in a Z-axis direction perpendicular to an X-Y plane parallel to the surface of the display; and
if the main object and the secondary object are not properly arranged in the Z-axis direction, determining, as the guiding direction, a direction of rotation about the main object along the Y-axis that guides the camera to a position where the main object and the secondary object are properly arranged in the Z-axis direction.
In some embodiments, displaying the guidance UI indicating the guiding direction on the display together with the image may include: displaying, as the guidance UI, an icon that rotates about a virtual axis parallel to the Y-axis and passing through the main object.
In some embodiments, determining whether the main object and the secondary object are properly arranged in the Z-axis direction perpendicular to the X-Y plane (the X-Y plane being parallel to the surface of the display) may include:
calculating an angle in an X-Z plane between a first line connecting the main object and the secondary object and a second line parallel to the X-axis; and
if the angle is within a predetermined range, determining that the main object and the secondary object are properly arranged in the Z-axis direction.
In some embodiments, determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object may include:
determining whether a depression angle of the camera is appropriate; and
if the depression angle is not appropriate, determining, as the guiding direction, a direction of rotation about the main object along the X-axis that guides the camera to a position where the depression angle is appropriate.
In some embodiments, displaying the guidance UI indicating the guiding direction on the display together with the image may include: displaying, as the guidance UI, an icon that rotates about a virtual axis parallel to the X-axis and passing through the main object.
In some embodiments, determining whether the depression angle of the camera is appropriate may include:
calculating a distance along the Y-axis between the main object and the secondary object; and
if a ratio between the distance and a screen height of the display is within a predetermined range, determining that the depression angle of the camera is appropriate.
The present disclosure provides an electronic device. The electronic device may include: a processor and a memory for storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform a method according to the present disclosure.
The present disclosure also provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a computer, implements a method according to the present disclosure.
Drawings
These and/or other aspects and advantages of embodiments of the disclosure will become apparent and more readily appreciated from the following description, taken in conjunction with the accompanying drawings, wherein:
fig. 1 is a circuit diagram showing a configuration example of an electronic device according to an embodiment of the present disclosure;
fig. 2 shows a situation in which an electronic device is capturing images of a plurality of objects lying on a plane P;
FIG. 3 is a flowchart illustrating a method of suggesting a shooting location for an electronic device, according to an embodiment of the present disclosure;
fig. 4 shows an example of an RGB image (left image) and a depth image (right image) captured by an electronic device;
FIG. 5 is a flowchart illustrating in detail a method of detecting an object in a depth image;
FIG. 6 is a flowchart illustrating in detail a method of determining a guidance direction of a mobile electronic device and displaying a guidance UI;
FIG. 7 illustrates a guiding direction along the Z-axis near the main object;
FIG. 8 is an example of an image displayed on an electronic device, the image including an RGB image of an object and a guidance UI;
FIG. 9 shows the guiding direction in the X-Y plane;
FIG. 10 is a diagram illustrating a method of determining whether the main object is located at a suitable position in the X-Y plane;
FIG. 11 is an example of an image displayed on an electronic device, the image including an RGB image of an object and a guidance UI;
FIG. 12 shows a guiding direction along the Y-axis rotating around a main object;
fig. 13 is a diagram illustrating the angles θa and θt in the X-Z plane for determining whether the main object and the secondary object are properly arranged in the Z-axis direction;
FIG. 14 is an example of an image displayed on an electronic device, the image including an RGB image of an object and a guidance UI;
fig. 15 shows a guiding direction along the X-axis rotating around the main object;
fig. 16 is a diagram illustrating a method of determining whether the depression angle of the camera is appropriate; and
fig. 17 is an example of an image displayed on an electronic device, the image including an RGB image of an object and a guidance UI.
Detailed Description
Embodiments of the present disclosure will be described in detail, and examples of the embodiments will be illustrated in the accompanying drawings. Throughout the specification, identical or similar elements and elements having identical or similar functions are denoted by identical reference numerals. The embodiments described herein with reference to the drawings are illustrative and are intended to be illustrative of the disclosure, but should not be construed as limiting the disclosure.
< electronic device 100>
The electronic device 100 will be described with reference to fig. 1. Fig. 1 is a circuit diagram showing an example of a configuration of an electronic apparatus 100 according to an embodiment of the present disclosure.
In this embodiment, the electronic device 100 is a mobile device such as a smart phone or the like, but may be other types of electronic devices equipped with one or more camera modules.
As shown in fig. 1, the electronic device 100 includes a camera module 10, a distance sensor module 20, and an image signal processor 30 that controls the camera module 10 and the distance sensor module 20. The image signal processor 30 performs image processing on the images acquired by the camera module 10 and the depth images acquired from the distance sensor module 20.
As shown in fig. 1, the camera module 10 includes a main camera module 11 and a sub camera module 12 for binocular stereoscopic viewing (binocular stereo viewing).
As shown in fig. 1, the main camera module 11 includes a first lens 11a capable of focusing on an object, a first image sensor 11b detecting an image input via the first lens 11a, and a first image sensor driver 11c driving the first image sensor 11 b.
As shown in fig. 1, the sub-camera module 12 includes a second lens 12a capable of focusing on an object, a second image sensor 12b detecting an image input via the second lens 12a, and a second image sensor driver 12c driving the second image sensor 12 b.
The main camera module 11 captures a main camera image. Similarly, the sub camera module 12 captures a sub camera image. The main camera image and the sub camera image may be color images (e.g., RGB images) or monochrome images.
The distance sensor module 20 captures a depth image. Specifically, the distance sensor module 20 acquires time of flight (ToF) depth information by emitting pulsed light to an object and detecting reflected light from the object. The ToF depth information indicates an actual distance between the electronic device 100 and the object.
The image signal processor 30 controls the main camera module 11, the sub camera module 12, and the distance sensor module 20 to capture images. Based on the main camera image, the sub camera image, and the ToF depth information, an image in which the foreground is separated from the background (e.g., for background blurring) can be obtained.
Further, as shown in fig. 1, the electronic device 100 includes a global navigation satellite system (global navigation satellite system, GNSS) module 40, a wireless communication module 41, a CODEC (CODEC) 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (inertial measurement unit, IMU) 47, a main processor 48, and a memory 49.
The GNSS module 40 measures the current position of the electronic device 100. The wireless communication module 41 performs wireless communication with the internet. The CODEC 42 bidirectionally performs encoding and decoding using a predetermined encoding/decoding method. The speaker 43 outputs sound based on the sound data decoded by the CODEC 42. The microphone 44 outputs sound data to the CODEC 42 based on the input sound.
The display module 45 displays various information such as a main camera image so that a user can check the main camera image. The display module 45 displays images captured by the camera module 10 in real time.
The input module 46 receives information input through user operations. For example, the input module 46 receives an instruction to capture and store the image displayed on the display module 45.
The IMU 47 detects the angular velocity and acceleration of the electronic device 100. The posture of the electronic device 100 can be determined from the measurement results of the IMU 47. The display module 45 may display images according to the posture of the electronic device 100.
The main processor 48 controls a Global Navigation Satellite System (GNSS) module 40, a wireless communication module 41, a CODEC 42, a speaker 43, a microphone 44, a display module 45, an input module 46, and an IMU 47.
The memory 49 stores data of an image captured by the camera module 10, data of a depth image captured by the distance sensor module 20, and a program running on the image signal processor 30 and/or the main processor 48.
Regarding the configuration of the electronic device 100, one of the main camera module 11 and the sub camera module 12 may be omitted, and the distance sensor module 20 may also be omitted. In other words, a binocular camera module 10 is not essential to the present disclosure, and neither is the distance sensor module 20.
Herein, the XYZ coordinate system used by the electronic device 100 will be described with reference to fig. 2. Fig. 2 shows a case where the electronic device 100 is capturing images of a first object S1 and a second object S2. The objects S1 and S2 lie on a plane P, such as a table surface or a floor surface.
The X-axis is the horizontal axis in the preview image displayed on the display module 45, and the Y-axis is the vertical axis in the preview image. The X-Y plane is parallel to the display of the display module 45. The Z-axis is orthogonal to both the X-axis and the Y-axis and corresponds to the depth direction. The origin of the XYZ coordinate system may be located at the camera module 10. The XYZ coordinate system may be updated according to the pose of the electronic device 100 detected by the IMU 47.
< method of suggesting shooting position >
A method of suggesting a photographing position for a user of the electronic device 100 attempting to capture images of the first object S1 and the second object S2 will be described. Fig. 3 is a flow chart illustrating a method according to an embodiment of the present disclosure.
In step S1, the image signal processor 30 acquires an image captured by the camera module 10. As shown in fig. 4, an image I1 is acquired in this step. The first object S1 and the second object S2 lie on a plane P.
An RGB image may be acquired as an image. The image may be captured using the primary camera module 11 or the secondary camera module 12.
In step S2, the image signal processor 30 acquires a depth image captured by the distance sensor module 20. As shown in fig. 4, a depth image I2 is acquired in this step. The depth image I2 corresponds to the image I1 acquired in step S1. Similar to the image I1, the first object S1 and the second object S2 are on a plane P in the depth image I2.
Alternatively, when the electronic device 100 is not provided with the distance sensor module 20, the depth image may be acquired by performing stereo processing on the main camera image and the sub camera image.
Alternatively, when the camera module 10 has only one of the main camera module 11 and the sub camera module 12, or when the electronic device 100 is not provided with the distance sensor module 20, the depth image may be acquired by moving a single camera and performing motion-stereo processing. Visual simultaneous localization and mapping (SLAM) can be used as such a motion-stereo technique.
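By way of illustration, a depth image of the kind acquired in step S2 could be estimated from the main camera image and the sub camera image with off-the-shelf block-matching stereo. The following sketch is illustrative only; the focal length, baseline, and matcher parameters are assumed placeholder values, and the two input frames are assumed to be rectified 8-bit grayscale images.

```python
import cv2
import numpy as np

def depth_from_stereo(main_gray: np.ndarray, sub_gray: np.ndarray,
                      focal_px: float = 1000.0, baseline_m: float = 0.012) -> np.ndarray:
    # Block-matching stereo: disparity is returned in fixed-point (scaled by 16).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(main_gray, sub_gray).astype(np.float32) / 16.0

    # Convert disparity to metric depth: Z = f * B / d (invalid where d <= 0).
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth  # metres, 0 where depth is unknown

# Example usage with two rectified grayscale frames:
# depth = depth_from_stereo(cv2.imread("main.png", cv2.IMREAD_GRAYSCALE),
#                           cv2.imread("sub.png", cv2.IMREAD_GRAYSCALE))
```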
In step S3, the image signal processor 30 detects a plurality of objects in the depth image I2 acquired in step S2, and selects a main object and a secondary object from the plurality of objects. Details of this step will be described with reference to fig. 5. Fig. 5 is a flowchart showing an example of step S3.
In step S31, the image signal processor 30 detects a plane in which the object is located. For example, in this step, a random sample consensus (RANSAC) method may be used to detect the plane.
In step S32, the image signal processor 30 determines whether a plane has been detected. If a plane has been detected, step S33 is performed; otherwise, the detection flow ends. If the detected surface is not sufficiently flat, the detection flow also ends.
In step S33, the image signal processor 30 excludes the plane P and the background B.
In step S34, the image signal processor 30 performs a clustering process to group small objects together. For example, density-based spatial clustering of applications with noise (DBSCAN) may be used for the clustering process.
In step S35, the image signal processor 30 ignores small objects to remove noise. Information about the position and size of each remaining object in the depth image is thereby acquired.
In step S36, the image signal processor 30 selects a main object and a secondary object from the objects remaining after step S35. For example, when the electronic device 100 is provided with the distance sensor module 20, the processor 30 selects the object closest to the electronic device 100 as the main object and the object second closest to the electronic device 100 as the secondary object. In this example, the first object S1 is selected as the main object and the second object S2 is selected as the secondary object.
Alternatively, the processor 30 may select the main object and the secondary object based on their sizes.
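For illustration, steps S31 to S36 could be realized along the following lines: fit the supporting plane with a simple RANSAC loop, drop the plane (and distant background) points, cluster the rest with DBSCAN, discard tiny clusters, and take the nearest and second-nearest clusters as the main and secondary objects. All thresholds below are assumed values, not values specified by the present disclosure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fit_plane_ransac(points, iters=200, tol=0.01, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distance
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

def select_main_and_secondary(points_xyz):
    # points_xyz: (N, 3) camera-frame points back-projected from the depth image.
    on_plane = fit_plane_ransac(points_xyz)                  # steps S31/S32
    remaining = points_xyz[~on_plane]                        # step S33: exclude plane P
    remaining = remaining[remaining[:, 2] < 2.0]             # crude background cut (assumed 2 m limit)
    labels = DBSCAN(eps=0.02, min_samples=10).fit_predict(remaining)   # step S34
    clusters = [remaining[labels == k] for k in set(labels) if k != -1]
    clusters = [c for c in clusters if len(c) >= 50]         # step S35: ignore small objects
    # Step S36: order clusters by distance from the camera (origin of the camera frame).
    clusters.sort(key=lambda c: np.linalg.norm(c.mean(axis=0)))
    main_obj = clusters[0] if clusters else None
    secondary_obj = clusters[1] if len(clusters) > 1 else None
    return main_obj, secondary_obj
```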
Returning to the flowchart of fig. 3, step S4 following step S3 will be described.
In step S4, the image signal processor 30 determines a guiding direction for moving the electronic device 100 based on the object information. The object information is information related to the main object and the secondary object selected in step S3. Specifically, the object information indicates at least one of: the position of the selected object(s), the size of the selected object(s), and the distance between the selected object(s) and the camera module 10.
In step S5, the display module 45 displays, together with the image, the guidance UI indicating the guiding direction determined in step S4.
Details of step S4 and step S5 will be described with reference to fig. 6. Fig. 6 shows a flowchart that is performed each time the camera module 10 captures an image.
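The per-frame flow of fig. 6 can be summarized by the following sketch, in which the checks of steps S41 to S44 are evaluated in order and the first failing check selects the guidance UI of steps S51 to S54 (step S55 otherwise). The check functions are placeholders for the criteria detailed in the following paragraphs; the structure shown is illustrative, not a literal implementation.

```python
from typing import Callable

def decide_guidance(is_close_enough: Callable[[], bool],
                    is_positioned_in_xy: Callable[[], bool],
                    is_arranged_in_z: Callable[[], bool],
                    is_depression_ok: Callable[[], bool]) -> str:
    if not is_close_enough():        # step S41 failed -> step S51
        return "UI1: move closer to the main object (Z-axis)"
    if not is_positioned_in_xy():    # step S42 failed -> step S52
        return "UI2: arrow, shift the camera in the X-Y plane"
    if not is_arranged_in_z():       # step S43 failed -> step S53
        return "UI3: rotate around the main object about the Y-axis"
    if not is_depression_ok():       # step S44 failed -> step S54
        return "UI4: rotate around the main object about the X-axis"
    return "UI5: ready to shoot"     # step S55
```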
In step S41, the image signal processor 30 determines whether the camera module 10 is sufficiently close to the main object S1. If it determines that the camera module 10 is not sufficiently close to the main object S1, the image signal processor 30 determines, as the guiding direction (GD), a direction along the Z-axis toward the main object S1 that guides the camera module 10 closer to the main object S1, as shown in fig. 7.
To determine whether the camera module 10 is sufficiently close to the main object S1, the image signal processor 30 may calculate the ratio of the size of the main object S1 in the image to the overall size of the image. The size of the main object S1 may be the area of a shape (e.g., a rectangular frame) surrounding the main object S1. If the ratio is greater than a predetermined value (e.g., 14%), the image signal processor 30 determines that the camera module 10 is sufficiently close to the main object S1.
Alternatively, the image signal processor 30 may acquire the distance between the camera module 10 and the main object S1 from the depth image obtained by the distance sensor module 20. If the distance is shorter than a predetermined distance (e.g., 30 cm), the image signal processor 30 determines that the camera module 10 is sufficiently close to the main object S1.
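For illustration, the two proximity criteria of step S41 could be coded as follows; the 14% area ratio and 30 cm threshold repeat the example values above, and the bounding-box input format is an assumption.

```python
def close_by_area(bbox_w: int, bbox_h: int, img_w: int, img_h: int,
                  min_ratio: float = 0.14) -> bool:
    # Ratio of the main object's bounding-box area to the whole image area.
    return (bbox_w * bbox_h) / float(img_w * img_h) > min_ratio

def close_by_depth(main_object_distance_m: float, max_distance_m: float = 0.30) -> bool:
    # Distance to the main object taken from the ToF depth image.
    return main_object_distance_m < max_distance_m
```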
If it is determined that the camera module 10 is not sufficiently close to the main subject S1, the process advances to step S51.
In step S51, the electronic device 100 displays, as the guidance UI, a message requesting the user of the electronic device 100 to move the camera module 10 closer to the main object S1. For example, a guidance UI1 displaying a message such as "Get closer!" is shown. The guidance UI1 may be overlaid on the real-time image displayed on the display module 45. The guidance UI1 makes it easy for the user to know that he/she should move the electronic device 100 in the Z-axis direction. The user follows the guidance UI1 to move the electronic device 100 (camera module 10) closer to the main object.
Alternatively, the guidance UI1 may be a voice message.
Otherwise (i.e., when the camera module 10 is sufficiently close to the main object S1), the process advances to step S42.
In step S42, the image signal processor 30 determines whether the main object S1 is located at a suitable position in the X-Y plane of the image captured by the camera module 10. If it determines that the main object S1 is not located at a suitable position in the X-Y plane, the image signal processor 30 determines, as the guiding direction GD, a direction in the X-Y plane that guides the camera module 10 to the suitable position, as shown in fig. 9.
Fig. 10 is a diagram for explaining a method of determining whether the main object is located at a suitable position in the X-Y plane. In this example, the RGB image is virtually divided into nine equal parts by two horizontal lines HL1, HL2 and two vertical lines VL1, VL2. The intersection of the horizontal line HL1 and the vertical line VL1 is the target point T, which is the lower third grid point. The shooting position is suggested such that a reference point of the main object is located at the target point T or within a region including the target point T.
For example, the image signal processor 30 calculates the center of gravity C1 of the main object S1 as the reference point, and determines that the main object S1 is located at a suitable position in the X-Y plane if the center of gravity C1 is located within a region centered on the target point T.
Alternatively, the image signal processor 30 may calculate a first reference point of the main object S1 and a second reference point of the secondary object S2. The first reference point may be the center of gravity C1 of the main object S1, and the second reference point may be the center of gravity C2 of the secondary object S2. If the first reference point (e.g., the center of gravity C1) of the main object S1 is located within a predetermined region including the target point T and the second reference point (e.g., the center of gravity C2) of the secondary object S2 is located on the opposite side of the main object, the image signal processor 30 determines that the main object S1 is located at a suitable position in the X-Y plane. In other words, as shown in fig. 10, the center of gravity C2 is located on a diagonal line of the image that passes through the main object S1.
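A possible sketch of the step S42 check follows. It assumes that the target point T is the lower-left third grid point and that "opposite side" is tested with respect to the image centre along the diagonal through T; the tolerance radius is likewise an assumed value.

```python
import math

def is_main_object_well_placed(c1_xy, c2_xy, img_w, img_h, radius_ratio=0.1):
    # Target point T: intersection of the lower horizontal third-line HL1 and the
    # left vertical third-line VL1 (assumed lower-left "rule of thirds" point).
    target = (img_w / 3.0, 2.0 * img_h / 3.0)     # image y grows downward
    radius = radius_ratio * min(img_w, img_h)     # assumed size of the allowed region
    main_ok = math.dist(c1_xy, target) <= radius

    # Secondary object "on the opposite side": its centroid C2 should lie on the
    # other side of the image centre along the diagonal passing through T.
    centre = (img_w / 2.0, img_h / 2.0)
    opposite = (c2_xy[0] - centre[0]) * (target[0] - centre[0]) < 0 and \
               (c2_xy[1] - centre[1]) * (target[1] - centre[1]) < 0
    return main_ok and opposite
```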
If it is determined that the main object S1 is not located at an appropriate position in the X-Y plane, the process proceeds to step S52.
In step S52, the electronic device 100 displays, as the guidance UI, an arrow indicating the determined direction. For example, as shown in fig. 11, the displayed guidance UI2 is a cell-phone icon with an arrow. The guidance UI2 may be overlaid on the real-time image displayed on the display module 45. The guidance UI2 makes it easy for the user to know the direction in which he/she should move the electronic device 100 in the X-Y plane. The user moves the electronic device 100 (camera module 10) in the X-Y plane following the guidance UI2 so that the main object is located toward a corner of the screen.
Alternatively, the guidance UI2 may be a voice message.
Alternatively, a frame M1 indicating the main object S1 may be displayed as shown in fig. 11 so that the user of the electronic device 100 easily recognizes the main object.
Otherwise (i.e., in the case where the main object S1 is located at an appropriate position in the X-Y plane), the process advances to step S43.
In step S43, the image signal processor 30 determines whether the main object S1 and the secondary object S2 are properly arranged in the Z-axis direction. If it determines that they are not properly arranged in the Z-axis direction, the image signal processor 30 determines, as the guiding direction, a direction of rotation about the main object S1 along the Y-axis that guides the camera module 10 to a position where the main object S1 and the secondary object S2 are properly arranged in the Z-axis direction, as shown in fig. 12.
In the present disclosure, whether the main object S1 and the secondary object S2 are properly arranged in the Z-axis direction is determined based on an angle θa defined by the positional relationship between the main object S1 and the secondary object S2. Fig. 13 is a diagram for explaining the angle θa and a target angle θt in the X-Z plane. The angle θa is the angle between a line L1 and a line L2 parallel to the X-axis, where the line L1 connects the main object S1 and the secondary object S2 (for example, the line connecting the center of gravity C1 and the center of gravity C2). The target angle θt defines the acceptable range; for example, the target angle θt is 45°.
More specifically, the image signal processor 30 first calculates the angle θa and then, if the angle θa is within a predetermined range defined by the target angle θt, determines that the main object S1 and the secondary object S2 are properly arranged in the Z-axis direction. The predetermined range is a range centered on the target angle θt (for example, θt ± 10°).
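For illustration, the step S43 criterion could be evaluated as follows, with the centroids expressed in camera-frame X-Z coordinates and the example values θt = 45° and ±10° used as defaults.

```python
import math

def is_arranged_along_depth(c1_xz, c2_xz, target_deg=45.0, tol_deg=10.0):
    # c1_xz, c2_xz: (x, z) coordinates of the two centroids in the camera frame.
    dx = c2_xz[0] - c1_xz[0]
    dz = c2_xz[1] - c1_xz[1]
    theta_a = abs(math.degrees(math.atan2(dz, dx)))  # angle of line L1 against the X-axis
    if theta_a > 90.0:
        theta_a = 180.0 - theta_a                    # line-to-axis angle is at most 90°
    return abs(theta_a - target_deg) <= tol_deg
```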
If it is determined that the main object S1 and the secondary object S2 are not properly arranged in the Z-axis direction, the process proceeds to step S53.
In step S53, the electronic device 100 displays, as the guidance UI, an icon that rotates about a virtual axis parallel to the Y-axis and passing through the main object S1. For example, as shown in fig. 14, the displayed guidance UI3 is an animated cell-phone icon rotating around the virtual axis. The guidance UI3 may be overlaid on the real-time image displayed on the display module 45. The guidance UI3 makes it easy for the user to know the rotational direction in which he/she should move the electronic device 100 in the X-Z plane. The user moves the electronic device 100 (camera module 10) in the X-Z plane following the guidance UI3 so that the main object and the secondary object are arranged diagonally.
Alternatively, the guidance UI3 may be a voice message.
Alternatively, as shown in fig. 14, a frame M1 indicating the main object S1 may be displayed so that the user of the electronic device 100 easily recognizes the main object.
Otherwise (i.e., in the case where the main object S1 and the secondary object S2 are properly arranged in the Z-axis direction), the process proceeds to step S44.
In step S44, the image signal processor 30 determines whether the depression angle of the camera module 10 is appropriate. If it determines that the depression angle is not appropriate, the image signal processor 30 determines, as the guiding direction GD, a direction of rotation about the main object S1 along the X-axis that guides the camera module 10 to a position where the depression angle is appropriate, as shown in fig. 15.
In the present disclosure, whether the depression angle is appropriate is determined based on the distance Dh along the Y-axis between the main object S1 and the secondary object S2. Fig. 16 is a diagram for explaining a method of determining whether the depression angle is appropriate. In this example, the distance Dh is the vertical distance between the center of gravity C1 and the center of gravity C2. If the ratio of the distance Dh to the screen height H of the display is within a predetermined range (e.g., 20% ± 10%), the depression angle of the camera module 10 is determined to be appropriate.
More specifically, the image signal processor 30 first calculates the distance Dh and then, if the ratio is within the predetermined range, determines that the depression angle of the camera module 10 is appropriate.
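A minimal sketch of the step S44 criterion, using the example range of 20% ± 10% for the ratio Dh/H:

```python
def is_depression_angle_ok(c1_y_px: float, c2_y_px: float, screen_h_px: float,
                           target: float = 0.20, tol: float = 0.10) -> bool:
    dh = abs(c1_y_px - c2_y_px)            # vertical distance Dh along the Y-axis
    return abs(dh / screen_h_px - target) <= tol
```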
If it is determined that the depression angle of the camera module 10 is not appropriate, the process advances to step S54.
In step S54, the electronic device 100 displays, as the guidance UI, an icon that rotates about a virtual axis parallel to the X-axis, the virtual axis passing through the main object S1. For example, as shown in fig. 16, the displayed guidance UI4 is an animated cell-phone icon rotating around the virtual axis. The guidance UI4 may be overlaid on the real-time image displayed on the display module 45. The guidance UI4 makes it easy for the user to know that he/she should rotate the electronic device 100 around the virtual axis. Following the guidance UI4, the user can easily move the electronic device 100 (camera module 10) around the virtual axis so that an appropriate depression angle is obtained.
Alternatively, the guidance UI4 may be a voice message.
Alternatively, as shown in fig. 16, a frame M1 indicating the main object S1 may be displayed so that the user of the electronic device 100 easily recognizes the main object.
Otherwise (i.e., in the case where the depression angle of the camera module 10 is appropriate), the process advances to step S55.
In step S55, the electronic device 100 displays, as the guidance UI, a message informing the user that the electronic device 100 is in a good position to photograph the main object S1 and the secondary object S2. For example, as shown in fig. 17, the guidance UI5 "Ready" is displayed superimposed on the real-time image.
Alternatively, the guidance UI5 may be a voice message.
The main processor 48 may perform at least one of the steps described above.
According to the above disclosure, a shooting position that takes into account the positional relationship among the main object S1, the secondary object S2, and the camera module 10 can be suggested to the user of the electronic device 100, so that he/she can easily capture an image whose composition has a sense of distance. Therefore, even a user without special skill or experience in photography can capture an image having a sense of distance between the objects. Further, the electronic device 100 can enhance the sense of distance by applying foreground-background separation processing (e.g., background blurring) to the image.
In describing embodiments of the present disclosure, it should be understood that terms such as "central," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "front," "rear," "back," "left," "right," "vertical," "horizontal," "top," "bottom," "interior," "exterior," "clockwise," and "counterclockwise" should be interpreted as referring to the directions or locations depicted or shown in the drawings at the time of discussion. These related terms are only used to simplify the description of the present disclosure and do not indicate or imply that the devices or elements referred to must have a particular orientation or must be constructed or operated in a particular orientation. Accordingly, these terms should not be construed as limiting the present disclosure.
Furthermore, terms such as "first" and "second" are used herein for descriptive purposes and are not intended to indicate or imply relative importance or significance or the number of technical features indicated. Thus, features defined as "first" and "second" may include one or more of the features. In the specification of the present disclosure, unless otherwise specified, "a plurality" means "two or more than two".
In describing embodiments of the present disclosure, unless otherwise indicated or limited, terms "mounted," "connected," "coupled," and the like are used broadly and may be, for example, fixed, removable, or integral, or may be a mechanical or electrical connection, or may be a direct or indirect connection via an intermediate structure, or may be an internal communication of two elements as would be understood by one of ordinary skill in the art in view of the particular circumstances.
In embodiments of the present disclosure, unless specified or limited otherwise, a structure in which a first feature is "on" or "under" a second feature may include an embodiment in which the first feature is in direct contact with the second feature, as well as an embodiment in which the first feature and the second feature are not in direct contact with each other but are in contact via an additional feature formed between them. Furthermore, a first feature "on", "above", or "on top of" a second feature may include an embodiment in which the first feature is directly or obliquely "on", "above", or "on top of" the second feature, or may simply mean that the first feature is at a height greater than that of the second feature; while a first feature "under", "below", or "beneath" a second feature may include an embodiment in which the first feature is directly or obliquely "under", "below", or "beneath" the second feature, or may simply mean that the first feature is at a height lower than that of the second feature.
The above illustration provides various embodiments and examples to implement different structures of the present disclosure. To simplify the present disclosure, certain elements and arrangements are described above. However, these elements and arrangements are merely examples and are not intended to limit the present disclosure. Further, reference numerals and/or drawing letters may be repeated in the various examples of the disclosure. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations. In addition, the present disclosure provides examples of different processes and materials. However, those skilled in the art will appreciate that other processes and/or materials may be used.
Reference throughout this specification to "an embodiment," "some embodiments," "an example embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above-identified phrases in various places throughout this specification are not necessarily all referring to the same embodiment or example of the disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in the flow diagrams or otherwise described herein can be understood as comprising one or more modules, segments, or portions of code that comprise executable instructions for implementing specific logical functions or steps in the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions can be implemented in an order other than that shown or discussed, including in substantially the same order or in an opposite order, as would be understood by those skilled in the art.
Logic and/or steps (e.g., a particular sequence of executable instructions for performing a logical function) described elsewhere herein or shown in a flowchart may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples of the computer-readable medium include, but are not limited to: an electronic connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Furthermore, the computer-readable medium may even be paper or another suitable medium upon which the program is printed, since the paper or other medium may be optically scanned, then compiled, decoded, or otherwise processed in a suitable manner, and the program then stored in a computer memory, for example, when the program needs to be obtained electronically.
It should be understood that each portion of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, the steps or methods may be implemented by one or a combination of the following techniques known in the art, similar to another embodiment: discrete logic circuits having logic gates for implementing data signal logic functions, application specific integrated circuits having logic gates in appropriate combinations, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those skilled in the art will appreciate that all or part of the steps in the above exemplary methods of the present disclosure may be implemented by instructing related hardware with a program. The program may be stored in a computer-readable storage medium, and when run on a computer, the program performs one or a combination of the steps of the method embodiments of the present disclosure.
Furthermore, each functional unit of embodiments of the present disclosure may be integrated in a processing module, or the units may be physically separated, or two or more units are integrated in a processing module. The integrated module can be implemented in hardware or in the form of a software functional module. When the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, the integrated module may be stored in a computer-readable storage medium.
The storage medium may be a read-only memory, a magnetic disk, a CD, or the like.
Although embodiments of the present disclosure have been shown and described, it will be understood by those skilled in the art that these embodiments are illustrative and not to be construed as limiting the present disclosure, and that changes, modifications, substitutions, and alterations may be made to the embodiments without departing from the scope of the disclosure.

Claims (19)

1. A method of suggesting a shooting position for an electronic device having a camera and a display for displaying images captured by the camera in real time, the method comprising:
acquiring an image captured by the camera;
acquiring a depth image corresponding to the image;
detecting a plurality of objects in the depth image, and selecting a main object and a secondary object from the plurality of objects;
determining a guiding direction for moving the electronic device based on object information related to the main object and the secondary object; and
displaying, on the display together with the image, a guidance user interface (UI) indicating the guiding direction.
2. The method of claim 1, wherein the guidance UI is overlaid on the image captured by the camera.
3. The method of claim 1 or 2, wherein determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object comprises:
determining whether the camera is sufficiently close to the main object; and
if the camera is not sufficiently close to the main object, determining, as the guiding direction, a direction along a Z-axis perpendicular to an X-Y plane that is parallel to a surface of the display.
4. The method of claim 3, wherein displaying the guidance UI indicating the guiding direction on the display together with the image comprises: displaying, as the guidance UI, a message requesting a user of the electronic device to bring the camera closer to the main object.
5. The method of claim 3 or 4, wherein determining whether the camera is sufficiently close to the main object comprises:
calculating a ratio between a size of the main object in the image and a size of the image; and
if the ratio is greater than a predetermined value, determining that the camera is sufficiently close to the main object.
6. The method of claim 3 or 4, wherein determining whether the camera is sufficiently close to the main object comprises:
acquiring a distance between the camera and the main object from the depth image, wherein the depth image is acquired by a distance sensor module; and
if the distance is shorter than a predetermined distance, determining that the camera is sufficiently close to the main object.
7. The method of any of claims 1-6, wherein determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object comprises:
determining whether the main object is located at a suitable position in an X-Y plane parallel to a surface of the display; and
if the main object is not located at a suitable position in the X-Y plane, determining, as the guiding direction, a direction in the X-Y plane that guides the camera to the suitable position.
8. The method of claim 7, wherein displaying the guidance UI indicating the guiding direction on the display together with the image comprises: displaying, as the guidance UI, an arrow indicating the determined direction.
9. The method of claim 7 or 8, wherein determining whether the main object is located at a suitable position in the X-Y plane parallel to the surface of the display comprises:
calculating a reference point of the main object; and
if the reference point is located within a predetermined area, determining that the main object is located at a suitable position in the X-Y plane.
10. The method of claim 7 or 8, wherein determining whether the main object is located at a suitable position in the X-Y plane parallel to the surface of the display comprises:
calculating a first reference point of the main object and a second reference point of the secondary object; and
if the first reference point of the main object is located within a predetermined area and the second reference point of the secondary object is located on an opposite side of the main object, determining that the main object is located at a suitable position in the X-Y plane.
11. The method according to claim 9 or 10, wherein the predetermined area is an area centered on the lower third grid point.
12. The method of any of claims 1-11, wherein determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object comprises:
determining whether the main object and the secondary object are properly arranged in a Z-axis direction perpendicular to an X-Y plane, wherein the X-Y plane is parallel to a surface of the display; and
if the main object and the secondary object are not properly arranged in the Z-axis direction, determining, as the guiding direction, a direction of rotation about the main object along a Y-axis that guides the camera to a position where the main object and the secondary object are properly arranged in the Z-axis direction.
13. The method of claim 12, wherein displaying the guidance UI indicating the guiding direction on the display together with the image comprises: displaying, as the guidance UI, an icon that rotates about a virtual axis parallel to the Y-axis and passing through the main object.
14. The method of claim 12 or 13, wherein determining whether the main object and the secondary object are properly arranged in the Z-axis direction perpendicular to the X-Y plane, the X-Y plane being parallel to the surface of the display, comprises:
calculating an angle in an X-Z plane between a first line connecting the main object and the secondary object and a second line parallel to the X-axis; and
if the angle is within a predetermined range, determining that the main object and the secondary object are properly arranged in the Z-axis direction.
15. The method of any of claims 1-14, wherein determining the guiding direction for moving the electronic device based on the object information related to the main object and the secondary object comprises:
determining whether a depression angle of the camera is appropriate; and
if the depression angle is not appropriate, determining, as the guiding direction, a direction of rotation about the main object along an X-axis that guides the camera to a position where the depression angle is appropriate.
16. The method of claim 15, wherein displaying the guidance UI indicating the guiding direction on the display together with the image comprises: displaying, as the guidance UI, an icon that rotates about a virtual axis parallel to the X-axis and passing through the main object.
17. The method of claim 15 or 16, wherein determining whether the depression angle of the camera is appropriate comprises:
calculating a distance along a Y-axis between the main object and the secondary object; and
if a ratio between the distance and a screen height of the display is within a predetermined range, determining that the depression angle of the camera is appropriate.
18. An electronic device comprising a processor and a memory for storing instructions, wherein the instructions, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 17.
19. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a computer, implements the method of any one of claims 1 to 17.
Application CN202180094209.7A — Priority date: 2021-02-20 — Filing date: 2021-02-20 — Method for suggesting shooting position of electronic equipment and electronic equipment — Status: Pending — Publication: CN116868577A (en)

Applications Claiming Priority (1)

Application Number: PCT/CN2021/077127 (published as WO2022174432A1) — Priority Date: 2021-02-20 — Filing Date: 2021-02-20 — Title: Method of suggesting shooting position for electronic device and electronic device

Publications (1)

Publication Number: CN116868577A (en)

Family

Family ID: 82931966

Family Applications (1)

Application Number: CN202180094209.7A — Title: Method for suggesting shooting position of electronic equipment and electronic equipment — Status: Pending — Priority Date: 2021-02-20 — Filing Date: 2021-02-20

Country Status (2)

Country Link
CN (1) CN116868577A (en)
WO (1) WO2022174432A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011223250A (en) * 2010-04-08 2011-11-04 Nec Corp Photographing assisting apparatus, photographing assisting method, and program used therewith
CN107169148B (en) * 2017-06-21 2020-05-15 北京百度网讯科技有限公司 Image searching method, device, equipment and storage medium
CN107509032A (en) * 2017-09-08 2017-12-22 维沃移动通信有限公司 One kind is taken pictures reminding method and mobile terminal
CN108650431B (en) * 2018-05-14 2021-01-15 联想(北京)有限公司 Shooting control method and device and electronic equipment
CN108810409A (en) * 2018-06-07 2018-11-13 宇龙计算机通信科技(深圳)有限公司 One kind is taken pictures guidance method, terminal device and computer-readable medium
EP3923065A4 (en) * 2019-02-07 2022-04-06 FUJIFILM Corporation Photographing system, photographing spot setting device, photographing device, and photographing method
CN109889730B (en) * 2019-04-04 2020-12-18 中科创达软件股份有限公司 Prompting method and device for adjusting shooting angle and electronic equipment

Also Published As

Publication number Publication date
WO2022174432A1 (en) 2022-08-25

Similar Documents

Publication Publication Date Title
US10217288B2 (en) Method for representing points of interest in a view of a real environment on a mobile device and mobile device therefor
US9661214B2 (en) Depth determination using camera focus
CN110148178B (en) Camera positioning method, device, terminal and storage medium
EP2672459A1 (en) Apparatus and method for providing augmented reality information using three dimension map
US20210158560A1 (en) Method and device for obtaining localization information and storage medium
US9683833B2 (en) Surveying apparatus having a range camera
WO2020042968A1 (en) Method for acquiring object information, device, and storage medium
CN113052919A (en) Calibration method and device of visual sensor, electronic equipment and storage medium
KR101996241B1 (en) Device and method for providing 3d map representing positon of interest in real time
KR101308184B1 (en) Augmented reality apparatus and method of windows form
CN110738185A (en) Form object identification method and device and storage medium
CN112308103A (en) Method and device for generating training sample
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
CN114089836B (en) Labeling method, terminal, server and storage medium
CN116868577A (en) Method for suggesting shooting position of electronic equipment and electronic equipment
CN110990728A (en) Method, device and equipment for managing point of interest information and storage medium
CN111754564A (en) Video display method, device, equipment and storage medium
CN111832338A (en) Object detection method and device, electronic equipment and storage medium
CN113378705B (en) Lane line detection method, device, equipment and storage medium
CN116998159A (en) Method for suggesting shooting position of electronic equipment and electronic equipment
WO2022110777A1 (en) Positioning method and apparatus, electronic device, storage medium, computer program product, and computer program
CN109559382A (en) Intelligent guide method, apparatus, terminal and medium
CN113066134A (en) Calibration method and device of visual sensor, electronic equipment and storage medium
KR102084161B1 (en) Electro device for correcting image and method for controlling thereof
CN112950535A (en) Video processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination