CN116686284A - Electronic device, method of controlling electronic device, and computer-readable storage medium - Google Patents

Electronic device, method of controlling electronic device, and computer-readable storage medium Download PDF

Info

Publication number
CN116686284A
Authority
CN
China
Prior art keywords
camera image
image
new frame
processor
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180084542.XA
Other languages
Chinese (zh)
Inventor
宮内将斗
罗俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of CN116686284A publication Critical patent/CN116686284A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226Determination of depth image, e.g. for foreground/background separation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

An electronic device according to an embodiment of the present application comprises: an imaging module for capturing a subject and acquiring a camera image; and a processor for controlling the imaging module and acquiring the camera image.

Description

Electronic device, method of controlling electronic device, and computer-readable storage medium
Technical Field
The present application relates to an electronic device, a method of controlling the same, and a computer-readable storage medium.
Background
Conventionally, there are electronic devices equipped with a digital camera, for example a smartphone, that capture a subject such as a person.
Currently, a technique is in wide use that artificially generates, from an image taken by a camera having a deep depth of field (for example, a smartphone camera), a photograph with a foreground image in which objects in front of or behind the subject are blurred, resembling a photograph taken with a digital single-lens reflex (DSLR) camera.
When a picture is taken with a camera having a deep depth of field (e.g., a smartphone camera), an image that is in focus from the near portion to the far portion is obtained. Image processing is then used to generate a foreground image in which the portion of interest becomes sharper while the foreground and background around it become blurred.
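For illustration only, the following Python sketch shows one common way such a foreground image can be generated from an all-in-focus camera image and a depth image: pixels near a chosen focus depth are kept sharp and the rest are blurred. The file names, focus depth, tolerance, and kernel size are illustrative assumptions, not values taken from this disclosure.

```python
import cv2
import numpy as np

def synthetic_foreground(image: np.ndarray, depth: np.ndarray,
                         focus_depth: float, tolerance: float = 0.1,
                         ksize: int = 21) -> np.ndarray:
    """Keep pixels whose depth is near focus_depth sharp; blur everything else."""
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    # Soft mask of in-focus pixels (depth close to the chosen focus plane).
    mask = (np.abs(depth - focus_depth) < tolerance).astype(np.float32)
    mask = cv2.GaussianBlur(mask, (ksize, ksize), 0)[..., np.newaxis]
    out = mask * image.astype(np.float32) + (1.0 - mask) * blurred.astype(np.float32)
    return out.astype(np.uint8)

image = cv2.imread("camera_image.png")                 # H x W x 3, uint8
depth = np.load("depth_image.npy").astype(np.float32)  # H x W, normalized to [0, 1]
result = synthetic_foreground(image, depth, focus_depth=0.3)
cv2.imwrite("foreground_image.png", result)
```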
In image processing for such a foreground image, there is a problem that frames of poor quality, for example frames containing strongly motion-blurred or broken images, cannot be processed well and do not yield a correct result. Moreover, if the algorithm maintains a rolling average, processing such frames consumes energy and stores garbage data.
Disclosure of Invention
The present disclosure is directed to solving at least one of the above-mentioned technical problems. Accordingly, there is a need to provide an electronic device and a method of controlling an electronic device.
The present disclosure provides an electronic device, comprising:
an imaging module for capturing a subject and acquiring a camera image; and
a processor for controlling the imaging module to acquire a camera image and a depth image, and for outputting a processed camera image by processing the acquired camera image based on the depth image;
wherein, after acquiring a previous frame camera image, the processor controls the imaging module to acquire a new frame camera image, the new frame camera image containing an image of the subject; and
the processor determines whether the new frame camera image is good or bad; and
wherein, when the processor determines that the new frame camera image is good, the processor calculates depth information corresponding to the new frame camera image and acquires a depth image corresponding to the camera image based on the calculated depth information;
on the other hand, when the processor determines that the new frame camera image is bad, the processor acquires the depth image of the new frame camera image based on the depth image corresponding to the previous frame camera image, without calculating the depth information corresponding to the new frame camera image.
In the electronic device,
wherein the image processing of the camera image comprises generating a foreground image on the camera image.
In the electronic device,
the electronic device further comprises an inertial measurement unit for detecting an acceleration of the electronic device;
the inertial measurement unit detects the acceleration of the electronic device in a period from a first time to a second time, wherein the first time is the time at which the previous frame camera image was captured, and the second time is the time at which the new frame camera image was captured; and
wherein, when the maximum value of the acceleration of the electronic device detected by the inertial measurement unit is less than a preset first threshold, the processor determines that the new frame camera image is good;
on the other hand, when the maximum value of the acceleration of the electronic device detected by the inertial measurement unit is equal to or greater than the first threshold, the processor determines that the new frame camera image is bad.
In the electronic device,
the inertial measurement unit detects the acceleration of the electronic device in each of the x-axis direction, the y-axis direction, and the z-axis direction; and
wherein, when the maximum value of all the accelerations of the electronic device in the x-axis direction, the y-axis direction, and the z-axis direction detected by the inertial measurement unit is less than the first threshold, the processor determines that the new frame camera image is good;
on the other hand, when the maximum value of the acceleration of the electronic device in any one of the x-axis direction, the y-axis direction, and the z-axis direction detected by the inertial measurement unit is equal to or greater than the first threshold, the processor determines that the new frame camera image is bad.
In the electronic device,
wherein the imaging module further includes an additional camera module for capturing the subject and acquiring a differential image whose pixel values represent the change of the captured image over time;
wherein the additional camera module acquires the differential image in a period from a first time at which the previous frame camera image was captured to a second time at which the new frame camera image was captured;
wherein the processor obtains the sum, over all pixels of the differential image acquired by the additional camera module, of the maximum value of each pixel; and
when the total value of the maximum values of all the pixels of the differential image is less than a preset second threshold, the processor determines that the new frame camera image is good;
on the other hand, when the total value of the maximum values of all the pixels of the differential image is equal to or greater than the second threshold, the processor determines that the new frame camera image is bad.
In the electronic device,
the imaging module captures the subject at a first time and acquires the previous frame camera image, and thereafter captures the subject at a second time and acquires the new frame camera image;
wherein, for each pixel, the processor obtains the difference between the pixel value of the pixel of the previous frame camera image and the pixel value of the corresponding pixel of the new frame camera image;
wherein the processor obtains the total value of the differences of all the pixels of the camera image; and
wherein, when the total value of the differences of all the pixels is less than a preset third threshold, the processor determines that the new frame camera image is good;
on the other hand, when the total value of the differences of all the pixels is equal to or greater than the third threshold, the processor determines that the new frame camera image is bad.
In the electronic device,
wherein the imaging module includes:
a first camera module for capturing the subject and acquiring a first camera image; and
a second camera module for capturing the subject and acquiring a second camera image; and
wherein the processor acquires the camera image based on the first camera image and the second camera image; and
the processor calculates depth information corresponding to the camera image and acquires a depth image corresponding to the camera image based on the calculated depth information.
In the electronic device,
wherein the imaging module includes a first camera module for capturing the subject and acquiring the camera image; and
wherein the processor calculates depth information corresponding to the camera image and acquires the depth image corresponding to the camera image based on the calculated depth information.
The present disclosure provides a method for controlling an electronic device, the electronic device comprising: an imaging module for capturing a subject and acquiring a camera image; and a processor for controlling the imaging module to acquire a camera image and a depth image, and for outputting a processed camera image by processing the acquired camera image based on the depth image.
The method comprises the following steps:
after acquiring a previous frame camera image, controlling, by the processor, the imaging module to acquire a new frame camera image containing an image of the subject; and
determining, by the processor, whether the new frame camera image is good or bad;
wherein, when the processor determines that the new frame camera image is good, the processor calculates depth information corresponding to the new frame camera image and acquires a depth image corresponding to the camera image based on the calculated depth information;
on the other hand, when the processor determines that the new frame camera image is bad, the processor acquires the depth image of the new frame camera image based on the depth image corresponding to the previous frame camera image, without calculating the depth information corresponding to the new frame camera image.
The present disclosure provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method for controlling an electronic device. The electronic device includes: an imaging module for capturing a subject and acquiring a camera image; and a processor for controlling the imaging module to acquire a camera image and a depth image, and for outputting a processed camera image by processing the acquired camera image based on the depth image; and
the method comprises the following steps:
after acquiring a previous frame camera image, controlling, by the processor, the imaging module to acquire a new frame camera image containing an image of the subject; and
determining, by the processor, whether the new frame camera image is good or bad;
wherein, when the processor determines that the new frame camera image is good, the processor calculates depth information corresponding to the new frame camera image and acquires a depth image corresponding to the camera image based on the calculated depth information;
on the other hand, when the processor determines that the new frame camera image is bad, the processor acquires the depth image for the new frame camera image based on the depth image corresponding to the previous frame camera image, without calculating the depth information corresponding to the new frame camera image.
Drawings
These and/or other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following description, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a diagram showing an example of the arrangement of an electronic apparatus 100 and a subject 101 according to an embodiment of the present application.
Fig. 2 is a diagram showing an example of the configuration of the electronic apparatus 100 shown in fig. 1.
Fig. 3 is a diagram illustrating another example of the imaging module 102 of the electronic device 100 illustrated in fig. 1 and 2.
Fig. 4 is a diagram illustrating still another example of the imaging module 102 of the electronic device 100 illustrated in fig. 1 and 2.
Fig. 5 is a diagram showing an example of a processing flow in which the electronic apparatus 100 shown in fig. 1 and 2 captures a subject and outputs a processed camera image.
Fig. 6 is a diagram showing an example of a relationship between acquired camera image frames and time.
Fig. 7 is a diagram showing a relationship between a plurality of frames in which the processing shown in fig. 5 is sequentially executed and a determination result.
Fig. 8 is a diagram showing a specific example of performing the processing shown in fig. 5 on the frame (M+1), the frame (M+2), and the frame (M+3) shown in fig. 7.
Detailed Description
Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings. Throughout the specification, identical or similar elements and elements having identical or similar functions are denoted by identical reference numerals. The embodiments described herein with reference to the drawings are illustrative of the present disclosure and are not to be construed as limiting the present disclosure.
Fig. 1 is a diagram showing an example of the arrangement of an electronic apparatus 100 and a subject 101 according to an embodiment of the present application. Fig. 2 is a diagram showing an example of the configuration of the electronic apparatus 100 shown in fig. 1.
As shown in fig. 1 and 2, the electronic apparatus 100 includes, for example, a first camera module 10, a second camera module 20, and an image signal processor 30. The image signal processor 30 controls the first camera module 10 and the second camera module 20, and processes the camera image data acquired from the camera modules 10 and 20.
In the example of fig. 1 and 2, the imaging module 102 is constituted by the first camera module 10 and the second camera module 20. The imaging module 102 is defined as a module that captures at least one subject 101 and acquires a camera image.
Thus, as shown in fig. 2, the imaging module 102 includes a first camera module 10 that photographs the subject 101 and acquires a first camera image, and a second camera module 20 that photographs the subject 101 and acquires a second camera image.
In the example of fig. 2, depth information of a camera image is calculated from the parallax between the first camera module 10 and the second camera module 20, and a depth image corresponding to the camera image is acquired based on the calculated depth information.
As shown in fig. 2, the first camera module 10 includes, for example, a main lens 10a capable of focusing on a subject, a main image sensor 10b detecting an image input via the main lens 10a, and a main image sensor driver 10c driving the main image sensor 10b.
Further, as shown in fig. 2, the first camera module 10 includes, for example, a focus & OIS actuator 10f for actuating the main lens 10a, and a focus & OIS driver 10e driving the focus & OIS actuator 10f.
For example (shown in fig. 2), the first camera module 10 acquires a first camera image of the subject 101.
As shown in fig. 2, the second camera module 20 includes, for example, a main lens 20a capable of focusing on a subject, a main image sensor 20b detecting an image input via the main lens 20a, and a main image sensor driver 20c for driving the main image sensor 20b.
Further, as shown in fig. 2, the second camera module 20 includes, for example, a focus & OIS actuator 20f for actuating the main lens 20a, and a focus & OIS driver 20e driving the focus & OIS actuator 20f.
For example (shown in fig. 2), the second camera module 20 acquires a second camera image of the subject 101.
Further, as shown in fig. 2, the electronic device 100 includes, for example, a global navigation satellite system (GNSS) module 40, a wireless communication module 41, a codec 42, a speaker 43, a microphone 44, a display module 45, an input module 46, an inertial measurement unit (IMU) 47, a main processor 48, and a memory 49.
For example, the GNSS module 40 measures the current location of the electronic device 100.
For example, as shown in fig. 2, the codec 42 bidirectionally performs encoding and decoding using a predetermined encoding/decoding method.
For example, the speaker 43 outputs sound based on the sound data decoded by the codec 42.
For example, the microphone 44 outputs sound data to the codec 42 based on the input sound.
The display module 45 displays predefined information. The display module 45 is, for example, a touch panel.
The input module 46 receives an input of a user (operation of the user). The input module 46 is included in, for example, a touch panel.
The IMU 47 detects, for example, angular velocity and acceleration of the electronic device 100.
The main processor 48 controls the GNSS module 40, the wireless communication module 41, the codec 42, the speaker 43, the microphone 44, the display module 45, the input module 46, and the IMU 47.
In the example of fig. 2, the processor 103 is composed of the image signal processor 30 and the main processor 48. The processor 103 is defined as a controller that controls the imaging module 102 and acquires a camera image.
For example, the processor 103 controls the imaging module 102 to acquire a camera image and a depth image. Then, the processor 103 outputs a processed camera image (the camera image data includes the foreground image data) by processing the acquired camera image based on the depth image.
The memory 49 stores the programs and data required for the image signal processor 30 to control the first camera module 10 and the second camera module 20, the acquired image data, and the programs and data required for the main processor 48 to control the electronic device 100.
For example, the memory 49 comprises a computer-readable storage medium having stored thereon a computer program which, when executed by the processor 103, implements a method for controlling the electronic device 100. For example, the method includes: after acquiring the previous frame of camera image, the processor 103 controls the imaging module 102 to acquire a new frame of camera image containing the image of the subject; and the processor 103 determines whether the new frame camera image is good or bad, wherein when the processor 103 determines that the new frame camera image is good, the processor 103 calculates depth information corresponding to the new frame camera image and acquires a depth image corresponding to the camera image according to the calculated depth information; on the other hand, when the processor 103 determines that the new frame camera image is bad, the processor 103 acquires the depth image of the new frame camera image based on the depth image corresponding to the previous frame camera image without calculating the depth information corresponding to the new frame camera image.
In the present embodiment, the electronic device 100 having the above-described configuration is a mobile phone such as a smart phone, but may be other types of electronic devices (e.g., a tablet computer and a PDA) including the imaging module 102.
As described above, in the example shown in fig. 1 and 2, the imaging module 102 includes the first camera module 10 and the second camera module 20. The first camera module 10 captures the subject 101 to acquire a first camera image. The second camera module 20 captures the subject 101 to acquire a second camera image.
In the example shown in fig. 1 and 2, the processor 103 acquires a camera image based on the first camera image and the second camera image. Then, the processor 103 calculates depth information corresponding to the camera image, and the processor 103 acquires a depth image corresponding to the camera image based on the calculated depth information.
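For illustration only, the following sketch shows one standard way the depth image can be derived from the two camera images, as this passage describes: a disparity map is computed by matching the first and second camera images, and per-pixel depth follows from the triangulation relation Z = f * B / d. The matcher settings, focal length, and baseline are illustrative assumptions, not calibration values from this disclosure.

```python
import cv2
import numpy as np

def depth_from_disparity(disparity: np.ndarray, focal_length_px: float,
                         baseline_m: float) -> np.ndarray:
    """Z = f * B / d for every pixel with a valid (positive) disparity."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Rectified grayscale images from the first and second camera modules.
left = cv2.imread("first_camera.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("second_camera.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; SGBM returns fixed-point disparity scaled by 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

depth_image = depth_from_disparity(disparity, focal_length_px=1400.0, baseline_m=0.012)
```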
However, the configuration of the imaging module 102 of the electronic device 100 is not limited to the configuration shown in fig. 1 and 2.
For example, fig. 3 is a diagram illustrating another example of the imaging module 102 of the electronic device 100 illustrated in fig. 1 and 2.
That is, instead of the example shown in fig. 1 and 2, the imaging module 102 may include only the first camera module 10, for example as shown in fig. 3.
In the example shown in fig. 3, the processor 103 calculates depth information corresponding to the camera image, and the processor 103 acquires the depth image corresponding to the camera image based on the calculated depth information.
Next, fig. 4 is a diagram showing still another example of the imaging module 102 of the electronic apparatus 100 shown in fig. 1 and 2.
As shown in fig. 4, the imaging module 102 may also include an additional camera module 50.
The additional camera module 50 shown in fig. 4 captures the subject 101 and acquires a differential image whose pixel values represent the change of the captured image over time.
The additional camera module 50 is, for example, an event camera whose shooting timing is close to that of the imaging module 102 and which offers high FPS (frames per second) performance.
In the example shown in fig. 4, as described below, based on the differential image acquired by the additional camera module 50, the processor 103 determines whether the new frame camera image is a good camera image (a valid camera image) or a bad camera image (an invalid camera image).
Bad frames with such bad camera images include strong motion blurred images or broken images. On the other hand, good frames with good camera images do not include strong motion blurred images and broken images.
[Examples of methods of controlling the electronic device]
Next, an example of a method of controlling the electronic apparatus 100 having the above-described configuration and function will be described.
Fig. 5 is a diagram showing an example of a processing flow in which the electronic apparatus 100 shown in fig. 1 and 2 captures a subject and outputs a processed camera image. Fig. 6 is a diagram showing an example of a relationship between acquired camera image frames and time.
Here, as shown in fig. 6, the new frame (denoted by N) is the latest frame at the time of shooting. Further, the previous frame (denoted as N-1) is a frame immediately preceding the new frame (N).
In the example shown in fig. 5, a control method for performing image processing on the new frame camera image (N) shown in fig. 6 will be described below.
Here, for example, as shown in step S1 of fig. 5, the imaging module 102 captures at least the subject 101 and acquires a camera image.
Next, as shown in step S2 of fig. 5, the processor 103 controls the imaging module 102 to acquire a new frame camera image (N) including the image of the subject 101.
Next, as shown in step S3 of fig. 5, the processor 103 determines whether the acquired new frame camera image (N) is good or bad.
As described above, the bad frame having the bad camera image includes a strong motion blurred image or a broken image. On the other hand, good frames with good camera images do not include strong motion blurred images and broken images.
Next, when the processor 103 determines that the new frame camera image (N) is good in step S3 of fig. 5, the processor 103 calculates depth information corresponding to the camera image in step S4 of fig. 5. Further, in step S5 of fig. 5, the processor 103 acquires a depth image corresponding to the camera image based on the calculated depth information. Then, in step S6 of fig. 5, the processor 103 processes the camera image based on the acquired depth image.
On the other hand, when the processor 103 determines in step S3 of fig. 5 that the new frame camera image (N) is bad, the processor 103 skips the calculation of depth information in step S4 of fig. 5. Instead, in step S5 of fig. 5, the processor 103 acquires, as the depth image for the new frame (N), the depth image corresponding to the camera image of the previous frame (N-1) preceding the new frame (N). Then, in step S6 of fig. 5, the processor 103 processes the camera image based on the acquired depth image.
The image processing of the camera image includes generating a foreground image on the camera image.
Then, in step S7 of fig. 5, the processor 103 outputs the processed camera image (the camera image data includes the foreground image data).
As the method for generating the depth image, a method using monocular, stereoscopic, or time-of-flight (TOF) data is basically applied. However, the present application is also applicable to other depth image generation methods.
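For illustration only, the per-frame control flow of steps S2 to S7 in fig. 5 can be sketched in Python as follows. The callables is_good_frame, compute_depth, and generate_foreground are hypothetical stand-ins for the determination of step S3, the depth calculation of steps S4 to S5, and the image processing of step S6; they are assumptions for this sketch, not names used by the disclosure. The first acquired frame is always treated as good, as noted in the determination examples below.

```python
from typing import Callable, Optional
import numpy as np

class FramePipeline:
    """Sketch of steps S2-S7 of fig. 5, with the stage implementations injected."""

    def __init__(self,
                 is_good_frame: Callable[[np.ndarray], bool],
                 compute_depth: Callable[[np.ndarray], np.ndarray],
                 generate_foreground: Callable[[np.ndarray, np.ndarray], np.ndarray]):
        self.is_good_frame = is_good_frame
        self.compute_depth = compute_depth
        self.generate_foreground = generate_foreground
        self.prev_depth: Optional[np.ndarray] = None  # depth image of frame (N-1)

    def process(self, camera_image: np.ndarray) -> np.ndarray:
        # Step S3: the first frame is always treated as good.
        if self.prev_depth is None or self.is_good_frame(camera_image):
            depth = self.compute_depth(camera_image)        # steps S4-S5
        else:
            # Step S5 for a bad frame: reuse the depth image of frame (N-1)
            # without calculating new depth information.
            depth = self.prev_depth
        self.prev_depth = depth
        return self.generate_foreground(camera_image, depth)  # steps S6-S7
```

Any of the three determination methods sketched below could be passed as is_good_frame; the input camera image of a bad frame is still processed, so no frame is dropped.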
Here, some specific examples of the above-described good/bad determination method of the frame shown in step S3 of fig. 5 will be described below.
[First example of the determination method]
As described above, in the example shown in fig. 2, the electronic device 100 includes the inertial measurement unit 47 that detects the acceleration of the electronic device 100.
Thus, in the first example of the determination method, first, the inertial measurement unit 47 detects the acceleration of the electronic device 100 in a period of time from the first time ta at which the camera image of the previous frame (N-1) was captured to the second time tb at which the camera image of the new frame (N) was captured.
Next, the processor 103 determines whether the acquired new frame camera image is good or bad.
That is, more specifically, when the maximum value of the acceleration of the electronic device 100 detected by the inertial measurement unit 47 is less than a preset first threshold, the processor 103 determines that the new frame camera image (N) is good.
On the other hand, when the maximum value of the acceleration of the electronic device 100 detected by the inertial measurement unit 47 is equal to or greater than the first threshold, the processor 103 determines that the new frame camera image (N) is bad.
In addition, it should be noted that when the new frame (N) camera image is the first frame acquired by capturing the subject 101, the processor 103 determines that the new frame (N) camera image is good regardless of the maximum value of the acceleration of the electronic device 100 detected by the inertial measurement unit 47.
In addition, it should be noted that the maximum value of the acceleration is an absolute value. This determination method considers only the magnitude of the acceleration change, not its direction.
In addition, it should be noted that the total number of pixels of the previous frame (N-1) camera image is the same as the total number of pixels of the new frame (N) camera image.
Here, in the above-described determination method, more specifically, the inertial measurement unit 47 may detect the acceleration of the electronic device 100 in each of the x-axis direction, the y-axis direction, and the z-axis direction.
In this case, when the maximum value of all the accelerations of the electronic device in the x-axis direction, the y-axis direction, and the z-axis direction detected by the inertial measurement unit 47 is less than the first threshold, the processor 103 determines that the new frame (N) camera image is good.
On the other hand, when the maximum value of the acceleration of the electronic device 100 in any one of the x-axis direction, the y-axis direction, and the z-axis direction detected by the inertial measurement unit 47 is equal to or greater than the first threshold, the processor 103 determines that the new frame (N) camera image is bad.
[Second example of the determination method]
Alternatively, in the example shown in fig. 4 described above, the imaging module 102 includes the additional camera module 50, which captures the subject 101 and acquires a differential image whose pixel values represent the change of the captured image over time.
In a second example of the determination method, the additional camera module 50 acquires a differential image during a period from a first time ta at which a camera image of a previous frame (N-1) was captured to a second time tb at which a camera image of a new frame (N) was captured.
Then, the processor 103 acquires the sum, over all pixels, of the maximum value of each pixel of the differential image acquired by the additional camera module 50.
When the total value of the maximum values of all pixels of the differential image is smaller than a preset second threshold value (when the fluctuation of the image is small), the processor 103 determines that the new frame (N) camera image is good.
On the other hand, when the total value of the maximum values of all the pixels of the differential image is equal to or greater than the second threshold (when the fluctuation of the image is large), the processor 103 determines that the new frame (N) camera image is bad.
Further, it should be noted that when the new frame (N) camera image is the first frame acquired by capturing the subject 101, the processor 103 determines that the new frame (N) camera image is good regardless of the total value of the maximum values of all the pixels of the differential image.
In addition, it should be noted that the maximum value of each pixel of the differential image is an absolute value. This determination method considers only the magnitude of the change in the captured image, not its direction.
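For illustration only, a minimal sketch of this second determination method follows, assuming the differential images sampled between time ta and time tb arrive stacked along the first axis of one NumPy array. The threshold value is an illustrative assumption.

```python
import numpy as np

SECOND_THRESHOLD = 5.0e4  # illustrative assumption, not a disclosed value

def is_good_frame_event(diff_images: np.ndarray, is_first_frame: bool = False) -> bool:
    """diff_images: shape (num_samples, H, W), differential images acquired by
    the additional camera module between time ta and time tb."""
    if is_first_frame:
        return True  # the first acquired frame is always determined to be good
    # Maximum absolute value of each pixel over the interval (magnitude only).
    per_pixel_max = np.abs(diff_images).max(axis=0)
    # Good only if the total over all pixels is below the second threshold.
    return float(per_pixel_max.sum()) < SECOND_THRESHOLD
```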
[Third example of the determination method]
Alternatively, the difference between the pixel value of the pixel of the previous frame (N-1) camera image and the pixel value of the pixel of the new frame (N) camera image may be used to determine whether the frame is good or bad.
In a third example of the determination method, first, the imaging module 102 acquires a previous frame (N-1) camera image by capturing the subject 101 at a first time ta.
After that, the imaging module 102 acquires a new frame (N) camera image by photographing the subject 101 at the second time tb.
Next, the processor 103 determines whether the acquired new frame (N) camera image is good or bad.
More specifically, for each corresponding pixel, the processor 103 obtains the difference between the pixel value of the pixel of the previous frame (N-1) camera image and the pixel value of the corresponding pixel of the new frame (N) camera image.
Next, the processor 103 acquires the total value of the differences of all pixels of the camera image.
Next, when the total value of the differences of all pixels is smaller than a preset third threshold, the processor 103 determines that the new frame (N) camera image is good.
On the other hand, when the total value of the differences of all pixels is equal to or greater than the third threshold, the processor 103 determines that the new frame (N) camera image is bad.
In addition, it should be noted that when the new frame (N) camera image is the first frame acquired by capturing the subject 101, the processor 103 determines that the new frame (N) camera image is good regardless of the total value of the differences of all the pixels.
Further, it should be noted that each difference is an absolute value. This determination method considers only the magnitude of the difference, not its sign.
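For illustration only, a minimal sketch of this third determination method follows; the threshold value is an illustrative assumption. Both frames are assumed to have the same total number of pixels, as noted above.

```python
import numpy as np

THIRD_THRESHOLD = 1.0e6  # illustrative assumption, not a disclosed value

def is_good_frame_diff(prev_image: np.ndarray, new_image: np.ndarray,
                       is_first_frame: bool = False) -> bool:
    """prev_image, new_image: the (N-1) and (N) camera images, same shape."""
    if is_first_frame:
        return True  # the first acquired frame is always determined to be good
    # Absolute per-pixel difference; cast to int32 to avoid uint8 wrap-around.
    diff = np.abs(new_image.astype(np.int32) - prev_image.astype(np.int32))
    # Good only if the total difference over all pixels is below the threshold.
    return int(diff.sum()) < THIRD_THRESHOLD
```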
Next, an example of a relationship between the determination result of the plurality of frames determined by the above-described determination method and frame image processing of the plurality of frames will be described below.
Fig. 7 is a diagram showing a relationship between a plurality of frames in which the processing shown in fig. 5 is sequentially executed and the determination results. Fig. 8 is a diagram showing a specific example of performing the processing shown in fig. 5 on the frame (M+1), the frame (M+2), and the frame (M+3) shown in fig. 7.
In fig. 7 and 8, frame (M), frame (M+1), frame (M+2), and frame (M+3) represent arbitrary consecutive frames.
As shown in fig. 7 and 8, the frame (M+2) is determined to be a bad frame, and in the processing of the frame (M+2), the depth calculation is skipped. Instead, the depth image of the frame (M+1) preceding the frame (M+2) is used as the depth image of the frame (M+2).
In this way, for a bad new frame, the depth calculation is skipped and the depth image of the previous frame is used instead. However, the input camera image of the bad new frame itself is still used as it is, so no frame loss occurs.
As described above, according to the electronic device of the present application, skipping some processing for a bad frame yields headroom in the computing power of the processor (chip) and margin within the allowable temperature. Thus, when a frame is input, the processing by the processor can be performed at a higher speed.
In describing embodiments of the present disclosure, it should be understood that terms such as "center," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "front," "rear," "back," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," and "counterclockwise" should be construed to refer to the directions or locations depicted or shown in the drawings in question. These relative terms are only used to simplify the description of the present disclosure and do not denote or imply that the referenced devices or elements must have a particular orientation, or must be constructed or operated in a particular orientation. Accordingly, these terms should not be construed as limiting the present disclosure.
Furthermore, terms such as "first" and "second" are used herein for descriptive purposes and are not intended to indicate or imply relative importance or significance or the number of technical features indicated. Thus, features defined as "first" and "second" may include one or more of the features. In the description of the present disclosure, "a plurality" means "two or more" unless otherwise indicated.
In the description of embodiments of the present disclosure, unless specified or limited otherwise, the terms "mounted," "connected," "coupled," and the like are used broadly and may be, for example, a fixed connection, a removable connection, or an integral connection; or may be mechanically or electrically connected; or may be directly connected or indirectly connected through an intermediate structure; internal communication of two elements as would be understood by one of skill in the art depending on the particular situation is also possible.
In embodiments of the present disclosure, unless specified or limited otherwise, structures with a first feature "on" or "under" a second feature may include embodiments in which the first feature is in direct contact with the second feature, and may also include embodiments in which the first feature and the second feature are not in direct contact with each other, but are contacted by additional features formed therebetween. Furthermore, a first feature "over", "on" or "top" a second feature may include the following embodiments: the first feature being "above", "over" or "top" the second feature, orthogonally or obliquely, or simply meaning that the height of the first feature is higher than the height of the second feature; while "under", "under" or "bottom" of a first feature over a second feature may include the following embodiments: the first feature is "below", "beneath" or "bottom" the second feature, either orthogonally or obliquely, or simply means that the height of the first feature is lower than the height of the second feature.
Various embodiments and examples are provided in the above description to implement the different structures of the present disclosure. To simplify the present disclosure, certain elements and arrangements are described above. However, these elements and arrangements are merely examples and are not intended to limit the present disclosure. Further, in various examples of the present disclosure, reference numerals and/or letters may be repeated. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations. In addition, examples of different processes and materials are provided in this disclosure. However, those skilled in the art will appreciate that other processes and/or materials may also be applied.
Reference throughout this specification to "an embodiment," "some embodiments," "an example embodiment," "an example," "a particular example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above-identified phrases in various places throughout this specification are not necessarily all referring to the same embodiment or example of the disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in the flow diagrams or otherwise described herein may be understood as comprising one or more modules, code segments, or portions of code comprising executable instructions for implementing specific logical functions or steps in the process. The scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, as would be understood by those skilled in the art.
The logic and/or steps described elsewhere herein or shown in a flowchart, for example, a particular sequence of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (e.g., a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples of the computer-readable medium include, but are not limited to: an electronic connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM). Furthermore, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be captured electronically, for example by optically scanning the paper or other medium, and then compiled, interpreted, or otherwise processed in a suitable manner, and then stored in a computer memory.
It should be understood that each portion of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, these steps or methods may be implemented by any one or a combination of the following techniques known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having appropriately combined logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those skilled in the art will appreciate that all or part of the steps in the above-described exemplary methods of the present disclosure may be implemented by a program instructing associated hardware. The program may be stored in a computer-readable storage medium and, when run on a computer, performs one or a combination of the steps in the method embodiments of the present disclosure.
Furthermore, the various functional units of the disclosed embodiments may be integrated in one processing module, or each unit may exist physically alone, or two or more units may be integrated in one processing module. The integrated module may be implemented in the form of hardware or of a software functional module. When the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium.
The storage medium may be a read-only memory, a magnetic disk, a CD, or the like.
Although embodiments of the present disclosure have been shown and described, it will be understood by those skilled in the art that these embodiments are illustrative and not to be construed as limiting the present disclosure, and that changes, modifications, substitutions, and alterations may be made to the embodiments without departing from the scope of the disclosure.

Claims (10)

1. An electronic device, comprising:
an imaging module for capturing a subject and acquiring a camera image; and
a processor for controlling the imaging module to acquire a camera image and a depth image, and for outputting a processed camera image by processing the acquired camera image based on the depth image;
wherein, after acquiring a previous frame camera image, the processor controls the imaging module to acquire a new frame camera image, the new frame camera image containing an image of the subject; and
the processor determines whether the new frame camera image is good or bad; and
wherein, when the processor determines that the new frame camera image is good, the processor calculates depth information corresponding to the new frame camera image and acquires a depth image corresponding to the camera image based on the calculated depth information;
on the other hand, when the processor determines that the new frame camera image is bad, the processor acquires the depth image of the new frame camera image based on the depth image corresponding to the previous frame camera image, without calculating depth information corresponding to the new frame camera image.
2. The electronic device of claim 1, wherein the image processing of the camera image comprises generating a foreground image on the camera image.
3. The electronic device of claim 1, further comprising an inertial measurement unit for detecting an acceleration of the electronic device;
wherein the inertial measurement unit detects the acceleration of the electronic device in a period from a first time to a second time, the first time being the time at which the previous frame camera image was captured and the second time being the time at which the new frame camera image was captured; and
wherein, when the maximum value of the acceleration of the electronic device detected by the inertial measurement unit is less than a preset first threshold, the processor determines that the new frame camera image is good;
on the other hand, when the maximum value of the acceleration of the electronic device detected by the inertial measurement unit is equal to or greater than the first threshold, the processor determines that the new frame camera image is bad.
4. The electronic device of claim 3,
wherein the inertial measurement unit detects the acceleration of the electronic device in each of an x-axis direction, a y-axis direction, and a z-axis direction; and
wherein, when the maximum value of all the accelerations of the electronic device in the x-axis direction, the y-axis direction, and the z-axis direction detected by the inertial measurement unit is less than the first threshold, the processor determines that the new frame camera image is good;
on the other hand, when the maximum value of the acceleration of the electronic device in any one of the x-axis direction, the y-axis direction, and the z-axis direction detected by the inertial measurement unit is equal to or greater than the first threshold, the processor determines that the new frame camera image is bad.
5. The electronic device of claim 1,
wherein the imaging module further includes an additional camera module for capturing the subject and acquiring a differential image whose pixel values represent the change of the captured image over time;
wherein the additional camera module acquires the differential image in a period from a first time at which the previous frame camera image was captured to a second time at which the new frame camera image was captured;
wherein the processor obtains the sum, over all pixels of the differential image acquired by the additional camera module, of the maximum value of each pixel; and
wherein, when the total value of the maximum values of all the pixels of the differential image is less than a preset second threshold, the processor determines that the new frame camera image is good;
on the other hand, when the total value of the maximum values of all the pixels of the differential image is equal to or greater than the second threshold, the processor determines that the new frame camera image is bad.
6. The electronic device of claim 1, wherein the imaging module captures the subject and acquires the previous frame camera image at a first time, and thereafter captures the subject and acquires the new frame camera image at a second time;
wherein, for each pixel, the processor obtains the difference between the pixel value of the pixel of the previous frame camera image and the pixel value of the corresponding pixel of the new frame camera image;
wherein the processor obtains the total value of the differences of all the pixels of the camera image; and
wherein, when the total value of the differences of all the pixels is less than a preset third threshold, the processor determines that the new frame camera image is good;
on the other hand, when the total value of the differences of all the pixels is equal to or greater than the third threshold, the processor determines that the new frame camera image is bad.
7. The electronic device of claim 1,
wherein the imaging module includes:
a first camera module for capturing the subject and acquiring a first camera image; and
a second camera module for capturing the subject and acquiring a second camera image; and
wherein the processor acquires the camera image based on the first camera image and the second camera image; and
the processor calculates depth information corresponding to the camera image and acquires a depth image corresponding to the camera image based on the calculated depth information.
8. The electronic device of claim 1, wherein the imaging module comprises a first camera module to capture the subject and acquire the camera image; and
wherein the processor calculates depth information corresponding to the camera image and acquires a depth image corresponding to the camera image based on the calculated depth information.
9. A method for controlling an electronic device, the electronic device comprising: an imaging module for capturing a subject and acquiring a camera image; and a processor for controlling the imaging module to acquire a camera image and a depth image, and for outputting a processed camera image by processing the acquired camera image based on the depth image;
the method comprises the following steps:
after acquiring a previous frame camera image, controlling, by the processor, the imaging module to acquire a new frame camera image, the new frame camera image containing an image of the subject; and
determining, by the processor, whether the new frame camera image is good or bad;
wherein, when the processor determines that the new frame camera image is good, the processor calculates depth information corresponding to the new frame camera image and acquires a depth image corresponding to the camera image based on the calculated depth information;
on the other hand, when the processor determines that the new frame camera image is bad, the processor acquires the depth image of the new frame camera image based on the depth image corresponding to the previous frame camera image, without calculating depth information corresponding to the new frame camera image.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method for controlling an electronic device, the electronic device comprising: an imaging module for capturing a subject and acquiring a camera image; and a processor for controlling the imaging module to acquire a camera image and a depth image, and for outputting a processed camera image by processing the acquired camera image based on the depth image; and
the method comprises the following steps:
after acquiring a previous frame camera image, controlling, by the processor, the imaging module to acquire a new frame camera image, the new frame camera image containing an image of the subject; and
determining, by the processor, whether the new frame camera image is good or bad; and
wherein, when the processor determines that the new frame camera image is good, the processor calculates depth information corresponding to the new frame camera image and acquires a depth image corresponding to the camera image based on the calculated depth information;
on the other hand, when the processor determines that the new frame camera image is bad, the processor acquires the depth image of the new frame camera image based on the depth image corresponding to the previous frame camera image, without calculating depth information corresponding to the new frame camera image.
CN202180084542.XA 2021-02-25 2021-02-25 Electronic device, method of controlling electronic device, and computer-readable storage medium Pending CN116686284A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/077968 WO2022178782A1 (en) 2021-02-25 2021-02-25 Electric device, method of controlling electric device, and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116686284A true CN116686284A (en) 2023-09-01

Family

ID=83047690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180084542.XA Pending CN116686284A (en) 2021-02-25 2021-02-25 Electronic device, method of controlling electronic device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN116686284A (en)
WO (1) WO2022178782A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110304706A1 (en) * 2010-06-09 2011-12-15 Border John N Video camera providing videos with perceived depth
US9781405B2 (en) * 2014-12-23 2017-10-03 Mems Drive, Inc. Three dimensional imaging with a single camera
US20170127039A1 (en) * 2015-11-02 2017-05-04 Mediatek Inc. Ultrasonic proximity detection system
KR20200117562A (en) * 2019-04-04 2020-10-14 삼성전자주식회사 Electronic device, method, and computer readable medium for providing bokeh effect in video
JP7431527B2 (en) * 2019-08-07 2024-02-15 キヤノン株式会社 Depth information generation device, imaging device, depth information generation method, image processing device, image processing method, and program

Also Published As

Publication number Publication date
WO2022178782A1 (en) 2022-09-01

Similar Documents

Publication Publication Date Title
CN109842753B (en) Camera anti-shake system, camera anti-shake method, electronic device and storage medium
JP6700872B2 (en) Image blur correction apparatus and control method thereof, image pickup apparatus, program, storage medium
US9413923B2 (en) Imaging apparatus
US7903144B2 (en) Electric hand-vibration correction method, electric hand-vibration correction device, electric hand-vibration correction program, and imaging apparatus
JP4963384B2 (en) Method and system for determining motion of an imaging device
CN109903324B (en) Depth image acquisition method and device
US20120300115A1 (en) Image sensing device
US20120169840A1 (en) Image Processing Device and Method, and Program
US8965105B2 (en) Image processing device and method
KR101642569B1 (en) Digital photographing System and Controlling method thereof
US8780184B2 (en) Image pickup apparatus
US20150288949A1 (en) Image generating apparatus, imaging apparatus, and image generating method
US20120162453A1 (en) Image pickup apparatus
KR102592745B1 (en) Posture estimating apparatus, posture estimating method and computer program stored in recording medium
US20100002089A1 (en) Digital photographing apparatuses for controlling hand shake correction and methods of controlling the digital photographing apparatus
JP6429633B2 (en) Image blur correction apparatus, control method, optical apparatus, imaging apparatus
CN116686284A (en) Electronic device, method of controlling electronic device, and computer-readable storage medium
JP2013201688A (en) Image processing apparatus, image processing method, and image processing program
US20120188343A1 (en) Imaging apparatus
CN106454066B (en) Image processing apparatus and control method thereof
CN116711296A (en) Electronic device, method of controlling electronic device, and computer-readable storage medium
US9076215B2 (en) Arithmetic processing device
WO2022246752A1 (en) Method of generating an image, electronic device, apparatus, and computer readable storage medium
JP5531726B2 (en) Camera and image processing method
WO2024055290A1 (en) Method of detecting flicker area in captured image, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination