CN109544616B - Depth information determination method and terminal - Google Patents


Info

Publication number
CN109544616B
CN109544616B (application CN201811509856.6A)
Authority
CN
China
Prior art keywords
depth information
region
effective
determining
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811509856.6A
Other languages
Chinese (zh)
Other versions
CN109544616A (en)
Inventor
陈宇灏
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811509856.6A
Publication of CN109544616A
Application granted
Publication of CN109544616B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a depth information determination method and a terminal. The method includes: acquiring first depth information of a photographed object through a TOF camera, and acquiring second depth information of the photographed object through the TOF camera and a color camera; and determining effective depth information of the photographed object from the first depth information and the second depth information. In this embodiment, the terminal acquires depth information with better performance.

Description

Depth information determination method and terminal
Technical Field
The present invention relates to the field of information acquisition technologies, and in particular, to a depth information determining method and a terminal.
Background
In today's society, more and more terminals are equipped with camera devices, making it convenient for users to take pictures anytime and anywhere. In practical applications, existing imaging apparatuses generally employ a Time of Flight (TOF) camera to acquire the depth information of an object, and often only the TOF camera is used; that is, the depth information acquired by the TOF camera is directly used as the effective depth information. However, the accuracy of the depth information acquired by the TOF camera may be relatively low in some scenes, so that the depth-acquisition performance of the terminal is relatively poor.
Disclosure of Invention
The embodiment of the invention provides a depth information determining method and a terminal, and aims to solve the problem that the performance of the terminal for acquiring depth information is poor.
In a first aspect, an embodiment of the present invention provides a depth information determining method, which is applied to a terminal including a TOF camera and a color camera, where the TOF camera and the color camera are located on a same side of the terminal, and the method includes:
acquiring first depth information of a shot object through the TOF camera, and acquiring second depth information of the shot object through the TOF camera and the color camera;
determining effective depth information of the photographed object from the first depth information and the second depth information.
In a second aspect, an embodiment of the present invention further provides a terminal, including a TOF camera and a color camera, where the TOF camera and the color camera are located on a same side of the terminal, and the terminal further includes:
the acquisition module is used for acquiring first depth information of a shot object through the TOF camera and acquiring second depth information of the shot object through the TOF camera and the color camera;
a determining module, configured to determine effective depth information of the object to be shot from the first depth information and the second depth information.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the above depth information determination method when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the depth information determination method.
In the embodiment of the invention, first depth information of a photographed object is obtained through the TOF camera, second depth information of the photographed object is obtained through the TOF camera and the color camera, and effective depth information of the photographed object is determined from the first depth information and the second depth information. The terminal can therefore acquire the depth information of the photographed object either through the TOF camera alone or through the TOF camera and the color camera together, so that the performance of the terminal in acquiring depth information is better.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a depth information determining method according to an embodiment of the present invention;
fig. 2 is a flowchart of another depth information determining method according to an embodiment of the present invention;
fig. 3 is an application scene diagram of another depth information determining method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another terminal provided in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of another terminal according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a depth information determining method provided in an embodiment of the present invention, where the method is applied to a terminal including a TOF camera and a color camera, where the TOF camera and the color camera are located on the same side of the terminal, and as shown in fig. 1, the method includes the following steps:
step 101, acquiring first depth information of a shot object through the TOF camera, and acquiring second depth information of the shot object through the TOF camera and the color camera.
The photographed object may be all objects included in an image or video captured by the terminal through the TOF camera, or through the TOF camera and the color camera. For example, when the terminal captures an image, the photographed object may be all objects in the image, such as a person and the shooting background. The image may be an image in a preview interface on the terminal, or an image captured by the terminal through the camera. The depth information of the photographed object may also be called image depth information.
In addition, the terminal can calculate the depth information of the photographed object through the TOF camera by measuring the time interval between the light being emitted by the infrared laser emitter and the reflected light being received by the infrared laser receiver, multiplying that interval by the speed of light, and dividing by 2.
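As an illustration of this calculation (a hedged sketch; the constant and function names are illustrative, not from the patent):

```python
C = 299_792_458.0  # speed of light in metres per second

def tof_depth(round_trip_seconds: float) -> float:
    """Depth = (measured round-trip time x speed of light) / 2."""
    return round_trip_seconds * C / 2.0

# A 10 ns round trip corresponds to a depth of roughly 1.5 m.
```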
In addition, the number of color cameras included in the terminal is not limited herein. For example: the number of the color cameras may be one or more. For example: the terminal can comprise 1 TOF camera and 1 color camera; or, the terminal may also include 1 TOF camera and 2 color cameras. It should be noted that the color camera may be a Red Green Blue (RGB) camera.
When the terminal includes multiple color cameras, it can capture images of the photographed object through the TOF camera and the color cameras, and determine the effective depth information of the photographed object from the captured images.
For example, the TOF camera and each color camera can each capture an image of the photographed object, yielding multiple images. Because the positions of the TOF camera and each color camera on the terminal are fixed, the positional relationship between each camera and the photographed object is known. The terminal can therefore determine the effective depth information of the photographed object from the differences among the multiple images together with these positional relationships.
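The patent does not spell out the triangulation formula; a standard pinhole-stereo sketch, assuming a rectified camera pair with known focal length and baseline, would be:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic triangulation for a rectified pair: depth = focal length x baseline / disparity.

    All names here are illustrative assumptions, not the patent's terminology.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With a 1000 px focal length and 5 cm baseline, a 25 px disparity gives 2 m.
```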
And 102, determining effective depth information of the shot object from the first depth information and the second depth information.
The terminal may judge whether the difference between the first depth information and the second depth information is smaller than a target difference; when it is, the precision of the first depth information and of the second depth information can be compared, and the effective depth information of the photographed object is determined from the two accordingly. For example: if the difference between the first depth information and the second depth information is 0.1 unit or 0.2 unit and the target difference is 1 unit, the precision of the two values is compared; if the first depth information is accurate to one decimal place and the second depth information is accurate to two decimal places, the second depth information can be determined to be the effective depth information of the photographed object. The unit can be determined according to actual needs.
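The comparison just described can be sketched as follows (the function names and the string-based precision reading are illustrative assumptions; the patent does not specify how the non-agreeing case is resolved, so the fallback below is mine):

```python
def decimal_places(value: str) -> int:
    """Number of digits after the decimal point in a textual reading."""
    return len(value.split(".")[1]) if "." in value else 0

def effective_depth(d_tof: str, d_combined: str, target_diff: float) -> str:
    """If the two readings agree to within target_diff, prefer the more
    precise one (more decimal places), as in the example above."""
    if abs(float(d_tof) - float(d_combined)) < target_diff:
        if decimal_places(d_combined) > decimal_places(d_tof):
            return d_combined
        return d_tof
    # Assumed fallback when the readings disagree: keep the TOF value.
    return d_tof
```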
Through the above steps, the first depth information of the photographed object is obtained through the TOF camera, the second depth information is obtained through the TOF camera and the color camera, and the effective depth information of the photographed object is then selected from the two. Compared with the prior-art approach in which only the TOF camera can be used to acquire the effective depth information, the terminal has more ways to acquire the effective depth information of the photographed object, so its depth-acquisition performance is better.
In the embodiment of the present invention, the terminal may be a mobile terminal or a non-mobile terminal; the mobile terminal may be a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
In the embodiment of the invention, first depth information of a photographed object is obtained through the TOF camera, second depth information of the photographed object is obtained through the TOF camera and the color camera, and effective depth information of the photographed object is determined from the first depth information and the second depth information. The terminal can thus acquire the depth information of the photographed object either through the TOF camera alone or through the TOF camera and the color camera together, so that the performance of the terminal in acquiring depth information is better.
Referring to fig. 2, fig. 2 is a flowchart of another depth information determining method according to an embodiment of the present invention. The main differences between this embodiment and the previous embodiment are: the photographed object includes a first area object and a second area object, and it is possible to determine depth information of the first area object in the first depth information as effective depth information of the first area object and determine depth information of the second area object in the second depth information as effective depth information of the second area object. As shown in fig. 2, the method comprises the following steps:
step 201, acquiring first depth information of a shot object through the TOF camera, and acquiring second depth information of the shot object through the TOF camera and the color camera.
It should be noted that the number of color cameras on the terminal is not limited herein.
Step 202, determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object; or, determining that the first depth information is effective depth information of the shot object.
In the second depth information, the depth information of the first region object may be smaller than that of the second region object. The types of the first region object and the second region object are not limited here; for example, the first region object may be a person and the second region object a tree.
For example, referring to fig. 3, a TOF camera 3011 and a color camera 3012 are disposed on the terminal 301. The terminal 301 can acquire first depth information of the photographed object 302 through the TOF camera 3011, and second depth information of the object 302 through the TOF camera 3011 and the color camera 3012; the object 302 includes a first region object 3021 and a second region object 3022. The arrows in fig. 3 indicate the propagation of infrared light between the terminal 301 and the object 302: infrared light is emitted from an infrared emitter in the TOF camera 3011 and, after encountering the photographed object 302, is reflected back to an infrared receiver of the TOF camera 3011 or to the color camera 3012.
In this way, the terminal 301 can determine that the depth information of the first region object 3021 in the first depth information is the effective depth information of the first region object 3021, and determine that the depth information of the second region object 3022 in the second depth information is the effective depth information of the second region object 3022; or, the first depth information is determined to be the effective depth information of the object 302, so that the terminal 301 can more flexibly determine the effective depth information of the object 302.
In addition, the effective depth information of the first region object may be: average depth information of a plurality of points included in the first area object, or a median of the depth information of the plurality of points, and the like. The effective depth information of the second region object may refer to the expression of the effective depth information of the first region object.
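The per-region aggregation just described might be sketched as follows (the function name and mode argument are illustrative, not from the patent):

```python
import statistics

def region_depth(point_depths, mode="mean"):
    """Aggregate the depths of the points in a region into one effective value,
    using either the average or the median, as the description suggests."""
    if mode == "mean":
        return statistics.mean(point_depths)
    return statistics.median(point_depths)
```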
Optionally, the step of determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object includes:
if the difference value between the depth information of the first area object and the depth information of the second area object in the second depth information is greater than or equal to a preset threshold value, determining that the depth information of the first area object in the first depth information is effective depth information of the first area object, and determining that the depth information of the second area object in the second depth information is effective depth information of the second area object; or,
if the depth information of the first area object in the second depth information is less than or equal to a first preset value and the depth information of the second area object in the second depth information is greater than or equal to a second preset value, determining that the depth information of the first area object in the first depth information is effective depth information of the first area object, and determining that the depth information of the second area object in the second depth information is effective depth information of the second area object, wherein the first preset value is less than the second preset value.
When the difference between the depth information of the first region object and that of the second region object in the second depth information is greater than or equal to the preset threshold, the depth information of the first region object in the second depth information is smaller than that of the second region object, and in the first depth information acquired by the TOF camera, the depth information of the second region object may be far smaller than the depth information of the second region object in the second depth information. If the second region object is a background, this phenomenon, in which the accuracy of the depth information of the second region object obtained by the TOF camera is low, may be referred to as the "background zooming-in phenomenon".
In this way, when a difference value between the depth information of the first region object in the second depth information and the depth information of the second region object is greater than or equal to a preset threshold, it may be determined that the depth information of the first region object in the first depth information is effective depth information of the first region object, and that the depth information of the second region object in the second depth information is effective depth information of the second region object.
Similarly, when the depth information of the first region object in the second depth information is less than or equal to the first preset value, and the depth information of the second region object in the second depth information is greater than or equal to the second preset value, the above-mentioned "background zooming-in phenomenon" may also occur. For example: the first preset value may be 20 cm and the second preset value may be 1 m. Of course, the specific values are not limited herein. Note that the depth information may be understood as a distance between the subject and the terminal.
In this embodiment, when the accuracy of the depth information of the object to be photographed obtained by the TOF camera is low, that is, when the "background zoom-in phenomenon" occurs, it may be determined that the depth information of the first region object in the first depth information is effective depth information of the first region object, and that the depth information of the second region object in the second depth information is effective depth information of the second region object, so that the accuracy of the depth information of the object to be photographed may be improved.
Optionally, the step of determining that the first depth information is effective depth information of the object to be shot includes:
if the difference value between the depth information of the first area object and the depth information of the second area object in the second depth information is smaller than a preset threshold value, determining that the first depth information is effective depth information of the photographed object; or,
if the depth information of the first area object in the second depth information is greater than a first preset value, and/or the depth information of the second area object in the second depth information is less than a second preset value, determining that the first depth information is effective depth information of the shot object, wherein the first preset value is less than the second preset value.
When the difference between the depth information of the first region object and that of the second region object in the second depth information is smaller than the preset threshold, the "background zooming-in phenomenon" has not occurred in the first depth information acquired by the TOF camera. Because the phenomenon does not occur, and because the TOF camera can acquire the effective depth information of the photographed object without visible light, acquiring the depth information is more convenient and the manner of acquiring the effective depth information of the photographed object is more flexible.
If the depth information of the first region object in the second depth information is greater than the first preset value, and/or the depth information of the second region object in the second depth information is less than the second preset value, this likewise indicates that the "background zooming-in phenomenon" has not occurred in the first depth information acquired by the TOF camera.
In addition, the expression of the preset threshold, the first preset value and the second preset value can be referred to the expression in the previous embodiment, and is not described herein again.
In this embodiment, when the "background zooming-in phenomenon" does not occur in the first depth information of the object to be photographed, which is obtained by the TOF camera, that is, when the accuracy of the first depth information is high, it may be determined that the first depth information is effective depth information of the object to be photographed, so that the accuracy of the effective depth information of the object to be photographed may be high.
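Putting the two optional steps above together, the per-region selection could be sketched as follows (a hedged sketch: all names are mine, the difference is taken as second-region minus first-region depth since the first region is described as nearer, and the example preset values of 20 cm and 1 m come from the description):

```python
def choose_effective_depth(tof_first, tof_second, combined_first, combined_second,
                           threshold, preset_low, preset_high):
    """Return (effective depth of first region, effective depth of second region).

    tof_*      : per-region depths from the first depth information (TOF only)
    combined_* : per-region depths from the second depth information (TOF + color)
    """
    # Either condition signals the "background zooming-in phenomenon".
    background_pulled_in = (
        combined_second - combined_first >= threshold
        or (combined_first <= preset_low and combined_second >= preset_high)
    )
    if background_pulled_in:
        # First region from the TOF-only data, second region from TOF + color data.
        return tof_first, combined_second
    # Otherwise the TOF-only depth information is effective for the whole subject.
    return tof_first, tof_second
```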
Optionally, the filter of the color camera is a double-pass filter.
When the visible-light intensity is low, for example at night or in a dark room, the terminal can receive, through the color camera, the detection infrared light that penetrates the double-pass filter, and obtain the second depth information of the photographed object according to the amount of detected infrared light. The detection infrared light is the infrared light emitted by an infrared emitter in the TOF camera that returns into the color camera after encountering the photographed object.
Because the infrared filter in a conventional color camera cuts off the infrared band almost completely, that is, its infrared transmittance is very low, the filter of the color camera in this embodiment is a double-pass filter: by changing the design of the coating, it allows infrared light in the 830-1020 nm band and visible light in the 400-600 nm band to pass, so that the second depth information of the photographed object can be acquired.
In addition, the transmittance of the infrared light in the 830-1020 nm band is low, generally below 2% with an average of about 0.5%, while the transmittance of the visible light in the 400-600 nm band is high and can exceed 90%. Therefore, when pictures are taken through the TOF camera and the color camera, the infrared light transmitted through the filter of the color camera contributes little to the formed image and can basically be ignored.
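The pass bands stated above can be captured in a trivial check (illustrative only; the function name is not from the patent):

```python
def passes_double_pass_filter(wavelength_nm: float) -> bool:
    """True if the wavelength falls in either pass band of the double-pass
    filter described above: 830-1020 nm infrared or 400-600 nm visible."""
    return 830 <= wavelength_nm <= 1020 or 400 <= wavelength_nm <= 600
```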
In this embodiment, the filter of the color camera is a double-pass filter, so that even when the visible-light intensity is low, the TOF camera and the color camera can still obtain the second depth information of the photographed object, and the second depth information can be acquired more conveniently.
In the embodiment of the present invention, through steps 201 and 202, the manner of determining the effective depth information of the photographed object is more flexible, so that the performance of the terminal for acquiring the depth information is better.
Referring to fig. 4, fig. 4 is a structural diagram of a terminal according to an embodiment of the present invention, which can implement details of a depth information determining method according to the foregoing embodiment and achieve the same effect. As shown in fig. 4, the terminal 400 includes a terminal with a TOF camera and a color camera, where the TOF camera and the color camera are located on the same side of the terminal, and the terminal 400 includes:
an obtaining module 401, configured to obtain first depth information of a captured object through the TOF camera, and obtain second depth information of the captured object through the TOF camera and the color camera;
a determining module 402, configured to determine effective depth information of the captured object from the first depth information and the second depth information.
Optionally, referring to fig. 5, the photographed object includes a first region object and a second region object, and the determining module 402 includes:
the first determining sub-module 4021 is configured to determine that the depth information of the first area object in the first depth information is effective depth information of the first area object, and determine that the depth information of the second area object in the second depth information is effective depth information of the second area object, where the first preset value is smaller than the second preset value.
Optionally, referring to fig. 6, the determining module 402 includes: the second determining sub-module 4022 is configured to determine that the first depth information is effective depth information of the object to be photographed.
Optionally, the first determining sub-module 4021 is further configured to determine, if a difference between the depth information of the first area object and the depth information of the second area object in the second depth information is greater than or equal to a preset threshold, that the depth information of the first area object in the first depth information is effective depth information of the first area object, and that the depth information of the second area object in the second depth information is effective depth information of the second area object; or,
the first determining sub-module 4021 is further configured to determine, if the depth information of the first area object in the second depth information is less than or equal to a first preset value and the depth information of the second area object in the second depth information is greater than or equal to a second preset value, that the depth information of the first area object in the first depth information is effective depth information of the first area object, and that the depth information of the second area object in the second depth information is effective depth information of the second area object, where the first preset value is less than the second preset value.
Optionally, the second determining sub-module 4022 is further configured to determine that the first depth information is effective depth information of the photographed object, if a difference between the depth information of the first area object and the depth information of the second area object in the second depth information is smaller than a preset threshold; or,
the second determining sub-module 4022 is further configured to determine that the first depth information is effective depth information of the photographed object if the depth information of the first area object in the second depth information is greater than a first preset value, and/or the depth information of the second area object in the second depth information is less than a second preset value.
Optionally, the filter of the color camera is a double-pass filter.
The terminal provided by the embodiment of the present invention can implement each process implemented by the terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition. In this embodiment, the performance of the terminal for acquiring the depth information is also better.
Fig. 7 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 7 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 710 configured to:
acquiring first depth information of a shot object through a TOF camera, and acquiring second depth information of the shot object through the TOF camera and a color camera;
effective depth information of the subject is determined from the first depth information and the second depth information.
Optionally, the photographed object includes a first region object and a second region object, and the step, performed by the processor 710, of determining the effective depth information of the photographed object from the first depth information and the second depth information includes:
determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object; or
determining that the first depth information is effective depth information of the photographed object, where the first preset value is smaller than the second preset value.
Optionally, the step of determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object, which is executed by the processor 710, includes:
if the difference between the depth information of the first region object and the depth information of the second region object in the second depth information is greater than or equal to a preset threshold, determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object; or
if the depth information of the first region object in the second depth information is less than or equal to a first preset value and the depth information of the second region object in the second depth information is greater than or equal to a second preset value, determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object, where the first preset value is less than the second preset value.
Optionally, the processor 710 performs the step of determining that the first depth information is effective depth information of the photographed object, including:
if the difference between the depth information of the first region object and the depth information of the second region object in the second depth information is smaller than a preset threshold, determining that the first depth information is effective depth information of the photographed object; or
if the depth information of the first region object in the second depth information is greater than a first preset value and/or the depth information of the second region object in the second depth information is less than a second preset value, determining that the first depth information is effective depth information of the photographed object.
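As an illustrative aid (not part of the claimed method), the selection rules above can be sketched as follows. All names and the sample values used later are hypothetical; `first_depth` stands for the TOF-only depth map and `second_depth` for the depth map obtained through the TOF camera and the color camera together, each represented here as a simple dict keyed by region:

```python
# Illustrative sketch only -- all names and sample values are hypothetical,
# not taken from the patent text. first_depth: TOF-only depth map;
# second_depth: depth map from the TOF camera and color camera together.

def select_effective_depth(first_depth, second_depth,
                           first_region, second_region,
                           threshold, first_preset, second_preset):
    """Apply the selection rules described above (first_preset < second_preset)."""
    d1 = second_depth[first_region]   # first-region depth in the second map
    d2 = second_depth[second_region]  # second-region depth in the second map

    # Mixed result: the regions are far apart, or the first region is very
    # near (<= first_preset) while the second is very far (>= second_preset):
    # take the first-region depth from the TOF-only map and the second-region
    # depth from the TOF + color map.
    if abs(d2 - d1) >= threshold or (d1 <= first_preset and d2 >= second_preset):
        return {first_region: first_depth[first_region],
                second_region: second_depth[second_region]}

    # Otherwise the TOF-only (first) depth information is used as a whole.
    return dict(first_depth)
```

Under these rules, a scene with widely separated near and far regions yields the mixed result, while a scene whose regions lie at similar depths falls back to the TOF-only depth information.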
Optionally, the filter of the color camera is a double-pass filter.
The mobile terminal in the embodiment of the present invention can acquire the depth information of the photographed object through the TOF camera alone, or through the TOF camera and the color camera together, so the performance of acquiring the depth information is better.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission and reception process or a call process; specifically, it receives downlink data from a base station and forwards the downlink data to the processor 710 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access via the network module 702, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into an audio signal and output it as sound. Moreover, the audio output unit 703 may provide audio output related to a specific function performed by the mobile terminal 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the graphics processor 7041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 701 and output.
The mobile terminal 700 also includes at least one sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the mobile terminal 700 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands from the processor 710. In addition, the touch panel 7071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 7071, the user input unit 707 may include other input devices 7072. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 708 is an interface through which an external device is connected to the mobile terminal 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 700 or may be used to transmit data between the mobile terminal 700 and external devices.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 710 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby integrally monitoring the mobile terminal. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The mobile terminal 700 may also include a power supply 711 (e.g., a battery) for powering the various components. The power supply 711 may be logically coupled to the processor 710 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
In addition, the mobile terminal 700 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program is executed by the processor 710 to implement each process of the above depth information determining method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned depth information determining method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A depth information determination method, applied to a terminal comprising a time-of-flight (TOF) camera and a color camera, wherein the TOF camera and the color camera are positioned on the same side of the terminal, the method comprising:
acquiring first depth information of a photographed object through the TOF camera, and acquiring second depth information of the photographed object through the TOF camera and the color camera;
determining effective depth information of the photographed object from the first depth information and the second depth information;
the subject includes a first region subject and a second region subject, and the step of determining effective depth information of the subject from the first depth information and the second depth information includes:
determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object; alternatively, the first and second electrodes may be,
determining that the first depth information is effective depth information of the shot object;
the acquiring, by the TOF camera, first depth information of a photographed object, and acquiring, by the TOF camera and the color camera, second depth information of the photographed object, include:
acquiring first depth information of the first region object and the second region object through the TOF camera; and photographing the second region object through the TOF camera and photographing the second region object through the color camera, to acquire second depth information of the photographed object.
2. The method of claim 1, wherein the determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and the determining that the depth information of the second region object in the second depth information is effective depth information of the second region object comprises:
if the difference between the depth information of the first region object and the depth information of the second region object in the second depth information is greater than or equal to a preset threshold, determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object; or
if the depth information of the first region object in the second depth information is less than or equal to a first preset value and the depth information of the second region object in the second depth information is greater than or equal to a second preset value, determining that the depth information of the first region object in the first depth information is effective depth information of the first region object, and determining that the depth information of the second region object in the second depth information is effective depth information of the second region object, wherein the first preset value is less than the second preset value.
3. The method of claim 1, wherein the step of determining that the first depth information is effective depth information of the photographed object includes:
if the difference between the depth information of the first region object and the depth information of the second region object in the second depth information is smaller than a preset threshold, determining that the first depth information is effective depth information of the photographed object; or
if the depth information of the first region object in the second depth information is greater than a first preset value, and/or the depth information of the second region object in the second depth information is less than a second preset value, determining that the first depth information is effective depth information of the photographed object, wherein the first preset value is less than the second preset value.
4. The method of any one of claims 1-3, wherein the filter of the color camera is a double-pass filter.
5. A terminal, comprising a TOF camera and a color camera, wherein the TOF camera and the color camera are positioned on the same side of the terminal, and the terminal further comprises:
an acquisition module, configured to acquire first depth information of a photographed object through the TOF camera and acquire second depth information of the photographed object through the TOF camera and the color camera;
a determination module, configured to determine effective depth information of the photographed object from the first depth information and the second depth information;
the subject includes a first region subject and a second region subject, and the determination module includes:
a first determining sub-module, configured to determine that depth information of the first region object in the first depth information is effective depth information of the first region object, and determine that depth information of the second region object in the second depth information is effective depth information of the second region object; alternatively, the first and second electrodes may be,
the second determining submodule is used for determining that the first depth information is effective depth information of the shot object;
the acquisition module is further configured to acquire first depth information of the first region object and the second region object through the TOF camera; and shooting the second area object through the TOF camera, and shooting the second area object through the color camera to acquire second depth information of the shot object.
6. The terminal of claim 5, wherein the first determining sub-module is further configured to determine, if the difference between the depth information of the first region object and the depth information of the second region object in the second depth information is greater than or equal to a preset threshold, that the depth information of the first region object in the first depth information is effective depth information of the first region object and that the depth information of the second region object in the second depth information is effective depth information of the second region object; or
the first determining sub-module is further configured to determine, if the depth information of the first region object in the second depth information is less than or equal to a first preset value and the depth information of the second region object in the second depth information is greater than or equal to a second preset value, that the depth information of the first region object in the first depth information is effective depth information of the first region object and that the depth information of the second region object in the second depth information is effective depth information of the second region object, wherein the first preset value is less than the second preset value.
7. The terminal of claim 5, wherein the second determining sub-module is further configured to determine that the first depth information is effective depth information of the photographed object if the difference between the depth information of the first region object and the depth information of the second region object in the second depth information is smaller than a preset threshold; or
the second determining sub-module is further configured to determine that the first depth information is effective depth information of the photographed object if the depth information of the first region object in the second depth information is greater than a first preset value and/or the depth information of the second region object in the second depth information is less than a second preset value, wherein the first preset value is less than the second preset value.
8. The terminal according to any one of claims 5 to 7, wherein the filter of the color camera is a double-pass filter.
9. A mobile terminal, comprising: memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the depth information determination method according to any of claims 1-4 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the depth information determination method according to any one of claims 1 to 4.
CN201811509856.6A 2018-12-11 2018-12-11 Depth information determination method and terminal Active CN109544616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811509856.6A CN109544616B (en) 2018-12-11 2018-12-11 Depth information determination method and terminal

Publications (2)

Publication Number Publication Date
CN109544616A CN109544616A (en) 2019-03-29
CN109544616B (en) 2021-02-26

Family

ID=65854007

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant