WO2020238569A1 - Control method and control device for terminal, terminal, and computer readable storage medium - Google Patents

Control method and control device for terminal, terminal, and computer readable storage medium

Info

Publication number
WO2020238569A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth information
image
terminal
current scene
depth
Prior art date
Application number
PCT/CN2020/088888
Other languages
French (fr)
Chinese (zh)
Inventor
王路
吕向楠
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Publication of WO2020238569A1 publication Critical patent/WO2020238569A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Definitions

  • This application relates to the field of three-dimensional imaging technology, and more specifically, to a terminal control method, a terminal control device, a terminal, and a computer-readable storage medium.
  • A depth camera can obtain the depth information of objects in a scene by projecting laser light into the scene and receiving the laser light reflected by those objects. Normally, the power and specifications of the laser are designed in accordance with safety standards, and the safety margins for the human eyes and skin are relatively ample.
  • the embodiments of the present application provide a terminal control method, a terminal control device, a terminal, and a computer-readable storage medium.
  • The control method of an embodiment of the present application is used for a terminal. The terminal includes a depth camera, and the depth camera includes a light transmitter and a light receiver. The control method includes: controlling the light transmitter to emit a predetermined number of frames of test laser light to the current scene; controlling the light receiver to receive the test laser light reflected by the current scene; obtaining the depth information of the current scene according to the received test laser light; determining whether there is a depth in the depth information that is less than a preset safety distance; and if so, controlling the terminal to enter a safe mode.
  • The control device of an embodiment of this application is used in a terminal that includes a depth camera with a light transmitter and a light receiver. The control device includes a first control module, a second control module, an acquisition module, a first judgment module, and a third control module. The first control module is used to control the light transmitter to emit a predetermined number of frames of test laser light to the current scene; the second control module is used to control the light receiver to receive the test laser light reflected by the current scene; the acquisition module is used to acquire the depth information of the current scene according to the received test laser light; the first judgment module is used to determine whether there is a depth in the depth information that is less than a preset safety distance; and the third control module is used to control the terminal to enter a safe mode if there is a depth in the depth information that is less than the preset safety distance.
  • The terminal of an embodiment of the present application includes a depth camera and a processor. The depth camera includes a light transmitter and a light receiver, and the processor is configured to: control the light transmitter to emit a predetermined number of frames of test laser light to the current scene; control the light receiver to receive the test laser light reflected by the current scene; obtain the depth information of the current scene according to the received test laser light; determine whether there is a depth in the depth information that is less than a preset safety distance; and if so, control the terminal to enter a safe mode.
  • In these embodiments, the light transmitter is controlled to emit the test laser light and the light receiver receives the reflected test laser light. Depth information is first obtained from the reflected test laser light, and it is then determined whether the depth information contains a depth smaller than the preset safety distance. If it does, it is judged that the current laser, if irradiated onto the user's body (for example, the eyes), could easily cause harm, and the terminal is further controlled to enter the safe mode, so that even when the user is close to the terminal, using the terminal remains safe.
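  • As an illustration only, the following Python sketch mirrors the overall logic described above (emit test frames, obtain depth, compare with the safety distance, enter the safe mode). The `depth_camera` and `terminal` objects, their methods (`emit_test_frame`, `read_depth_frame`, `enter_safe_mode`), and the numeric values are hypothetical placeholders, not an interface disclosed in this application.
```python
import numpy as np

SAFETY_DISTANCE_MM = 250   # hypothetical preset safety distance (250 mm is one example given)
TEST_FRAME_COUNT = 2       # hypothetical predetermined number of test frames

def safety_check(depth_camera, terminal):
    """Emit low-power test frames, then decide whether the terminal must enter safe mode."""
    depth_maps = []
    for _ in range(TEST_FRAME_COUNT):
        depth_camera.emit_test_frame(low_power=True)        # assumed driver call
        depth_maps.append(np.asarray(depth_camera.read_depth_frame(), dtype=float))
    # Depth information of the current scene, here assumed to be millimetres per pixel.
    depth = np.minimum.reduce(depth_maps)
    # Is any valid depth smaller than the preset safety distance?
    valid = depth > 0                                       # 0 means "no laser return"
    too_close = bool(valid.any()) and float(depth[valid].min()) < SAFETY_DISTANCE_MM
    # If so, enter the safe mode (prompt, reduced frequency/amplitude, and so on).
    if too_close:
        terminal.enter_safe_mode()                          # assumed terminal API
    return too_close
```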
  • FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a system architecture of a terminal according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a terminal control method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of modules of a terminal control device according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of laser pulses emitted by a terminal according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a terminal control method according to an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a terminal control method according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of modules of a terminal control device according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a scene of depth information acquired by a terminal according to an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a terminal control method according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of modules of a terminal control device according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a scene of a terminal control method according to an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a terminal control method according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of modules of a terminal control device according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of the principle by which a terminal acquires depth information in a set mode according to an embodiment of the present application.
  • FIG. 16 is a schematic diagram of interaction between a non-volatile computer-readable storage medium and a processor in an embodiment of the present application.
  • In this application, unless expressly specified and limited otherwise, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact through an intermediate medium.
  • Moreover, the first feature being "on", "above", or "over" the second feature may mean that the first feature is directly above or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature.
  • The first feature being "under", "below", or "beneath" the second feature may mean that the first feature is directly below or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
  • the terminal 10 of the embodiment of the present application includes a housing 15, a depth camera 11 and a processor 12.
  • the terminal 10 may be a terminal such as a mobile phone, a tablet computer, a notebook computer, a smart watch, etc.
  • The description of this application takes a mobile phone as an example of the terminal 10; it can be understood that the specific form of the terminal 10 is not limited to a mobile phone.
  • Both the depth camera 11 and the processor 12 can be installed on the housing 15.
  • the housing 15 includes a front 151 and a back 152, and the front 151 and the back 152 are opposite to each other.
  • the front 151 can also be used to install a display screen 14, which can be used to display images, text and other information.
  • The depth camera 11 can be installed on the front 151 to facilitate taking selfies or making video calls; the depth camera 11 can also be installed on the back 152 to facilitate photographing scenery and other people; in addition, independently working depth cameras 11 can be installed on both the front 151 and the back 152.
  • the depth camera 11 includes a light transmitter 111 and a light receiver 112.
  • The light transmitter 111 of the depth camera 11 can emit laser light, such as infrared laser light, which is reflected after reaching objects in the scene. The reflected laser light can be received by the light receiver 112, and the processor 12 can calculate the depth information of the objects according to the laser light emitted by the light transmitter 111 and the laser light received by the light receiver 112.
  • In one example, the depth camera 11 may obtain depth information through a time-of-flight (TOF) ranging method; in another example, the depth camera 11 may obtain depth information through the structured-light ranging principle. The description of this application takes structured-light ranging as the example.
  • In the example shown in FIG. 1, the depth camera 11 is installed on the back 152 of the housing 15. It can be understood that a depth camera 11 installed on the back 152 (that is, a rear depth camera 11) needs to support normal use for photographing distant objects; therefore, the optical power of the laser light emitted by the light transmitter 111 usually needs to be set relatively high to satisfy the accuracy of obtaining depth information. However, the rear depth camera 11 is also required to be able to photograph nearby objects or people, and at close distances a laser with higher optical power can easily cause harm to people. Therefore, for the rear depth camera 11, ensuring safe use of the depth camera 11 is particularly important and difficult.
  • the terminal 10 may further include a visible light camera 13.
  • the visible light camera 13 may include a telephoto camera and a wide-angle camera, or the visible light camera 13 may include a telephoto camera, a wide-angle camera, and a periscope camera.
  • the visible light camera 13 can be arranged close to the depth camera 11.
  • For example, the visible light camera 13 can be arranged between the light transmitter 111 and the light receiver 112, so that the light transmitter 111 and the light receiver 112 are farther apart, which increases the baseline length of the depth camera 11 and improves the accuracy of the acquired depth information.
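  • As a rough engineering note (an assumption, not a statement made in this application), the benefit of a longer baseline in triangulation-based depth sensing is often summarized by the following approximation, in which the depth error grows with the square of the depth and shrinks with the baseline:
```latex
% Z: depth, f: focal length in pixels, b: baseline between emitter and receiver,
% \delta d: disparity (matching) uncertainty in pixels.
\delta Z \;\approx\; \frac{Z^{2}}{f\, b}\,\delta d
```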
  • both the optical transmitter 111 and the optical receiver 112 are connected to the processor 12.
  • the processor 12 may provide an enable signal for the optical transmitter 111.
  • the processor 12 may provide an enable signal for the driver 16, wherein the driver 16 is used to drive the optical transmitter 111 to emit laser light.
  • the optical receiver 112 is connected to the processor 12 through an I2C bus.
  • When the light receiver 112 is used in cooperation with the light transmitter 111, in one example the light receiver 112 can control the projection timing of the light transmitter 111 through a strobe signal, where the strobe signal is generated according to the timing at which the light receiver 112 acquires captured images. The strobe signal can be regarded as an electrical signal alternating between high and low levels, and the light transmitter 111 projects laser light according to the laser projection timing indicated by the strobe signal.
  • Specifically, the processor 12 can send an image acquisition instruction through the I2C bus to enable the depth camera 11. After the light receiver 112 receives the image acquisition instruction, it controls the switching device 17 through the strobe signal: if the strobe signal is at a high level, the switching device 17 sends a pulse signal (PWM) to the driver 16, and the driver 16 drives the light transmitter 111 to project laser light into the scene according to the pulse signal; if the strobe signal is at a low level, the switching device 17 stops sending the pulse signal to the driver 16, and the light transmitter 111 does not project laser light. Alternatively, the polarity may be inverted: when the strobe signal is at a low level, the switching device 17 sends the pulse signal to the driver 16 and the driver 16 drives the light transmitter 111 to project laser light into the scene, and when the strobe signal is at a high level, the switching device 17 stops sending the pulse signal to the driver 16 and the light transmitter 111 does not project laser light.
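  • The gating just described can be summarized by the following minimal sketch (an illustration under assumptions; the real control runs in hardware through the switching device 17 and the driver 16), covering both the active-high and the active-low variants:
```python
def pulse_enabled(strobe_is_high: bool, active_high: bool = True) -> bool:
    """True if the switching device 17 should forward the pulse signal to the driver 16."""
    return strobe_is_high if active_high else not strobe_is_high

def drive_emitter(strobe_samples, send_pulse, active_high=True):
    # send_pulse is a hypothetical callback standing in for the pulse (PWM) signal;
    # whenever it is not called, the light transmitter 111 does not project laser light.
    for level_is_high in strobe_samples:
        if pulse_enabled(level_is_high, active_high):
            send_pulse()
```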
  • Alternatively, the strobe signal may not be used when the light receiver 112 and the light transmitter 111 work in cooperation: the processor 12 sends an image acquisition instruction to the light receiver 112 and simultaneously sends a laser projection instruction to the driver 16. The light receiver 112 starts to acquire captured images after receiving the image acquisition instruction, and the driver 16 drives the light transmitter 111 to project laser light after receiving the laser projection instruction.
  • When the light transmitter 111 projects laser light, the laser light forms a laser pattern with spots (a speckle pattern) that is projected onto objects in the scene.
  • The light receiver 112 collects the laser pattern reflected by the objects to obtain a speckle image and sends the speckle image to the processor 12 through the Mobile Industry Processor Interface (MIPI).
  • the processor 12 may calculate the depth information according to the speckle image and the reference image pre-stored in the processor 12.
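  • The text above does not spell out the depth computation itself. As a heavily simplified sketch of one common structured-light approach (an assumption, not the disclosed algorithm), the speckle image can be block-matched against the pre-stored reference image to obtain a disparity map, which is then converted to depth using the reference-plane distance, the focal length, and the baseline:
```python
import numpy as np

def disparity_map(speckle, reference, block=11, search=30):
    """Horizontal block matching (sum of absolute differences) against the reference image."""
    h, w = speckle.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r + search, w - r):
            patch = speckle[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            costs = [np.abs(patch - reference[y - r:y + r + 1,
                                              x - d - r:x - d + r + 1].astype(np.int32)).sum()
                     for d in range(search)]
            disp[y, x] = float(np.argmin(costs))
    return disp

def depth_from_disparity(disp, z_ref_mm, focal_px, baseline_mm):
    # One common model (sign convention assumed): 1/Z = 1/Z_ref + d / (f * b).
    inv_z = 1.0 / z_ref_mm + disp / (focal_px * baseline_mm)
    return 1.0 / inv_z
```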
  • control method of the embodiment of the present application can be used to control the aforementioned terminal 10.
  • The control method includes the steps:
  • 031 Control the light transmitter 111 to emit a predetermined number of frames of test laser light to the current scene;
  • 032 Control the light receiver 112 to receive the test laser light reflected by the current scene;
  • 033 Obtain the depth information of the current scene according to the received test laser light;
  • 034 Determine whether there is a depth in the depth information that is less than a preset safety distance; and
  • 035 If so, control the terminal 10 to enter a safe mode.
  • the control device 20 of the embodiment of the present application can be used to control the aforementioned terminal 10.
  • The control device 20 includes a first control module 21, a second control module 22, an acquisition module 23, a first judgment module 24, and a third control module 25.
  • the first control module 21 can be used to implement step 031; the second control module 22 can be used to implement step 032; the acquisition module 23 can be used to implement step 033; the first judgment module 24 can be used to implement step 034; the third control module 25 can be used to implement Step 035.
  • That is, the first control module 21 can be used to control the light transmitter 111 to emit a predetermined number of frames of test laser light to the current scene; the second control module 22 can be used to control the light receiver 112 to receive the test laser light reflected by the current scene; the acquisition module 23 can be used to obtain the depth information of the current scene according to the received test laser light; the first judgment module 24 can be used to determine whether there is a depth in the depth information that is less than the preset safety distance; and the third control module 25 can be used to control the terminal 10 to enter the safe mode when there is a depth in the depth information that is less than the preset safety distance.
  • The processor 12 in the embodiment of the present application can be used to implement steps 031, 032, 033, 034, and 035. That is, the processor 12 can be used to: control the light transmitter 111 to emit a predetermined number of frames of test laser light to the current scene; control the light receiver 112 to receive the test laser light reflected by the current scene; obtain the depth information of the current scene according to the received test laser light; determine whether there is a depth in the depth information that is less than the preset safety distance; and if so, control the terminal 10 to enter the safe mode.
  • the processor 12 first controls the light emitter 111 to emit a predetermined number of test lasers to the current scene.
  • The predetermined number of frames may be one frame, in which case the processor 12 may send one pulse control signal to the light transmitter 111; or the predetermined number of frames may be multiple frames, in which case the processor 12 may send multiple pulse control signals to the light transmitter 111.
  • The optical power of the test laser light can be set to be lower than the optical power of the laser light emitted by the light transmitter 111 in normal use. The lower optical power of the test laser light can be achieved, for example, by controlling the amplitude of the test laser light to be small, by controlling the duty cycle of the test laser light to be small, and so on.
  • the processor 12 controls the light receiver 112 to receive the test laser light reflected by the current scene.
  • the processor 12 can control the optical transmitter 111 and the optical receiver 112 to be turned on at the same time, that is, the processor 12 can implement steps 031 and 032 at the same time.
  • The laser light emitted by the light transmitter 111 has a specific pattern (such as a speckle pattern). The laser light is reflected by objects and then received by the light receiver 112, and after the light receiver 112 collects the laser light reflected by the objects, a speckle image is formed.
  • the processor 12 obtains the depth information of the current scene according to the received test laser.
  • a pre-calibrated reference image may be stored in the memory of the terminal 10, and the processor 12 processes the aforementioned speckle image and reference image to obtain a depth image of the current scene, where the depth image contains depth information.
  • the depth image includes multiple pixels, and the pixel value of each pixel is the depth of the current scene corresponding to the pixel.
  • For example, if the pixel value of a certain pixel is 20 and that pixel corresponds to point A in the scene, the pixel value 20 means that the distance from the depth camera 11 to point A is 20 in the corresponding depth unit. It can be understood that the smaller the pixel value, the smaller the distance between the corresponding position in the current scene and the depth camera 11.
  • the processor 12 determines whether there is a depth less than a preset safety distance in the depth information.
  • The safety distance can be set according to relevant safety standards and user attributes, for example, according to the maximum laser energy per unit time that the user's eyes can withstand, according to the target user population of the terminal 10, according to the target usage scenario of the terminal 10, and so on.
  • the safety distance can be set to any distance such as 100 mm, 200 mm, 250 mm, 1000 mm, etc., and there is no restriction here.
  • Specifically, the depth information may include the depths of multiple positions in the current scene, and the processor 12 may compare the depth of each position with the safety distance; when the depth of at least one position is less than the safety distance, it is determined that the object at that position (such as a person) is relatively susceptible to laser damage.
  • the processor 12 controls the terminal 10 to enter the safety mode when there is a depth less than the preset safety distance in the depth information.
  • The terminal 10 is controlled to enter the safe mode to ensure that the objects (such as people) in the current scene will not be harmed.
  • The processor 12 controlling the terminal 10 to enter the safe mode may be: controlling the terminal 10 to send a prompt signal (for example, controlling the display screen 14 to display a prompt window asking the user to move farther away, controlling the speaker of the terminal 10 to play a prompt voice asking the user to move farther away, controlling the vibration motor of the terminal 10 to produce a vibration prompting the user to move farther away, and so on); controlling the light transmitter 111 to emit laser light at a preset safe frequency; or controlling the light transmitter 111 to emit laser light at a preset safe amplitude.
  • In FIG. 5, L1 is the waveform of the laser light emitted by the light transmitter 111 under the default control of the processor 12; a high level indicates that the light transmitter 111 is emitting laser light, and a low level indicates that the light transmitter 111 is not emitting laser light.
  • L2 is the waveform of the laser light emitted by the light transmitter 111 when the processor 12 controls it at a preset safe frequency. The safe frequency may be lower than the default frequency of the laser light emitted by the light transmitter 111, for example 1/2 or 1/3 of the default frequency, so that the laser energy received by the user per unit time is low and harm to the user is avoided.
  • L3 in FIG. 5 is the waveform of the laser light emitted by the light transmitter 111 when the processor 12 controls it at a preset safe amplitude, where the safe amplitude may be smaller than the default amplitude of the laser light emitted by the light transmitter 111, for example 2/3, 1/2, or 1/3 of the default amplitude.
  • L4 in FIG. 5 is the waveform of the laser light emitted by the light transmitter 111 when the processor 12 controls it at both the safe frequency and the safe amplitude. It can be understood that after the waveform of the laser light is changed, the depth camera 11 can still be used to obtain a depth image of the scene, so the impact on the user experience is small.
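  • For illustration only (the default values and scale factors below are assumptions), the waveforms of FIG. 5 can be thought of as one default pulse train plus scaled variants of its frequency and amplitude:
```python
from dataclasses import dataclass

@dataclass
class LaserWaveform:
    frequency_hz: float   # pulse repetition frequency
    amplitude: float      # drive amplitude, 1.0 = default

DEFAULT = LaserWaveform(frequency_hz=30.0, amplitude=1.0)   # hypothetical defaults (waveform L1)

def safe_waveform(freq_scale=1.0, amp_scale=1.0, base=DEFAULT):
    """Return a waveform with reduced frequency and/or amplitude for the safe mode."""
    return LaserWaveform(frequency_hz=base.frequency_hz * freq_scale,
                         amplitude=base.amplitude * amp_scale)

# Variants corresponding to FIG. 5 (scale factors assumed):
L2 = safe_waveform(freq_scale=0.5)                   # safe frequency only
L3 = safe_waveform(amp_scale=0.5)                    # safe amplitude only
L4 = safe_waveform(freq_scale=0.5, amp_scale=0.5)    # both reduced
```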
  • In the control method, the control device 20, the terminal 10, and the computer-readable storage medium of the embodiments of the present application, the light transmitter 111 is controlled to emit the test laser light and the light receiver 112 receives the reflected test laser light; depth information is first obtained from the reflected test laser light, and it is then determined from the depth information whether there is a depth smaller than the preset safety distance. If there is, the terminal 10 is further controlled to enter the safe mode, so that even when the user's usage distance is relatively short, using the terminal 10 remains safe.
  • In addition, because the user's usage distance is pre-detected by the depth camera 11 itself, there is no need to add a separate distance detection device for the pre-detection, which reduces the size and manufacturing cost of the terminal 10.
  • The control method further includes the steps:
  • 066 Control the terminal 10 to obtain the depth information of the current scene in a set mode.
  • 067 Determine whether there is a depth less than the preset safety distance in the depth information acquired in the set mode.
  • If there is, step 065 can be implemented again: controlling the terminal 10 to enter the safe mode.
  • the third control module 25 can also be used to implement step 066, and the first judgment module 24 can also be used to implement step 067.
  • That is, the third control module 25 can also be used to control the terminal 10 to obtain the depth information of the current scene in a set mode if there is no depth in the depth information that is less than the preset safety distance, and the first judgment module 24 can also be used to determine whether there is a depth less than the preset safety distance in the depth information acquired in the set mode.
  • the third control module 25 can also be used to implement step 065 when determining that the depth information acquired in the set mode has a depth less than the preset safety distance, that is, control the terminal 10 to enter the safety mode.
  • For steps 061, 062, 063, 064, and 065 in FIG. 6, please refer to the description of steps 031, 032, 033, 034, and 035 in the specification of this application, which will not be repeated here.
  • the processor 12 may control the depth camera 11 to acquire the depth information of the current scene in a set mode.
  • the set mode may be the default working mode of the depth camera 11 of the terminal 10, and the set mode includes information such as the set waveform of the laser emitted by the light transmitter 111, such as the L1 waveform shown in FIG. 5.
  • The control method further includes step 076: judging, according to the depth information, whether there are human eyes in the current scene. When it is judged that there are human eyes in the current scene, step 074 is implemented.
  • The control device 20 further includes a second judgment module 26, which can be used to implement step 076; that is, the second judgment module 26 can be used to judge, according to the depth information, whether there are human eyes in the current scene. When it is judged that there are human eyes in the current scene, the first judgment module 24 implements step 074.
  • the processor 12 may also be used to implement step 076, that is, the processor 12 may also be used to determine whether there are human eyes in the current scene according to the depth information. When judging that there are human eyes in the current scene, the processor 12 implements step 074.
  • For steps 071, 072, 073, 074, and 075 in FIG. 7, please refer to the description of steps 031, 032, 033, 034, and 035 in the specification of this application, which will not be repeated here.
  • Since the human eye's tolerance to laser light is significantly lower than that of the skin on the rest of the human body, when a person is harmed by laser light it is often the eyes that are harmed first. Therefore, it can first be judged whether there are human eyes in the current scene, and only then whether the current usage distance is less than the safety distance. In one example, if it is determined that there are no human eyes, the processor 12 may directly implement step 076 to improve the timeliness of obtaining depth information.
  • Specifically, the depth information can be characterized by the pixel values of multiple pixels in the depth image, and the processor 12 can match the distribution of these pixel values against a preset human eye model, for example by checking whether there is an area in the depth image whose degree of matching with the human eye model exceeds a predetermined threshold. If such an area exists, it is determined that there are human eyes in the current scene; if no area in the depth image has a matching degree exceeding the predetermined threshold, it is determined that there are no human eyes in the current scene.
  • The depth image I includes multiple pixels P, and the pixel value of each pixel P (such as 21, 22, 23, or 24) represents the depth of the position corresponding to that pixel P.
  • the depth distribution of the object corresponding to the area D is roughly that the depth of the middle strip area is smaller, and the depth around the strip area gradually increases.
  • This depth distribution has a high degree of matching with the human eye model of an eye looking directly at the depth camera 11, so it is judged that there are human eyes in the current scene, and the area D corresponds to the position of the human eyes in the current scene.
  • In other examples, the processor 12 can also use a visible light image of the current scene acquired by the visible light camera 13 to jointly confirm whether there are human eyes in the current scene. Specifically, it can at the same time judge whether there are human eyes in the current scene by recognizing feature information in the visible light image; only when the presence of human eyes is recognized through both the visible light image and the depth information is it determined that there are real human eyes in the current scene, which excludes situations where there is only a photo of human eyes or only a mold of human eyes.
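  • As a minimal sketch of the matching idea described above (an assumption; the application does not disclose a concrete matching algorithm), a preset human-eye depth template can be slid over the depth image, and a region is counted as a match when its normalized similarity to the template exceeds a threshold:
```python
import numpy as np

def _normalize(patch):
    patch = patch - patch.mean()
    norm = np.linalg.norm(patch)
    return patch / norm if norm > 0 else patch

def find_eye_regions(depth_image, eye_template, threshold=0.8):
    """Return (row, col, score) of regions whose depth distribution matches the eye template."""
    H, W = depth_image.shape
    h, w = eye_template.shape
    tmpl = _normalize(eye_template.astype(float))
    hits = []
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = _normalize(depth_image[y:y + h, x:x + w].astype(float))
            score = float((window * tmpl).sum())   # correlation of mean-centered patches
            if score > threshold:
                hits.append((y, x, score))
    return hits
```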
  • 01031 Obtain the first depth information of the current scene according to the received test laser of the previous frame;
  • 01032 Obtain the second depth information of the current scene according to the received test laser of the next frame.
  • 01033 Calculate the depth information of the current scene at the time when the light transmitter 111 emits the next frame of laser light, according to the first depth information, the second depth information, the emission time of the previous frame of test laser light, and the emission time of the next frame of test laser light.
  • the predetermined number of frames includes at least two frames
  • the acquisition module 23 includes a first acquisition unit 231, a second acquisition unit 232, and a first calculation unit 233.
  • The first acquisition unit 231 can be used to implement step 01031, the second acquisition unit 232 can be used to implement step 01032, and the first calculation unit 233 can be used to implement step 01033.
  • That is, the first acquisition unit 231 can be used to acquire the first depth information of the current scene according to the received previous frame of test laser light; the second acquisition unit 232 can be used to acquire the second depth information of the current scene according to the received next frame of test laser light; and the first calculation unit 233 can be used to calculate the depth information of the current scene at the time when the light transmitter 111 emits the next frame of laser light, according to the first depth information, the second depth information, the emission time of the previous frame of test laser light, and the emission time of the next frame of test laser light.
  • the predetermined number of frames includes at least two frames
  • The processor 12 may also be used to implement steps 01031, 01032, and 01033.
  • That is, the processor 12 may be used to: obtain the first depth information of the current scene according to the received previous frame of test laser light; obtain the second depth information of the current scene according to the received next frame of test laser light; and calculate the depth information of the current scene at the time when the light transmitter 111 emits the next frame of laser light, according to the first depth information, the second depth information, the emission time of the previous frame of test laser light, and the emission time of the next frame of test laser light.
  • For steps 0101, 0102, 0104, and 0105 in FIG. 10, refer to the description of steps 031, 032, 034, and 035 in the specification of this application, which will not be repeated here; steps 01031, 01032, and 01033 may be regarded as sub-steps of step 033.
  • The depth camera 11 may then acquire depth information in the set mode, that is, it may emit laser light into the current scene at the default optical power. However, there is a time difference between the light transmitter 111 emitting the test laser light and emitting laser light at the default optical power, which may cause the user to be less than the safety distance away from the depth camera 11 by the time the laser light is emitted at the default optical power, so that the user is harmed by the laser light.
  • For example, assume that the emission time of the previous frame of test laser light is t1 and that the first depth information of an object T in the current scene at time t1 is d1, and that the emission time of the next frame of test laser light is t2 and that the second depth information of the object T at time t2 is d2.
  • the method of obtaining the first depth information d1 and the second depth information d2 of the current scene respectively can refer to the above description of the processor 12 implementing step 033, which will not be repeated here.
  • The terms "previous frame" and "next frame" only indicate that the two frames of test laser light are emitted in sequence; they do not mean that the previous frame and the next frame must be two adjacent frames.
  • When the object T and the terminal 10 are in relative motion, for example when the terminal 10 is stationary and the object T (such as a person or object being photographed) is approaching the terminal 10, or when the object T does not move and the user holding the terminal 10 is approaching the object T, the relative distance between the object T and the terminal 10 is constantly changing.
  • From the first depth information d1 at time t1 and the second depth information d2 at time t2, the depth of the current scene at the time t3 at which the next frame of laser light would be emitted can be calculated. If the judgment result in step 0104 is yes, it means that the next frame of laser light cannot be emitted at t3, and the terminal 10 needs to enter the safe mode.
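  • The exact formula is not reproduced in the surviving text; under an assumed constant-velocity model, the calculation in step 01033 can be sketched as a linear extrapolation from (t1, d1) and (t2, d2) to the planned emission time t3 of the next frame:
```python
def predicted_depth(d1, d2, t1, t2, t3):
    """Linear extrapolation of depth to time t3 (constant-velocity assumption)."""
    velocity = (d2 - d1) / (t2 - t1)        # negative if the object is approaching
    return d2 + velocity * (t3 - t2)

def may_emit_next_frame(d1, d2, t1, t2, t3, safety_distance_mm=250):
    # 250 mm is one example safety distance mentioned in the description above.
    return predicted_depth(d1, d2, t1, t2, t3) >= safety_distance_mm

# Worked example: the object approaches from 400 mm (t1 = 33 ms) to 320 mm (t2 = 66 ms);
# at t3 = 99 ms the predicted depth is 320 - 80 = 240 mm < 250 mm, so do not emit.
# may_emit_next_frame(400, 320, 33, 66, 99) -> False
```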
  • Step 066 includes the steps:
  • 0131 Control the light transmitter 111 to emit laser light to the current scene at a first operating frequency;
  • 0132 Control the light receiver 112 to acquire captured images at a second operating frequency, the second operating frequency being greater than the first operating frequency;
  • 0133 Distinguish, among the captured images, the first images collected when the light transmitter 111 is not emitting laser light from the second images collected when the light transmitter 111 is emitting laser light; and
  • 0134 Calculate the depth information according to the first image, the second image, and the reference image.
  • the third control module 25 includes a first control unit 251, a second control unit 252, a distinguishing unit 253, and a second calculation unit 254.
  • the first control unit 251 can be used to implement step 0131
  • the second control unit 252 can be used to implement step 0132
  • the distinguishing unit 253 can be used to implement step 0133
  • the second calculation unit 254 can be used to implement step 0134.
  • That is, the first control unit 251 can be used to control the light transmitter 111 to emit laser light to the current scene at the first operating frequency; the second control unit 252 can be used to control the light receiver 112 to acquire captured images at the second operating frequency; the distinguishing unit 253 can be used to distinguish, among the captured images, the first images collected when the light transmitter 111 is not emitting laser light from the second images collected when the light transmitter 111 is emitting laser light; and the second calculation unit 254 can be used to calculate the depth information according to the first image, the second image, and the reference image.
  • The processor 12 may also be used to implement steps 0131, 0132, 0133, and 0134. That is to say, the processor 12 can be used to: control the light transmitter 111 to emit laser light to the current scene at the first operating frequency; control the light receiver 112 to acquire captured images at the second operating frequency, which is greater than the first operating frequency; distinguish, among the captured images, the first images collected when the light transmitter 111 is not emitting laser light from the second images collected when the light transmitter 111 is emitting laser light; and calculate the depth information according to the first image, the second image, and the reference image.
  • the operating frequencies of the optical receiver 112 and the optical transmitter 111 are different (that is, the second operating frequency is greater than the first operating frequency).
  • In FIG. 15, the solid line represents the timing at which the light transmitter 111 emits laser light, the dashed line represents the timing at which the light receiver 112 acquires captured images together with the frame numbers of those captured images, and the dot-dash line represents the frame numbers of the speckle images, formed only by the infrared laser emitted by the light transmitter 111, that are obtained from the first images and the second images. From top to bottom, the lines are the solid line, the dashed line, and the dot-dash line in sequence, and in this example the second operating frequency is twice the first operating frequency.
  • Specifically, the processor 12 controls the light receiver 112 to first receive the infrared light in the environment (hereinafter referred to as ambient infrared light) when the light transmitter 111 is not projecting laser light, to obtain the Nth frame of captured image (a first image at this time, which can also be called a background image); subsequently, the processor 12 controls the light receiver 112 to receive both the ambient infrared light and the infrared laser emitted by the light transmitter 111 when the light transmitter 111 is projecting laser light, to obtain the (N+1)th frame of captured image (a second image at this time, which can also be called an interference speckle image); subsequently, the processor 12 controls the light receiver 112 to receive the ambient infrared light when the light transmitter 111 is not projecting laser light, to obtain the (N+2)th frame of captured image (a first image at this time), and so on, so that the light receiver 112 acquires first images and second images alternately.
  • the processor 12 may control the light receiver 112 to first obtain the second image, and then obtain the first image, and alternately execute the acquisition of the collected images according to this sequence.
  • The above-mentioned multiple relationship between the second operating frequency and the first operating frequency is only an example; in other embodiments, the second operating frequency may also be three times, four times, five times, six times, and so on, the first operating frequency.
  • The processor 12 distinguishes each captured image and determines whether it is a first image or a second image. After the processor 12 obtains at least one frame of first image and at least one frame of second image, it can calculate the depth information according to the first image, the second image, and the reference image. Specifically, since the first image is collected when the light transmitter 111 is not projecting laser light, the light that forms the first image includes only ambient infrared light, while the second image is collected when the light transmitter 111 is projecting laser light, so the light that forms the second image includes both the ambient infrared light and the infrared laser emitted by the light transmitter 111.
  • Therefore, the processor 12 can remove, according to the first image, the part of the second image formed by the ambient infrared light, so as to obtain a captured image formed only by the infrared laser emitted by the light transmitter 111 (that is, the speckle image formed by the infrared laser).
  • It can be understood that ambient light includes infrared light with the same wavelength as the laser light emitted by the light transmitter 111 (for example, ambient infrared light at 940 nm), and when the light receiver 112 acquires a captured image, this part of the infrared light will also be received by the light receiver 112. When the ambient light is strong, the proportion of ambient infrared light in the light received by the light receiver 112 increases, making the laser speckles in the captured image less distinct and thereby affecting the calculation of the depth image.
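  • The removal of the ambient component is, in effect, a per-pixel subtraction; a minimal software equivalent of the logic subtraction circuit described further below is sketched here (an illustration only, not the hardware implementation):
```python
import numpy as np

def speckle_from_pair(first_image, second_image):
    """first_image: ambient infrared only; second_image: ambient infrared + laser speckle."""
    first = first_image.astype(np.int32)
    second = second_image.astype(np.int32)
    # Subtract the ambient component and clamp at zero; what remains approximates
    # the speckle image formed only by the infrared laser emitted by the light transmitter 111.
    return np.clip(second - first, 0, None).astype(np.uint16)
```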
  • step 0133 includes:
  • 01331 Determine the working state of the light emitter 111 at the acquisition time according to the acquisition time of each frame of the acquired image;
  • 01332 Add an image type to each frame of collected images according to the working status
  • 01333 distinguish the first image from the second image according to the image type.
  • step 01331, step 01332, and step 01333 can all be implemented by the distinguishing unit 253.
  • That is, the distinguishing unit 253 can also be used to determine the working state of the light transmitter 111 at the acquisition time according to the acquisition time of each frame of captured image, add an image type to each frame of captured image according to the working state, and distinguish the first images from the second images according to the image type.
  • step 01331, step 01332, and step 01333 may all be implemented by the processor 12.
  • That is, the processor 12 can also be used to determine the working state of the light transmitter 111 at the acquisition time according to the acquisition time of each frame of captured image, add an image type to each frame of captured image according to the working state, and distinguish the first images from the second images according to the image type.
  • the processor 12 will monitor the working status of the optical transmitter 111 in real time via the I2C bus.
  • Specifically, the processor 12 first acquires the acquisition time of each captured image, then determines, according to that acquisition time, whether the working state of the light transmitter 111 at that moment was projecting laser light or not projecting laser light, and adds the image type to the captured image based on the judgment result.
  • The acquisition time of a captured image may be the start time, the end time, any time between the start time and the end time at which the light receiver 112 obtains that frame of captured image, and so on. In this way, a correspondence is established between each frame of captured image and the working state (projecting laser light or not) of the light transmitter 111 during the acquisition of that frame, and the type of the captured image can be accurately distinguished.
  • the structure of the image type stream_type is shown in Table 1:
  • When stream in Table 1 is 0, it means that the data stream at this time is an image formed by infrared light and/or infrared laser.
  • When light is 00, it means that the data stream at this time was acquired without any device projecting infrared light and/or infrared laser (only ambient infrared light); the processor 12 can then add an image type of 000 to the captured image to identify it as a first image.
  • When light is 01, it means that the data stream at this time was acquired while the light transmitter 111 was projecting infrared laser (both ambient infrared light and infrared laser being present); the processor 12 may then add an image type of 001 to the captured image to identify it as a second image.
  • the processor 12 can then distinguish the image types of the collected images according to stream_type.
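  • In the spirit of the stream and light fields described above (the concrete encoding beyond the two quoted combinations, and the helper used to query the emitter state, are assumptions), the tagging and sorting of captured frames can be sketched as follows:
```python
def image_type(emitter_was_projecting: bool) -> str:
    """Build a stream_type-style tag: '000' = first (background) image, '001' = second image."""
    stream = "0"                                   # infrared image data stream
    light = "01" if emitter_was_projecting else "00"
    return stream + light

def split_frames(frames, emitter_on_at):
    """frames: iterable of (acquisition_time, image); emitter_on_at(t) -> bool (assumed helper)."""
    first_images, second_images = [], []
    for acquisition_time, image in frames:
        if image_type(emitter_on_at(acquisition_time)) == "001":
            second_images.append(image)            # collected while laser was projected
        else:
            first_images.append(image)             # collected under ambient infrared only
    return first_images, second_images
```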
  • the processor 12 includes a first storage area, a second storage area, and a logical subtraction circuit, and the logical subtraction circuit is connected to both the first storage area and the second storage area.
  • the first storage area is used to store the first image
  • the second storage area is used to store the second image
  • the logical subtraction circuit is used to process the first image and the second image to obtain a speckle image formed by infrared lasers.
  • The logic subtraction circuit reads the first image from the first storage area and the second image from the second storage area, and after acquiring the first image and the second image, performs subtraction processing on them to obtain the speckle image formed by the infrared laser.
  • the logic subtraction circuit is also connected to the depth calculation module in the processor 12 (for example, it may be an integrated circuit ASIC dedicated to calculating depth, etc.).
  • the logic subtraction circuit sends the speckle image formed by the infrared laser to the depth calculation module.
  • the depth calculation module calculates depth information based on the speckle image formed by the infrared laser and the reference image.
  • this application also provides one or more non-volatile computer-readable storage media 200 containing computer-readable instructions.
  • the processor 300 executes the control method described in any one of the foregoing embodiments.
  • the processor 300 may be the processor 12 in FIGS. 1 and 2.
  • When the computer-readable instructions are executed by the processor 300, the processor 300 is caused to perform the steps of the control method described in any one of the foregoing embodiments.
  • The terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless otherwise specifically defined.

Abstract

A control method for a terminal (10). The terminal (10) comprises a depth camera (11) comprising a light emitter (111) and a light receiver (112). The control method comprises: controlling a light emitter (111) to emit a preset number of frames of testing laser light to a current scene; controlling a light receiver (112) to receive the testing laser light reflected by the current scene; acquiring depth information of the current scene according to the received testing laser light; determining whether the depth information contains a depth value less than a preset safe distance; and if so, controlling the terminal (10) to enter a safe mode. Also disclosed are a control device (20) for a terminal (10), a terminal (10), and a computer readable storage medium (200).

Description

Terminal control method and control device, terminal, and computer-readable storage medium
Priority information
This application claims priority to and the benefit of the patent application No. 201910465376.2 filed with the State Intellectual Property Office of China on May 30, 2019, the entire content of which is incorporated herein by reference.
Technical field
This application relates to the field of three-dimensional imaging technology, and more specifically, to a terminal control method, a terminal control device, a terminal, and a computer-readable storage medium.
Background
A depth camera can obtain the depth information of objects in a scene by projecting laser light into the scene and receiving the laser light reflected by those objects. Normally, the power and specifications of the laser are designed in accordance with safety standards, and the safety margins for the human eyes and skin are relatively ample.
Summary of the invention
The embodiments of the present application provide a terminal control method, a terminal control device, a terminal, and a computer-readable storage medium.
The control method of an embodiment of the present application is used for a terminal. The terminal includes a depth camera, and the depth camera includes a light transmitter and a light receiver. The control method includes: controlling the light transmitter to emit a predetermined number of frames of test laser light to the current scene; controlling the light receiver to receive the test laser light reflected by the current scene; obtaining the depth information of the current scene according to the received test laser light; determining whether there is a depth in the depth information that is less than a preset safety distance; and if so, controlling the terminal to enter a safe mode.
The control device of an embodiment of this application is used in a terminal that includes a depth camera with a light transmitter and a light receiver. The control device includes a first control module, a second control module, an acquisition module, a first judgment module, and a third control module. The first control module is used to control the light transmitter to emit a predetermined number of frames of test laser light to the current scene; the second control module is used to control the light receiver to receive the test laser light reflected by the current scene; the acquisition module is used to acquire the depth information of the current scene according to the received test laser light; the first judgment module is used to determine whether there is a depth in the depth information that is less than a preset safety distance; and the third control module is used to control the terminal to enter a safe mode if there is a depth in the depth information that is less than the preset safety distance.
The terminal of an embodiment of the present application includes a depth camera and a processor. The depth camera includes a light transmitter and a light receiver, and the processor is configured to: control the light transmitter to emit a predetermined number of frames of test laser light to the current scene; control the light receiver to receive the test laser light reflected by the current scene; obtain the depth information of the current scene according to the received test laser light; determine whether there is a depth in the depth information that is less than a preset safety distance; and if so, control the terminal to enter a safe mode.
One or more non-volatile computer-readable storage media of the embodiments of the present application contain computer-readable instructions that, when executed by a processor, cause the processor to execute the control method of the embodiments of the present application.
In the terminal control method, the terminal control device, the terminal, and the computer-readable storage medium of the embodiments of the present application, the light transmitter is controlled to emit the test laser light and the light receiver receives the reflected test laser light. Depth information is first obtained from the reflected test laser light, and it is then determined whether the depth information contains a depth smaller than the preset safety distance. If it does, it is judged that the current laser, if irradiated onto the user's body (for example, the eyes), could easily cause harm, and the terminal is further controlled to enter the safe mode, so that even when the user is close to the terminal, using the terminal remains safe.
Additional aspects and advantages of the embodiments of the present application will be partly given in the following description, and will partly become obvious from the following description or be understood through practice of the embodiments of the present application.
Description of the drawings
The above and/or additional aspects and advantages of the present application will become obvious and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the system architecture of a terminal according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a terminal control method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the modules of a terminal control device according to an embodiment of the present application;
FIG. 5 is a schematic diagram of laser pulses emitted by a terminal according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of a terminal control method according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of a terminal control method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the modules of a terminal control device according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a scene of depth information acquired by a terminal according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of a terminal control method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of the modules of a terminal control device according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a scene of a terminal control method according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of a terminal control method according to an embodiment of the present application;
FIG. 14 is a schematic diagram of the modules of a terminal control device according to an embodiment of the present application;
FIG. 15 is a schematic diagram of the principle by which a terminal acquires depth information in a set mode according to an embodiment of the present application;
FIG. 16 is a schematic diagram of the interaction between a non-volatile computer-readable storage medium and a processor according to an embodiment of the present application.
具体实施方式Detailed ways
以下结合附图对本申请的实施方式作进一步说明。附图中相同或类似的标号自始至终表示相同或类似的元件或具有相同或类似功能的元件。The implementation of the present application will be further described below in conjunction with the drawings. The same or similar reference numerals in the drawings indicate the same or similar elements or elements with the same or similar functions throughout.
另外,下面结合附图描述的本申请的实施方式是示例性的,仅用于解释本申请的实施方式,而不能理解为对本申请的限制。In addition, the implementation manners of the application described below in conjunction with the drawings are exemplary, and are only used to explain the implementation manners of the application, and cannot be understood as a limitation of the application.
在本申请中,除非另有明确的规定和限定,第一特征在第二特征“上”或“下”可以是第一和第二特征直接接触,或第一和第二特征通过中间媒介间接接触。而且,第一特征在第二特征“之上”、“上方”和“上面”可是第一特征在第二特征正上方或斜上方,或仅仅表示第一特征水平高度高于第二特征。第一特征在第二特征“之下”、“下方”和“下面”可以是第一特征在第二特征正下方或斜下方,或仅仅表示第一特征水平高度小于第二特征。In this application, unless expressly stipulated and defined otherwise, the “on” or “under” of the first feature on the second feature may be in direct contact with the first and second features, or indirectly through an intermediary. contact. Moreover, the "above", "above" and "above" of the first feature on the second feature may mean that the first feature is directly above or obliquely above the second feature, or simply means that the level of the first feature is higher than the second feature. The “below”, “below” and “below” of the second feature of the first feature may mean that the first feature is directly below or obliquely below the second feature, or it simply means that the level of the first feature is smaller than the second feature.
请参阅图1,本申请实施方式的终端10包括壳体15、深度相机11及处理器12。终端10可以是手机、平板电脑、笔记本电脑、智能手表等终端,本申请说明书以终端10是手机为例进行说明,可以理解的是,终端10的具体形式并不限于手机。Please refer to FIG. 1, the terminal 10 of the embodiment of the present application includes a housing 15, a depth camera 11 and a processor 12. The terminal 10 may be a terminal such as a mobile phone, a tablet computer, a notebook computer, a smart watch, etc. The description of this application takes the terminal 10 as a mobile phone as an example for description. It is understood that the specific form of the terminal 10 is not limited to a mobile phone.
深度相机11及处理器12均可以安装在壳体15上。壳体15包括正面151及背面152,正面151与背面152相背。正面151还可用于安装显示屏14,显示屏14可用于显示图像、文字等信息。深度相机11可以安装在正面151,以便于进行自拍或进行视频通话等;深度相机11也可以安装在背面152,以便于拍摄景物及他人;另外,也可以在正面151及背面152均安装有可以独立工作的深度相机11。Both the depth camera 11 and the processor 12 can be installed on the housing 15. The housing 15 includes a front 151 and a back 152, and the front 151 and the back 152 are opposite to each other. The front 151 can also be used to install a display screen 14, which can be used to display images, text and other information. The depth camera 11 can be installed on the front 151 to facilitate selfies or video calls, etc.; the depth camera 11 can also be installed on the back 152 to facilitate shooting scenes and others; in addition, it can also be installed on both the front 151 and the back 152. Independent working depth camera 11.
深度相机11包括光发射器111及光接收器112。深度相机11的光发射器111可以向外发射激光,例如红外激光,激光到达场景中的物体上后被反射,被反射的激光可由光接收器112接收,处理器12可以依据光发射器111发射的激光及光接收器112接收的激光计算物体的深度信息。在一个例子中,深度相机11可通过飞行时间(Time of flight,TOF)测距法获取深度信息,在另一个例子中,深度相机11可通过结构光测距原理获取深度信息。本申请说明书以深度相机11通过结构光测距原理获取深度信息为例进行说明。The depth camera 11 includes a light transmitter 111 and a light receiver 112. The light transmitter 111 of the depth camera 11 can emit laser light, such as infrared laser, which is reflected after reaching the object in the scene. The reflected laser light can be received by the light receiver 112, and the processor 12 can emit according to the light transmitter 111 The laser light and the laser light received by the light receiver 112 calculate the depth information of the object. In one example, the depth camera 11 may obtain depth information through a time of flight (TOF) ranging method. In another example, the depth camera 11 may obtain depth information through a structured light ranging principle. The description of this application takes the depth camera 11 to obtain depth information through the principle of structured light ranging as an example for description.
In the example shown in FIG. 1, the depth camera 11 is mounted on the back face 152 of the housing 15. It can be understood that a depth camera 11 mounted on the back face 152 (that is, a rear depth camera 11) must support normal photographing of distant objects, so the optical power of the laser emitted by the light transmitter 111 usually needs to be set relatively high to keep the acquired depth information accurate. However, the rear depth camera 11 is also expected to photograph nearby objects or people, and at close range a laser with high optical power can easily cause injury. Therefore, for the rear depth camera 11, ensuring safe use is both particularly important and difficult.
The terminal 10 may further include a visible light camera 13. Specifically, the visible light camera 13 may include a telephoto camera and a wide-angle camera, or a telephoto camera, a wide-angle camera, and a periscope camera. The visible light camera 13 may be arranged close to the depth camera 11, for example between the light transmitter 111 and the light receiver 112, so that the light transmitter 111 and the light receiver 112 are spaced further apart; this lengthens the baseline of the depth camera 11 and improves the accuracy of the acquired depth information.
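For intuition, the effect of the baseline on accuracy can be illustrated with the standard structured-light/stereo triangulation relation. The sketch below is not taken from the patent; the symbols (baseline, focal length in pixels, disparity in pixels) are generic assumptions used only to show why a longer baseline reduces the depth error caused by a given matching error.

```python
def structured_light_depth(baseline_mm: float, focal_px: float, disparity_px: float) -> float:
    """Classic triangulation: depth = baseline * focal_length / disparity.

    A longer baseline produces a larger disparity for the same depth, so a
    one-pixel matching error translates into a smaller depth error.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_mm * focal_px / disparity_px
```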
Referring to FIG. 2, both the light transmitter 111 and the light receiver 112 are connected to the processor 12. The processor 12 may provide an enable signal for the light transmitter 111; specifically, the processor 12 may provide an enable signal for a driver 16, where the driver 16 is used to drive the light transmitter 111 to emit laser light. The light receiver 112 is connected to the processor 12 through an I2C bus. When the light receiver 112 works together with the light transmitter 111, in one example the light receiver 112 may control the projection timing of the light transmitter 111 through a strobe signal, where the strobe signal is generated from the timing at which the light receiver 112 captures images and can be regarded as an electrical signal alternating between high and low levels; the light transmitter 111 projects laser light according to the projection timing indicated by the strobe signal. Specifically, the processor 12 may send an image acquisition instruction through the I2C bus to enable the depth camera 11. After receiving the image acquisition instruction, the light receiver 112 controls a switching device 17 through the strobe signal. If the strobe signal is at a high level, the switching device 17 sends a pulse signal (PWM) to the driver 16, and the driver 16 drives the light transmitter 111 to project laser light into the scene according to the pulse signal; if the strobe signal is at a low level, the switching device 17 stops sending the pulse signal to the driver 16, and the light transmitter 111 does not project laser light. Alternatively, the switching device 17 may send the pulse signal to the driver 16 when the strobe signal is at a low level, so that the driver 16 drives the light transmitter 111 to project laser light, and stop sending the pulse signal to the driver 16 when the strobe signal is at a high level, so that the light transmitter 111 does not project laser light.
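A minimal sketch of the strobe gating described above, assuming the active-high polarity; the class name, the driver object, and its start_pwm()/stop_pwm() methods are illustrative stand-ins, not interfaces defined in the patent.

```python
class StrobeGate:
    """Gates the driver's PWM output with the receiver's strobe signal."""

    def __init__(self, driver, active_high: bool = True):
        self.driver = driver          # illustrative driver 16 object
        self.active_high = active_high

    def on_strobe_edge(self, strobe_level: bool) -> None:
        # Laser is projected only while the strobe level matches the chosen polarity.
        if strobe_level == self.active_high:
            self.driver.start_pwm()   # driver 16 drives light transmitter 111
        else:
            self.driver.stop_pwm()    # transmitter idle, no laser projected
```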
In another example, the strobe signal may be omitted when the light receiver 112 works with the light transmitter 111. In this case, the processor 12 sends an image acquisition instruction to the light receiver 112 and simultaneously sends a laser projection instruction to the driver 16; the light receiver 112 starts capturing images after receiving the image acquisition instruction, and the driver 16 drives the light transmitter 111 to project laser light upon receiving the laser projection instruction. When the light transmitter 111 projects laser light, the laser forms a speckled laser pattern on objects in the scene. The light receiver 112 captures the laser pattern reflected by the objects to obtain a speckle image and sends the speckle image to the processor 12 through a Mobile Industry Processor Interface (MIPI). Each time the light receiver 112 sends a frame of speckle image to the processor 12, the processor 12 receives one data stream. The processor 12 can then calculate depth information from the speckle image and a reference image pre-stored in the processor 12.
Referring to FIG. 1 to FIG. 3, the control method of the embodiments of the present application can be used to control the terminal 10 described above. The control method includes the steps:
031: controlling the light transmitter 111 to emit a predetermined number of frames of test laser toward the current scene;
032: controlling the light receiver 112 to receive the test laser reflected by the current scene;
033: obtaining depth information of the current scene according to the received test laser;
034: determining whether the depth information contains a depth smaller than a preset safety distance; and
035: if so, controlling the terminal 10 to enter a safe mode.
Referring to FIG. 1 to FIG. 4, the control device 20 of the embodiments of the present application can be used to control the terminal 10 described above. The control device 20 includes a first control module 21, a second control module 22, an acquisition module 23, a first judgment module 24, and a third control module 25. The first control module 21 may be used to implement step 031, the second control module 22 to implement step 032, the acquisition module 23 to implement step 033, the first judgment module 24 to implement step 034, and the third control module 25 to implement step 035. In other words, the first control module 21 may be used to control the light transmitter 111 to emit a predetermined number of frames of test laser toward the current scene; the second control module 22 may be used to control the light receiver 112 to receive the test laser reflected by the current scene; the acquisition module 23 may be used to obtain depth information of the current scene according to the received test laser; the first judgment module 24 may be used to determine whether the depth information contains a depth smaller than a preset safety distance; and the third control module 25 may be used to control the terminal 10 to enter a safe mode if the depth information contains a depth smaller than the preset safety distance.
Referring to FIG. 1 to FIG. 3, the processor 12 of the embodiments of the present application may be used to implement steps 031, 032, 033, 034, and 035. In other words, the processor 12 may be used to: control the light transmitter 111 to emit a predetermined number of frames of test laser toward the current scene; control the light receiver 112 to receive the test laser reflected by the current scene; obtain depth information of the current scene according to the received test laser; determine whether the depth information contains a depth smaller than a preset safety distance; and if so, control the terminal 10 to enter a safe mode.
Specifically, the processor 12 first controls the light transmitter 111 to emit a predetermined number of frames of test laser toward the current scene. The predetermined number of frames may be one frame, in which case the processor 12 may send one pulse control signal to the light transmitter 111; or it may be multiple frames, in which case the processor 12 may send multiple pulse control signals to the light transmitter 111. The optical power of the test laser may be set lower than the optical power of the laser emitted by the light transmitter 111 in normal use; specifically, the lower optical power can be achieved by keeping the amplitude of the test laser small, keeping the duty cycle of the test laser small, and so on.
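A minimal sketch of how the test-laser pulses might be parameterized relative to normal operation; the dataclass fields and the 0.5 scaling factors are illustrative assumptions, not values given in the patent.

```python
from dataclasses import dataclass

@dataclass
class LaserPulseConfig:
    amplitude: float   # drive amplitude, relative units
    duty_cycle: float  # fraction of each period the laser is on
    frames: int        # number of frames to emit

def make_test_config(normal: LaserPulseConfig, frames: int = 1) -> LaserPulseConfig:
    # The test laser uses a reduced amplitude and duty cycle so that its optical
    # power is lower than in normal use; the 0.5 factors are illustrative only.
    return LaserPulseConfig(amplitude=normal.amplitude * 0.5,
                            duty_cycle=normal.duty_cycle * 0.5,
                            frames=frames)
```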
The processor 12 controls the light receiver 112 to receive the test laser reflected by the current scene. The processor 12 may turn on the light transmitter 111 and the light receiver 112 at the same time; that is, the processor 12 may perform steps 031 and 032 simultaneously. In the embodiments of the present application, the laser emitted by the light transmitter 111 carries a specific pattern (for example, a speckle pattern); the laser is reflected by objects and then received by the light receiver 112, which forms a speckle image after collecting the reflected laser.
The processor 12 then obtains the depth information of the current scene according to the received test laser. Specifically, a pre-calibrated reference image may be stored in the memory of the terminal 10, and the processor 12 processes the speckle image and the reference image to obtain a depth image of the current scene, where the depth image contains the depth information. In one example, the depth image includes multiple pixels, and the pixel value of each pixel is the depth of the part of the current scene corresponding to that pixel. For example, if a pixel corresponding to point A in the scene has a pixel value of 20, the distance from the depth camera 11 to point A is 20. It can be understood that the smaller the pixel value, the closer the corresponding position of the current scene is to the depth camera 11.
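A minimal sketch of reading the nearest distance out of such a depth image; the array layout, the units, and the treatment of zero pixels as invalid samples are assumptions made for illustration.

```python
import numpy as np

def nearest_depth(depth_image: np.ndarray) -> float:
    """Return the smallest valid depth in an image whose pixel values encode distance.

    Zero (or negative) pixel values are treated as invalid / no-return samples.
    """
    valid = depth_image[depth_image > 0]
    return float(valid.min()) if valid.size else float("inf")
```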
The processor 12 then determines whether the depth information contains a depth smaller than a preset safety distance. The safety distance may be set according to relevant safety standards and user attributes, for example according to the maximum laser energy the human eye can withstand per unit time, the target user group of the terminal 10, or the target usage scenarios of the terminal 10. The safety distance may be set to any value, such as 100 mm, 200 mm, 250 mm, or 1000 mm, and is not limited here. As described above, the depth information may include the depths of multiple positions in the current scene; the processor 12 may compare the depth of each position with the safety distance, and when the depth of at least one position is smaller than the safety distance, it determines that an object (for example, a person) at that position is relatively likely to be injured by the laser.
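The comparison of step 034 can be sketched as follows; the 250 mm default is simply one of the example distances listed above, not a prescribed value.

```python
import numpy as np

def too_close(depth_image: np.ndarray, safety_distance_mm: float = 250.0) -> bool:
    """Step 034 (illustrative): True if any valid depth is below the preset safety distance."""
    valid = depth_image[depth_image > 0]
    return bool(valid.size) and bool((valid < safety_distance_mm).any())
```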
Then, when the depth information contains a depth smaller than the preset safety distance, the processor 12 controls the terminal 10 to enter the safe mode. As described above, when it is determined that an object is relatively likely to be injured by the laser, the terminal 10 is controlled to enter the safe mode so that objects in the current scene will not be harmed. Specifically, controlling the terminal 10 to enter the safe mode may include one or more of the following: controlling the terminal 10 to issue a prompt signal (for example, controlling the display screen 14 to show a window prompting the user to move away, controlling the speaker of the terminal 10 to play a voice prompting the user to move away, or controlling the vibration motor of the terminal 10 to vibrate to prompt the user to move away); controlling the light transmitter 111 to emit laser at a preset safe frequency; and controlling the light transmitter 111 to emit laser at a preset safe amplitude.
Referring to FIG. 5, the default waveform of the laser emitted by the light transmitter 111 under the control of the processor 12 is shown as L1, where a high level indicates that the light transmitter 111 is emitting laser and a low level indicates that it is not. L2 in FIG. 5 is the waveform of the laser emitted when the processor 12 controls the light transmitter 111 at a preset safe frequency, where the safe frequency may be lower than the default frequency at which the light transmitter 111 emits laser, for example 1/2 or 1/3 of the default frequency, so that the laser energy received by the user per unit time is lower and injury to the user is avoided. L3 in FIG. 5 is the waveform of the laser emitted when the processor 12 controls the light transmitter 111 at a preset safe amplitude, where the safe amplitude may be smaller than the default amplitude at which the light transmitter 111 emits laser, for example 2/3, 1/2, or 1/3 of the default amplitude. L4 in FIG. 5 is the waveform of the laser emitted when the processor 12 controls the light transmitter 111 at both the safe frequency and the safe amplitude. It can be understood that after the waveform of the laser is changed, the depth camera 11 can still be used to obtain depth images of the scene, so the impact on the user experience is small.
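A minimal sketch of deriving the safe-mode waveform parameters (the L2, L3, and L4 cases of FIG. 5); the dataclass and the 0.5 scaling factors are illustrative assumptions, since the patent only lists example ratios such as 1/2, 1/3, and 2/3.

```python
from dataclasses import dataclass, replace

@dataclass
class LaserWaveform:
    frequency_hz: float
    amplitude: float

def safe_mode_waveform(default: LaserWaveform,
                       reduce_frequency: bool = True,
                       reduce_amplitude: bool = True) -> LaserWaveform:
    # L2: frequency lowered; L3: amplitude lowered; L4: both lowered.
    wf = default
    if reduce_frequency:
        wf = replace(wf, frequency_hz=wf.frequency_hz * 0.5)  # illustrative 1/2 ratio
    if reduce_amplitude:
        wf = replace(wf, amplitude=wf.amplitude * 0.5)        # illustrative 1/2 ratio
    return wf
```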
In the related art, when the user is close to the depth camera, the energy of the laser emitted by the depth camera onto the user is too high, which easily injures the user, so the depth camera is not very safe to use. In summary, in the terminal 10, control method, and control device 20 of the embodiments of the present application, the light transmitter 111 is controlled to emit a test laser and the light receiver 112 receives the reflected test laser; depth information is first obtained from the reflected test laser, and it is determined from the depth information whether there is a depth smaller than the preset safety distance. If there is, it is determined that the current laser would easily injure the user if it irradiated the user, for example the eyes, and the terminal 10 is further controlled to enter the safe mode, so that the terminal 10 remains safe to use even when the user is at a short distance. At the same time, since the depth camera 11 itself pre-detects the user's distance, no additional distance detection device beyond the depth camera 11 is needed for this pre-detection, which reduces the size and manufacturing cost of the terminal 10.
Referring to FIG. 6, in some embodiments the control method further includes the steps:
066: if the depth information does not contain a depth smaller than the preset safety distance, controlling the terminal 10 to obtain depth information of the current scene in a set mode; and
067: determining whether the depth information obtained in the set mode contains a depth smaller than the preset safety distance.
When step 067 determines that the depth information obtained in the set mode contains a depth smaller than the preset safety distance, step 065 may also be performed: controlling the terminal 10 to enter the safe mode.
Referring to FIG. 4 and FIG. 6, in some embodiments the third control module 25 may also be used to implement step 066, and the first judgment module 24 may also be used to implement step 067. In other words, the third control module 25 may also be used to control the terminal 10 to obtain depth information of the current scene in a set mode if the depth information does not contain a depth smaller than the preset safety distance, and the first judgment module 24 may also be used to determine whether the depth information obtained in the set mode contains a depth smaller than the preset safety distance. The third control module 25 may further be used to implement step 065, that is, to control the terminal 10 to enter the safe mode, when it is determined that the depth information obtained in the set mode contains a depth smaller than the preset safety distance.
Referring to FIG. 1 and FIG. 6, in some embodiments the processor 12 may also be used to implement steps 066 and 067. That is, the processor 12 may also be used to control the terminal 10 to obtain depth information of the current scene in a set mode if the depth information does not contain a depth smaller than the preset safety distance, and to determine whether the depth information obtained in the set mode contains a depth smaller than the preset safety distance. The processor 12 is further used to implement step 065, that is, to control the terminal 10 to enter the safe mode, when it is determined that the depth information obtained in the set mode contains a depth smaller than the preset safety distance.
For the content and specific implementation details of steps 061, 062, 063, 064, and 065 in FIG. 6, reference may be made to the description of steps 031, 032, 033, 034, and 035 in this specification, which will not be repeated here.
Specifically, when the depth information does not contain a depth smaller than the preset safety distance, it can be determined that no object in the current scene is too close to the depth camera 11, and obtaining depth information in the set mode will not harm the user; the processor 12 can therefore control the depth camera 11 to obtain depth information of the current scene in the set mode. Specifically, the set mode may be the default working mode of the depth camera 11 of the terminal 10, and it includes information such as the set waveform of the laser emitted by the light transmitter 111, for example the L1 waveform shown in FIG. 5.
The processor 12 determines whether the depth information obtained in the set mode contains a depth smaller than the preset safety distance. In combination with the above description of the processor 12 implementing step 034, the depth information obtained in the set mode includes the depth of each position in the scene, and each depth may be reflected by the pixel value of a pixel in the depth image. By comparing each depth with the safety distance, when there is a depth smaller than the safety distance it is determined that, although no object was too close to the depth camera 11 at the initial moment, an object has come too close to the depth camera 11 during use. The user's safety must also be ensured in this case, so the terminal 10 can be controlled to enter the safe mode described above.
Further, if the depth information obtained in the set mode does not contain a depth smaller than the preset safety distance, the processor 12 may continue to control the terminal 10 to obtain depth information of the current scene in the set mode.
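The overall flow of FIG. 6 can be sketched as a simple monitoring loop; the terminal object and its methods (emit_test_frames_and_get_depth, acquire_depth_in_set_mode, enter_safe_mode, capturing) are illustrative stand-ins for the steps described above, not an API defined in the patent.

```python
import numpy as np

def too_close(depth_image: np.ndarray, safety_distance_mm: float) -> bool:
    valid = depth_image[depth_image > 0]
    return bool(valid.size) and bool((valid < safety_distance_mm).any())

def run_depth_camera(terminal, safety_distance_mm: float = 250.0) -> None:
    """Flow of FIG. 6: test probe, then continuous re-checking in the set mode (illustrative)."""
    depth = terminal.emit_test_frames_and_get_depth()     # steps 061-063
    if too_close(depth, safety_distance_mm):               # step 064
        terminal.enter_safe_mode()                         # step 065
        return
    while terminal.capturing:                              # step 066: acquire in set mode
        depth = terminal.acquire_depth_in_set_mode()
        if too_close(depth, safety_distance_mm):           # step 067
            terminal.enter_safe_mode()
            return
```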
Referring to FIG. 7, in some embodiments the control method further includes step 076: determining, according to the depth information, whether a human eye is present in the current scene. When it is determined that a human eye is present in the current scene, step 074 is performed.
Referring to FIG. 7 and FIG. 8, in some embodiments the control device 20 further includes a second judgment module 26, which may be used to implement step 076; that is, the second judgment module 26 may be used to determine, according to the depth information, whether a human eye is present in the current scene. When it is determined that a human eye is present in the current scene, the first judgment module 24 performs step 074.
Referring to FIG. 1 and FIG. 7, in some embodiments the processor 12 may also be used to implement step 076; that is, the processor 12 may also be used to determine, according to the depth information, whether a human eye is present in the current scene. When it is determined that a human eye is present in the current scene, the processor 12 performs step 074.
For the content and specific implementation details of steps 071, 072, 073, 074, and 075 in FIG. 7, reference may be made to the description of steps 031, 032, 033, 034, and 035 in this specification, which will not be repeated here.
Specifically, since the human eye's tolerance to laser light is significantly lower than that of the skin elsewhere on the body, injury to a person usually reaches the eyes first. Therefore, it may first be determined whether a human eye is present in the current scene, and when a human eye is present, whether the current usage distance is smaller than the safety distance. In one example, if it is determined that no human eye is present, the processor 12 may directly perform step 076 to improve the timeliness of obtaining depth information.
As mentioned above, the depth information can be characterized by the pixel values of multiple pixels in the depth image, and the processor 12 can match the distribution of these pixel values against a preset human eye model. If the depth image contains a region whose matching degree exceeds a predetermined threshold, it is determined that a human eye is present in the current scene; if no region of the depth image exceeds the predetermined matching threshold, it is determined that no human eye is present in the current scene.
Referring to FIG. 9, the depth image I includes multiple pixels P, and the pixel value of each pixel P (for example 21, 22, 23, 24) represents the depth of the position corresponding to that pixel P. For example, in region D of the depth image I, according to the distribution of pixel values in region D, the depth distribution of the object corresponding to region D is roughly such that a central strip-shaped area has a smaller depth while the depth around the strip gradually increases. This depth distribution matches the model of a human eye facing the depth camera 11 closely, so it is determined that a human eye is present in the current scene, and region D corresponds to the position of the human eye in the current scene.
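A minimal sketch of the template-matching idea described above, using normalized cross-correlation of depth patches against a preset eye-depth template; the template, the sliding-window search, and the threshold are illustrative assumptions, since the patent does not specify the matching algorithm.

```python
import numpy as np

def contains_eye(depth_image: np.ndarray, eye_template: np.ndarray, threshold: float = 0.8) -> bool:
    """Slide a preset eye-depth template over the depth image and report whether
    any patch matches it above the threshold (normalized cross-correlation)."""
    th, tw = eye_template.shape
    t = (eye_template - eye_template.mean()) / (eye_template.std() + 1e-6)
    h, w = depth_image.shape
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            patch = depth_image[y:y + th, x:x + tw].astype(float)
            p = (patch - patch.mean()) / (patch.std() + 1e-6)
            if float((p * t).mean()) > threshold:
                return True
    return False
```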
Of course, in other embodiments the processor 12 may also use a visible light image of the current scene acquired by the visible light camera 13 to jointly confirm whether a human eye is present in the current scene. Specifically, it simultaneously determines from feature information recognized in the visible light image whether a human eye is present; when the presence of a human eye is recognized from both the visible light image and the depth information, it is determined that a living human eye is present in the current scene, which excludes cases where only a photograph of an eye or only an eye mold is present.
Referring to FIG. 10, in some embodiments the predetermined number of frames includes at least two frames, and the control method further includes the steps:
01031: obtaining first depth information of the current scene according to the received previous frame of test laser;
01032: obtaining second depth information of the current scene according to the received subsequent frame of test laser; and
01033: calculating, according to the first depth information, the second depth information, the emission time of the previous frame of test laser, and the emission time of the subsequent frame of test laser, the depth information of the current scene at the moment the light transmitter 111 emits the next frame of laser.
Referring to FIG. 10 and FIG. 11, in some embodiments the predetermined number of frames includes at least two frames, and the acquisition module 23 includes a first acquisition unit 231, a second acquisition unit 232, and a first calculation unit 233. The first acquisition unit 231 may be used to implement step 01031, the second acquisition unit 232 to implement step 01032, and the first calculation unit 233 to implement step 01033. In other words, the first acquisition unit 231 may be used to obtain first depth information of the current scene according to the received previous frame of test laser; the second acquisition unit 232 may be used to obtain second depth information of the current scene according to the received subsequent frame of test laser; and the first calculation unit 233 may be used to calculate, according to the first depth information, the second depth information, the emission time of the previous frame of test laser, and the emission time of the subsequent frame of test laser, the depth information of the current scene at the moment the light transmitter 111 emits the next frame of laser.
Referring to FIG. 1 and FIG. 10, in some embodiments the predetermined number of frames includes at least two frames, and the processor 12 may also be used to implement steps 01031, 01032, and 01033. In other words, the processor 12 may be used to obtain first depth information of the current scene according to the received previous frame of test laser; obtain second depth information of the current scene according to the received subsequent frame of test laser; and calculate, according to the first depth information, the second depth information, the emission time of the previous frame of test laser, and the emission time of the subsequent frame of test laser, the depth information of the current scene at the moment the light transmitter 111 emits the next frame of laser.
For the content and specific implementation details of steps 0101, 0102, 0104, and 0105 in FIG. 10, reference may be made to the description of steps 031, 032, 034, and 035 in this specification, which will not be repeated here; steps 01031, 01032, and 01033 may be sub-steps of step 033.
It can be understood that when the light transmitter 111 emits the test laser, the distance between the user and the depth camera 11 may be greater than the safety distance. Based on that judgment, the depth camera 11 may proceed to obtain depth information in the set mode, that is, it may emit laser into the current scene at the default optical power. However, there is a time difference between emitting the test laser and emitting laser at the default optical power, so at the moment the laser is emitted at the default optical power the distance between the user and the depth camera 11 may already be smaller than the safety distance, and the user may be injured by the laser.
Referring to FIG. 12, in this embodiment the emission time of the previous frame of test laser is t1, the first depth information of the object T in the current scene at time t1 is d1, the emission time of the subsequent frame of test laser is t2, and the second depth information of the object T at time t2 is d2. For the way in which the first depth information d1 and the second depth information d2 of the current scene are obtained from the received previous and subsequent frames of test laser, reference may be made to the above description of the processor 12 implementing step 033, which will not be repeated here. The terms "previous frame" and "subsequent frame" only indicate that the two frames of test laser are in sequence; they do not mean that the previous frame and the subsequent frame must be two adjacent frames.
As can be seen from FIG. 12, when the test laser is emitted, the object T and the terminal 10 are in relative motion. For example, the terminal 10 may be stationary while the object T (for example, a person or thing) approaches the terminal 10, or the object T (for example, the person or thing being photographed) may be stationary while the user holding the terminal 10 approaches it; the relative distance between the object T and the terminal 10 keeps changing. From the first depth information d1, the second depth information d2, the emission time t1 of the previous frame of test laser, and the emission time t2 of the subsequent frame of test laser, the relative motion state of the object T and the terminal 10 can be calculated, for example by d2 - d1 = k(t2 - t1), which yields the motion coefficient k (negative when the object T is approaching the terminal 10).
The processor 12 then calculates, according to the time t3 at which the light transmitter 111 would emit (but has not yet emitted) the next frame of laser (whose waveform may differ from that of the test laser) and the relative motion state described above, the depth information d3 of the object T at time t3, where d3 - d2 = k(t3 - t2), or equivalently d3 - d1 = k(t3 - t1). The depth information d3 is then used as the depth information in step 0104 to determine whether it contains a depth smaller than the preset safety distance. When the result of step 0104 is yes, this indicates that the next frame of laser cannot be emitted at t3, and the terminal 10 needs to enter the safe mode.
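A minimal sketch of the linear extrapolation above; the variable names mirror d1, d2, t1, t2, and t3 from FIG. 12, while the function names and the default safety distance are illustrative.

```python
def predict_depth(d1: float, t1: float, d2: float, t2: float, t3: float) -> float:
    """Linearly extrapolate the object depth to the planned emission time t3.

    k is the rate of change of depth between the two test frames (negative when
    the object is approaching), so d3 = d2 + k * (t3 - t2).
    """
    k = (d2 - d1) / (t2 - t1)
    return d2 + k * (t3 - t2)

def may_emit_next_frame(d1, t1, d2, t2, t3, safety_distance_mm: float = 250.0) -> bool:
    # Step 0104 (illustrative): only emit the next, full-power frame at t3 if the
    # predicted depth is still outside the preset safety distance.
    return predict_depth(d1, t1, d2, t2, t3) >= safety_distance_mm
```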
Referring to FIG. 13, in some embodiments step 066 includes the steps:
0131: controlling the light transmitter 111 to emit laser toward the current scene at a first working frequency;
0132: controlling the light receiver 112 to acquire captured images at a second working frequency, the second working frequency being greater than the first working frequency;
0133: distinguishing, among the captured images, a first image captured when the light transmitter 111 is not emitting laser from a second image captured when the light transmitter 111 is emitting laser; and
0134: calculating depth information from the first image, the second image, and a reference image.
Referring to FIG. 13 and FIG. 14, in some embodiments the third control module 25 includes a first control unit 251, a second control unit 252, a distinguishing unit 253, and a second calculation unit 254. The first control unit 251 may be used to implement step 0131, the second control unit 252 to implement step 0132, the distinguishing unit 253 to implement step 0133, and the second calculation unit 254 to implement step 0134. In other words, the first control unit 251 may be used to control the light transmitter 111 to emit laser toward the current scene at the first working frequency; the second control unit 252 may be used to control the light receiver 112 to acquire captured images at the second working frequency; the distinguishing unit 253 may be used to distinguish, among the captured images, the first image captured when the light transmitter 111 is not emitting laser from the second image captured when the light transmitter 111 is emitting laser; and the second calculation unit 254 may be used to calculate depth information from the first image, the second image, and the reference image.
Referring to FIG. 1 and FIG. 13, in some embodiments the processor 12 may also be used to implement steps 0131, 0132, 0133, and 0134. In other words, the processor 12 may be used to control the light transmitter 111 to emit laser toward the current scene at the first working frequency; control the light receiver 112 to acquire captured images at the second working frequency, the second working frequency being greater than the first working frequency; distinguish, among the captured images, the first image captured when the light transmitter 111 is not emitting laser from the second image captured when the light transmitter 111 is emitting laser; and calculate depth information from the first image, the second image, and the reference image.
Specifically, the light receiver 112 and the light transmitter 111 work at different frequencies (that is, the second working frequency is greater than the first working frequency). For example, as shown in FIG. 15, the solid line represents the timing at which the light transmitter 111 emits laser, the dashed line represents the timing at which the light receiver 112 acquires captured images and the number of frames of captured images, and the dash-dot line represents the number of frames of speckle images, formed only by the infrared laser emitted by the light transmitter 111, that are obtained from the first and second images; from top to bottom in FIG. 15 are the solid line, the dashed line, and the dash-dot line, and the second working frequency is twice the first working frequency. Referring to the solid-line and dashed-line parts of FIG. 15, the processor 12 controls the light receiver 112 to first receive infrared light in the environment (hereinafter referred to as ambient infrared light) while the light transmitter 111 is not projecting laser, so as to obtain the Nth frame of captured image (a first image at this time, which may also be called a background image); then the processor 12 controls the light receiver 112 to receive both ambient infrared light and the infrared laser emitted by the light transmitter 111 while the light transmitter 111 is projecting laser, so as to obtain the (N+1)th frame of captured image (a second image at this time, which may also be called an interference speckle image); then the processor 12 again controls the light receiver 112 to receive ambient infrared light while the light transmitter 111 is not projecting laser, so as to obtain the (N+2)th frame of captured image (a first image at this time), and so on; the light receiver 112 acquires first images and second images alternately.
It should be noted that the processor 12 may also control the light receiver 112 to acquire a second image first and then a first image, and acquire captured images alternately in that order. In addition, the multiple relationship between the second working frequency and the first working frequency described above is only an example; in other embodiments, the second working frequency may also be three, four, five, six, or more times the first working frequency.
The processor 12 distinguishes each captured image, determining whether it is a first image or a second image. After obtaining at least one frame of the first image and at least one frame of the second image, the processor 12 can calculate the depth information from the first image, the second image, and the reference image. Specifically, since the first image is captured while the light transmitter 111 is not projecting laser, the light that forms the first image includes only ambient infrared light, whereas the second image is captured while the light transmitter 111 is projecting laser, so the light that forms the second image includes both ambient infrared light and the infrared laser emitted by the light transmitter 111. The processor 12 can therefore use the first image to remove the part of the second image formed by ambient infrared light, thereby obtaining a captured image formed only by the infrared laser emitted by the light transmitter 111 (that is, a speckle image formed by the infrared laser).
It can be understood that ambient light includes infrared light of the same wavelength as the laser emitted by the light transmitter 111 (for example, ambient infrared light at 940 nm), and this infrared light is also received by the light receiver 112 when it acquires captured images. When the scene is bright, the proportion of ambient infrared light in the light received by the light receiver 112 increases, making the laser speckles in the captured image less distinct and thus affecting the calculation of the depth image. In this embodiment, the light transmitter 111 and the light receiver 112 work at different working frequencies, so the light receiver 112 can capture the first image formed only by ambient infrared light and the second image formed by both ambient infrared light and the infrared laser emitted by the light transmitter 111, and the part of the second image formed by ambient infrared light is removed on the basis of the first image. The laser speckles can thus be distinguished, and the captured image formed only by the infrared laser emitted by the light transmitter 111 can be used to calculate the depth information; laser speckle matching is not affected, partial or complete loss of depth information is avoided, and the accuracy of the depth information is improved.
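A minimal sketch of the ambient-light removal described above: the background (first) image, captured with the laser off, is subtracted from the interference speckle (second) image, captured with the laser on, to recover the laser-only speckle image. The array dtypes and the clipping to non-negative values are illustrative assumptions.

```python
import numpy as np

def laser_only_speckle(first_image: np.ndarray, second_image: np.ndarray) -> np.ndarray:
    """Remove the ambient-infrared contribution (first image) from the interference
    speckle image (second image), leaving the speckle formed by the projected laser."""
    diff = second_image.astype(np.int32) - first_image.astype(np.int32)
    return np.clip(diff, 0, None).astype(second_image.dtype)
```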
In some embodiments, step 0133 includes:
01331: determining the working state of the light transmitter 111 at the capture time according to the capture time of each frame of captured image;
01332: adding an image type to each frame of captured image according to the working state; and
01333: distinguishing the first image from the second image according to the image type.
Referring again to FIG. 14, in some embodiments steps 01331, 01332, and 01333 may all be implemented by the distinguishing unit 253. In other words, the distinguishing unit 253 may also be used to determine the working state of the light transmitter 111 at the capture time according to the capture time of each frame of captured image, add an image type to each frame of captured image according to the working state, and distinguish the first image from the second image according to the image type.
Referring to FIG. 1 and FIG. 2, in some embodiments steps 01331, 01332, and 01333 may all be implemented by the processor 12. In other words, the processor 12 may also be used to determine the working state of the light transmitter 111 at the capture time according to the capture time of each frame of captured image, add an image type to each frame of captured image according to the working state, and distinguish the first image from the second image according to the image type.
Specifically, each time the processor 12 receives a frame of captured image from the light receiver 112, it adds an image type (stream_type) to the captured image so that the first image and the second image can be distinguished by image type in subsequent processing. Specifically, while the light receiver 112 is acquiring captured images, the processor 12 monitors the working state of the light transmitter 111 in real time through the I2C bus. Each time the processor 12 receives a frame of captured image from the light receiver 112, it first obtains the capture time of that image, then judges from the capture time whether the light transmitter 111 was projecting laser or not at that time, and adds the image type to the captured image based on the judgment result. The capture time of a captured image may be the start time or end time at which the light receiver 112 acquires that frame, any time between the start time and the end time, and so on. In this way, each frame of captured image can be matched to the working state of the light transmitter 111 (projecting laser or not projecting laser) during its acquisition, and the type of the captured image can be distinguished accurately. In one example, the structure of the image type stream_type is shown in Table 1:
Table 1 (structure of the image type stream_type; the original table is provided as the image Figure PCTCN2020088888-appb-000001)
When stream is 0 in Table 1, it means that the data stream at this time is an image formed by infrared light and/or infrared laser. When light is 00, it means that the data stream at this time was acquired when no device was projecting infrared light and/or infrared laser (only ambient infrared light), so the processor 12 may add an image type of 000 to the captured image to identify it as a first image. When light is 01, it means that the data stream at this time was acquired while the light transmitter 111 was projecting infrared laser (both ambient infrared light and infrared laser), so the processor 12 may add an image type of 001 to the captured image to identify it as a second image. The processor 12 can subsequently distinguish the image type of each captured image according to stream_type.
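A minimal sketch of the tagging logic of steps 01331 to 01333; the enum values mirror the 000/001 codes above, while the emitter-state lookup callable and the function signature are illustrative assumptions.

```python
from enum import Enum

class StreamType(Enum):
    FIRST_IMAGE = 0b000   # ambient infrared light only (laser off) -> background image
    SECOND_IMAGE = 0b001  # ambient infrared light plus projected infrared laser

def tag_frame(capture_time: float, emitter_was_projecting) -> StreamType:
    # Step 01331: look up the transmitter's working state at the capture time
    # (emitter_was_projecting is an illustrative callable backed by I2C monitoring).
    projecting = emitter_was_projecting(capture_time)
    # Step 01332: add the image type according to the working state;
    # step 01333 then distinguishes first and second images by this type.
    return StreamType.SECOND_IMAGE if projecting else StreamType.FIRST_IMAGE
```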
In some embodiments, the processor 12 includes a first storage area, a second storage area, and a logical subtraction circuit, the logical subtraction circuit being connected to both the first storage area and the second storage area. The first storage area is used to store the first image, the second storage area is used to store the second image, and the logical subtraction circuit is used to process the first image and the second image to obtain the speckle image formed by the infrared laser. Specifically, the logical subtraction circuit reads the first image from the first storage area and the second image from the second storage area, and after obtaining the first image and the second image, performs subtraction on them to obtain the speckle image formed by the infrared laser. The logical subtraction circuit is also connected to a depth calculation module in the processor 12 (for example, an ASIC dedicated to depth calculation); the logical subtraction circuit sends the speckle image formed by the infrared laser to the depth calculation module, and the depth calculation module calculates the depth information from that speckle image and the reference image.
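The data path around the logical subtraction circuit can be sketched as follows; the buffer arguments and the depth_module object with a compute() method are illustrative stand-ins for the first and second storage areas and the dedicated depth calculation module.

```python
import numpy as np

def depth_pipeline(first_buffer: np.ndarray,
                   second_buffer: np.ndarray,
                   reference_image: np.ndarray,
                   depth_module) -> np.ndarray:
    """Read the background frame and the interference frame, subtract them (the role of
    the logical subtraction circuit), and hand the laser-only speckle image onward."""
    speckle = np.clip(second_buffer.astype(np.int32) - first_buffer.astype(np.int32),
                      0, None).astype(second_buffer.dtype)
    # depth_module.compute() stands in for the depth-calculation ASIC, which matches
    # the speckle image against the pre-stored reference image to obtain depth.
    return depth_module.compute(speckle, reference_image)
```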
Referring to FIG. 16, the present application further provides one or more non-volatile computer-readable storage media 200 containing computer-readable instructions. When the computer-readable instructions are executed by a processor 300, the processor 300 performs the control method described in any one of the above embodiments. The processor 300 may be the processor 12 in FIG. 1 and FIG. 2.
For example, referring to FIG. 3, when the computer-readable instructions are executed by the processor 300, the processor 300 performs the following steps:
031: controlling the light transmitter 111 to emit a predetermined number of frames of test laser toward the current scene;
032: controlling the light receiver 112 to receive the test laser reflected by the current scene;
033: obtaining depth information of the current scene according to the received test laser;
034: determining whether the depth information contains a depth smaller than a preset safety distance; and
035: if so, controlling the terminal 10 to enter a safe mode.
In the description of this specification, reference to the terms "certain embodiments", "one embodiment", "some embodiments", "exemplary embodiment", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" and "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present application. Those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application, which is defined by the claims and their equivalents.

Claims (22)

  1. A control method for a terminal, the terminal comprising a depth camera, the depth camera comprising a light transmitter and a light receiver, wherein the control method comprises:
    controlling the light transmitter to emit a predetermined number of frames of test laser toward a current scene;
    controlling the light receiver to receive the test laser reflected by the current scene;
    obtaining depth information of the current scene according to the received test laser;
    determining whether the depth information contains a depth smaller than a preset safety distance; and
    if so, controlling the terminal to enter a safe mode.
  2. The control method according to claim 1, wherein the control method further comprises:
    if the depth information does not contain a depth smaller than the preset safety distance, controlling the terminal to obtain depth information of the current scene in a set mode;
    determining whether the depth information obtained in the set mode contains a depth smaller than the preset safety distance; and
    if so, controlling the terminal to enter the safe mode.
  3. The control method according to claim 1, wherein the controlling the terminal to enter a safe mode comprises:
    controlling the terminal to issue a prompt signal; and/or
    controlling the light emitter to emit laser light at a preset safe frequency; and/or
    controlling the light emitter to emit laser light at a preset safe amplitude.
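    Claim 3 lists the possible safe-mode actions without fixing their values. A minimal sketch, assuming hypothetical preset safe values and setting names (none of which are taken from the application), could raise a prompt and clamp the emitter configuration like this:

```python
from dataclasses import dataclass

SAFE_FREQUENCY_HZ = 5.0   # hypothetical preset safe emission frequency
SAFE_AMPLITUDE_MA = 50.0  # hypothetical preset safe drive amplitude


@dataclass
class EmitterSettings:
    frequency_hz: float
    amplitude_ma: float


def apply_safe_mode(settings: EmitterSettings, notify=print) -> EmitterSettings:
    """Issue a prompt signal and clamp the emitter to the preset safe values."""
    notify("Object too close to the laser projector - entering safe mode")
    return EmitterSettings(
        frequency_hz=min(settings.frequency_hz, SAFE_FREQUENCY_HZ),
        amplitude_ma=min(settings.amplitude_ma, SAFE_AMPLITUDE_MA),
    )


print(apply_safe_mode(EmitterSettings(frequency_hz=30.0, amplitude_ma=300.0)))
```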
  4. The control method according to claim 1, wherein the control method further comprises:
    determining, according to the depth information, whether a human eye is present in the current scene; and
    if so, performing the step of determining whether the depth information contains a depth less than the preset safety distance.
  5. The control method according to claim 1, wherein the predetermined number of frames comprises at least two frames, and the acquiring depth information of the current scene according to the received test laser comprises:
    acquiring first depth information of the current scene according to the received test laser of a previous frame;
    acquiring second depth information of the current scene according to the received test laser of a subsequent frame; and
    calculating, according to the first depth information, the second depth information, the emission time of the previous frame of test laser and the emission time of the subsequent frame of test laser, the depth information of the current scene at the time the light emitter emits a next frame of laser light.
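    Claim 5 does not state how the two test-frame depths and their emission times are combined. One plausible realisation, assumed here purely for illustration, is a per-pixel linear extrapolation to the emission time of the next frame; the function name and the sample numbers are hypothetical.

```python
import numpy as np


def extrapolate_depth(d1: np.ndarray, d2: np.ndarray,
                      t1: float, t2: float, t_next: float) -> np.ndarray:
    """Linearly extrapolate per-pixel depth measured at t1 and t2 to the
    emission time t_next of the next full-power frame."""
    rate = (d2 - d1) / (t2 - t1)      # metres per second, per pixel
    return d2 + rate * (t_next - t2)


# Example: an object approaching the camera at 0.5 m/s
d1 = np.full((2, 2), 0.40)            # depth from the first test frame (t1)
d2 = np.full((2, 2), 0.35)            # depth from the second test frame (t2)
print(extrapolate_depth(d1, d2, t1=0.0, t2=0.1, t_next=0.2))  # ~0.30 m everywhere
```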
  6. The control method according to claim 2, wherein the controlling the terminal to acquire depth information of the current scene in a set mode comprises:
    controlling the light emitter to emit laser light toward the current scene at a first operating frequency;
    controlling the light receiver to obtain captured images at a second operating frequency, the second operating frequency being greater than the first operating frequency;
    distinguishing, among the captured images, a first image captured while the light emitter is not emitting laser light from a second image captured while the light emitter is emitting laser light; and
    calculating depth information according to the first image, the second image and a reference image.
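    Claim 6 leaves open how depth is computed from the first image, the second image and the reference image. A common structured-light approach, offered here only as an assumed illustration rather than the claimed method, is to subtract the ambient-only first image from the second image and block-match the remaining speckle against the stored reference pattern; the focal length, baseline and every function name below are assumptions.

```python
import numpy as np

FOCAL_PX = 500.0    # hypothetical focal length of the receiver, in pixels
BASELINE_M = 0.03   # hypothetical emitter-receiver baseline, in metres


def speckle_image(second: np.ndarray, first: np.ndarray) -> np.ndarray:
    """Remove ambient light: second image (laser on) minus first image (laser off)."""
    return np.clip(second.astype(np.int32) - first.astype(np.int32), 0, None)


def match_row(clean_row: np.ndarray, ref_row: np.ndarray,
              block: int = 9, max_disp: int = 32) -> np.ndarray:
    """Brute-force block matching of one speckle row against the reference row."""
    half = block // 2
    disp = np.zeros(len(clean_row))
    for x in range(half, len(clean_row) - half):
        patch = clean_row[x - half:x + half + 1]
        best_cost, best_d = np.inf, 0
        for d in range(0, min(max_disp, x - half) + 1):
            cand = ref_row[x - d - half:x - d + half + 1]
            cost = np.abs(patch - cand).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp


def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
    """Triangulate depth from the pixel offset against the reference pattern."""
    return np.where(disparity_px > 0,
                    FOCAL_PX * BASELINE_M / np.maximum(disparity_px, 1e-6),
                    np.inf)


# Example: a reference speckle row and the same row shifted by 4 pixels
rng = np.random.default_rng(0)
ref = rng.integers(0, 255, size=128).astype(np.int32)
scene = np.roll(ref, 4)                      # disparity of 4 px everywhere
disparity = match_row(scene, ref)
print(np.median(depth_from_disparity(disparity[16:-16])))  # 500 * 0.03 / 4 = 3.75 m
```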
  7. The control method according to claim 6, wherein the distinguishing, among the captured images, a first image captured while the light emitter is not emitting laser light from a second image captured while the light emitter is emitting laser light comprises:
    determining, according to the capture time of each frame of the captured images, the working state of the light emitter at that capture time;
    adding an image type to each frame of the captured images according to the working state; and
    distinguishing the first image from the second image according to the image type.
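    Assuming the receiver timestamps are synchronised with a known emitter period and duty cycle (a detail the claim does not specify), the image-type tagging of claim 7 could be as simple as the following sketch; the period, duty and labels are hypothetical.

```python
def label_frames(timestamps_ms, emit_period_ms=100, emit_on_ms=50):
    """Tag each captured frame as a 'second' image (laser on) or a 'first'
    image (laser off) from its acquisition time, assuming the emitter is on
    during the first emit_on_ms milliseconds of every emit_period_ms period."""
    labels = []
    for t in timestamps_ms:
        phase = t % emit_period_ms
        labels.append("second" if phase < emit_on_ms else "first")
    return labels


# Receiver running at twice the emitter frequency: frames alternate on/off
print(label_frames([0, 50, 100, 150]))  # ['second', 'first', 'second', 'first']
```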
  8. A control device for a terminal, the terminal comprising a depth camera, the depth camera comprising a light emitter and a light receiver, wherein the control device comprises:
    a first control module, configured to control the light emitter to emit a predetermined number of frames of test laser toward a current scene;
    a second control module, configured to control the light receiver to receive the test laser reflected by the current scene;
    an acquisition module, configured to acquire depth information of the current scene according to the received test laser;
    a first determination module, configured to determine whether the depth information contains a depth less than a preset safety distance; and
    a third control module, configured to control the terminal to enter a safe mode if the depth information contains a depth less than the preset safety distance.
  9. The control device according to claim 8, wherein:
    the third control module is further configured to control the terminal to acquire depth information of the current scene in a set mode if the depth information contains no depth less than the preset safety distance;
    the first determination module is further configured to determine whether the depth information acquired in the set mode contains a depth less than the preset safety distance; and
    the third control module is further configured to control the terminal to enter the safe mode if the depth information acquired in the set mode contains a depth less than the preset safety distance.
  10. The control device according to claim 8, wherein the controlling the terminal to enter a safe mode comprises:
    controlling the terminal to issue a prompt signal; and/or
    controlling the light emitter to emit laser light at a preset safe frequency; and/or
    controlling the light emitter to emit laser light at a preset safe amplitude.
  11. The control device according to claim 8, wherein the control device further comprises a second determination module, the second determination module being configured to determine, according to the depth information, whether a human eye is present in the current scene; and
    when the second determination module determines, according to the depth information, that a human eye is present in the current scene, the first determination module is configured to determine whether the depth information contains a depth less than the preset safety distance.
  12. The control device according to claim 8, wherein the predetermined number of frames comprises at least two frames, and the acquisition module comprises:
    a first acquisition unit, configured to acquire first depth information of the current scene according to the received test laser of a previous frame;
    a second acquisition unit, configured to acquire second depth information of the current scene according to the received test laser of a subsequent frame; and
    a first calculation unit, configured to calculate, according to the first depth information, the second depth information, the emission time of the previous frame of test laser and the emission time of the subsequent frame of test laser, the depth information of the current scene at the time the light emitter emits a next frame of laser light.
  13. The control device according to claim 9, wherein the third control module comprises:
    a first control unit, configured to control the light emitter to emit laser light toward the current scene at a first operating frequency;
    a second control unit, configured to control the light receiver to obtain captured images at a second operating frequency, the second operating frequency being greater than the first operating frequency;
    a distinguishing unit, configured to distinguish, among the captured images, a first image captured while the light emitter is not emitting laser light from a second image captured while the light emitter is emitting laser light; and
    a second calculation unit, configured to calculate depth information according to the first image, the second image and a reference image.
  14. The control device according to claim 13, wherein the distinguishing unit is configured to:
    determine, according to the capture time of each frame of the captured images, the working state of the light emitter at that capture time;
    add an image type to each frame of the captured images according to the working state; and
    distinguish the first image from the second image according to the image type.
  15. A terminal, comprising a depth camera and a processor, the depth camera comprising a light emitter and a light receiver, wherein the processor is configured to:
    control the light emitter to emit a predetermined number of frames of test laser toward a current scene;
    control the light receiver to receive the test laser reflected by the current scene;
    acquire depth information of the current scene according to the received test laser;
    determine whether the depth information contains a depth less than a preset safety distance; and
    if so, control the terminal to enter a safe mode.
  16. The terminal according to claim 15, wherein the processor is further configured to:
    if the depth information contains no depth less than the preset safety distance, control the terminal to acquire depth information of the current scene in a set mode;
    determine whether the depth information acquired in the set mode contains a depth less than the preset safety distance; and
    if so, control the terminal to enter the safe mode.
  17. The terminal according to claim 15, wherein the processor is further configured to:
    control the terminal to issue a prompt signal; and/or
    control the light emitter to emit laser light at a preset safe frequency; and/or
    control the light emitter to emit laser light at a preset safe amplitude.
  18. The terminal according to claim 15, wherein the processor is further configured to:
    determine, according to the depth information, whether a human eye is present in the current scene; and
    if so, perform the step of determining whether the depth information contains a depth less than the preset safety distance.
  19. The terminal according to claim 15, wherein the predetermined number of frames comprises at least two frames, and the processor is further configured to:
    acquire first depth information of the current scene according to the received test laser of a previous frame;
    acquire second depth information of the current scene according to the received test laser of a subsequent frame; and
    calculate, according to the first depth information, the second depth information, the emission time of the previous frame of test laser and the emission time of the subsequent frame of test laser, the depth information of the current scene at the time the light emitter emits a next frame of laser light.
  20. The terminal according to claim 16, wherein the processor is further configured to:
    control the light emitter to emit laser light toward the current scene at a first operating frequency;
    control the light receiver to obtain captured images at a second operating frequency, the second operating frequency being greater than the first operating frequency;
    distinguish, among the captured images, a first image captured while the light emitter is not emitting laser light from a second image captured while the light emitter is emitting laser light; and
    calculate depth information according to the first image, the second image and a reference image.
  21. The terminal according to claim 20, wherein the processor is further configured to:
    determine, according to the capture time of each frame of the captured images, the working state of the light emitter at that capture time;
    add an image type to each frame of the captured images according to the working state; and
    distinguish the first image from the second image according to the image type.
  22. One or more non-volatile computer-readable storage media containing computer-readable instructions which, when executed by a processor, cause the processor to perform the control method according to any one of claims 1 to 7.
PCT/CN2020/088888 2019-05-30 2020-05-07 Control method and control device for terminal, terminal, and computer readable storage medium WO2020238569A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910465376.2 2019-05-30
CN201910465376.2A CN110198409B (en) 2019-05-30 2019-05-30 Terminal control method and control device, terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020238569A1 true WO2020238569A1 (en) 2020-12-03

Family

ID=67753566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/088888 WO2020238569A1 (en) 2019-05-30 2020-05-07 Control method and control device for terminal, terminal, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110198409B (en)
WO (1) WO2020238569A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113030107A (en) * 2021-03-08 2021-06-25 深圳中科飞测科技股份有限公司 Detection method, detection system, and non-volatile computer-readable storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110198409B (en) * 2019-05-30 2021-07-09 Oppo广东移动通信有限公司 Terminal control method and control device, terminal and computer readable storage medium
CN112526485B (en) * 2019-09-18 2024-04-09 Oppo广东移动通信有限公司 Fault detection method and device, equipment and storage medium
CN113126111B (en) * 2019-12-30 2024-02-09 Oppo广东移动通信有限公司 Time-of-flight module and electronic device
CN111487632A (en) * 2020-04-06 2020-08-04 深圳蚂里奥技术有限公司 Laser safety control device and control method
CN111580125B (en) * 2020-05-28 2022-09-09 Oppo广东移动通信有限公司 Time-of-flight module, control method thereof and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012129252A1 (en) * 2011-03-24 2012-09-27 Eastman Kodak Company Digital 3d camera using periodic illumination
CN107863678A (en) * 2017-09-27 2018-03-30 深圳奥比中光科技有限公司 Laser safety control method and device based on range sensor
CN108281880A (en) * 2018-02-27 2018-07-13 广东欧珀移动通信有限公司 Control method, control device, terminal, computer equipment and storage medium
CN109066288A (en) * 2018-05-30 2018-12-21 Oppo广东移动通信有限公司 Control system, the control method of terminal and laser projecting apparatus of laser projecting apparatus
CN110198409A (en) * 2019-05-30 2019-09-03 Oppo广东移动通信有限公司 Control method and control device, the terminal and computer readable storage medium of terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10419703B2 (en) * 2014-06-20 2019-09-17 Qualcomm Incorporated Automatic multiple depth cameras synchronization using time sharing
CN107682607B (en) * 2017-10-27 2019-10-22 Oppo广东移动通信有限公司 Image acquiring method, device, mobile terminal and storage medium
CN109194856A (en) * 2018-09-30 2019-01-11 Oppo广东移动通信有限公司 The control method and electronic device of electronic device
CN109598744B (en) * 2018-11-29 2020-12-08 广州市百果园信息技术有限公司 Video tracking method, device, equipment and storage medium
CN109688340A (en) * 2019-01-25 2019-04-26 Oppo广东移动通信有限公司 Time for exposure control method, device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN110198409A (en) 2019-09-03
CN110198409B (en) 2021-07-09

Similar Documents

Publication Publication Date Title
WO2020238569A1 (en) Control method and control device for terminal, terminal, and computer readable storage medium
WO2020259334A1 (en) Adjustment method, adjustment apparatus, terminal and computer-readable storage medium
CN108769509B (en) Control method, apparatus, electronic equipment and the storage medium of camera
KR102380335B1 (en) Scanning laser planarity detection
US9413939B2 (en) Apparatus and method for controlling a camera and infrared illuminator in an electronic device
JP4537255B2 (en) Imaging apparatus and imaging method
US11335028B2 (en) Control method based on facial image, related control device, terminal and computer device
WO2020248896A1 (en) Adjustment method, terminal, and computer-readable storage medium
CN110213480A (en) A kind of focusing method and electronic equipment
US20120108291A1 (en) Image pickup apparatus and mobile phone equipped therewith
WO2020001041A1 (en) Depth processor and three-dimensional image device
CN110072044B (en) Depth camera control method and device, terminal and readable storage medium
CN110062145B (en) Depth camera, electronic device and image acquisition method
US20150309390A1 (en) Image pickup apparatus enabling automatic irradiation direction control, lighting device, image pickup system, automatic irradiation direction control method, and storage medium storing program therefor
WO2020238481A1 (en) Image acquisition method, image acquisition device, electronic device and readable storage medium
CN1126582A (en) Electronic equipment having viewpoint detection apparatus
WO2020087383A1 (en) Image-recognition-based control method and apparatus, and control device
WO2019080907A1 (en) Control method for mobile terminal, device, mobile terminal, and storage medium
CN107925724B (en) Technique for supporting photographing in device having camera and device thereof
CN110245618B (en) 3D recognition device and method
WO2020237657A1 (en) Control method for electronic device, electronic device, and computer-readable storage medium
EP2605505B1 (en) Apparatus and method for controlling a camera and infrared illuminator in an electronic device
WO2020248097A1 (en) Image acquiring method, terminal, computer-readable storage medium
JP2015129876A (en) projector
JP2013074428A (en) Self-photographing determination device, imaging apparatus, program, and self-photographing determination method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20813841

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20813841

Country of ref document: EP

Kind code of ref document: A1