CN109788195B - Electronic equipment and mobile platform

Electronic equipment and mobile platform

Info

Publication number
CN109788195B
Authority
CN
China
Prior art keywords
time
initial depth
light
application processor
depth image
Prior art date
Legal status
Active
Application number
CN201910007545.8A
Other languages
Chinese (zh)
Other versions
CN109788195A (en)
Inventor
张学勇
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910007545.8A
Publication of CN109788195A
Application granted
Publication of CN109788195B

Abstract

The application discloses an electronic device and a mobile platform. The electronic device includes a body and a plurality of time-of-flight components disposed on the body. The plurality of time-of-flight components are respectively located at a plurality of different orientations of the body. Each time-of-flight component includes a light emitter and a light receiver. The light emitter is configured to emit laser pulses toward the outside of the body, and the light receiver is configured to receive the laser pulses emitted by the corresponding light emitter and reflected by the subject. The light emitters of the time-of-flight components in adjacent orientations emit laser pulses in a time-sharing manner, and the light receivers of the time-of-flight components in adjacent orientations are exposed in a time-sharing manner, so as to acquire a panoramic depth image. In the electronic device and the mobile platform of the embodiments of the application, the light emitters at adjacent orientations of the body emit laser pulses in a time-sharing manner and the light receivers are exposed in a time-sharing manner to obtain the panoramic depth image, so that relatively comprehensive depth information can be obtained in a single acquisition.

Description

Electronic equipment and mobile platform
Technical Field
The present application relates to the field of image acquisition technologies, and more particularly, to an electronic device and a mobile platform.
Background
In order to diversify the functions of the electronic device, a depth image acquiring device may be provided on the electronic device to acquire a depth image of a subject. However, a current depth image acquiring device can acquire a depth image of only one direction or one angular range, so the depth information it obtains is limited.
Disclosure of Invention
Embodiments of the present application provide an electronic device and a mobile platform.
The electronic device includes a body and a plurality of time-of-flight components disposed on the body, the plurality of time-of-flight components being respectively located at a plurality of different orientations of the body. Each time-of-flight component includes a light emitter and a light receiver; the light emitter is configured to emit laser pulses toward the outside of the body, and the light receiver is configured to receive the laser pulses emitted by the corresponding light emitter and reflected by a subject. The light emitters of the time-of-flight components in adjacent orientations emit the laser pulses in a time-sharing manner, and the light receivers of the time-of-flight components in adjacent orientations are exposed in a time-sharing manner, so as to acquire a panoramic depth image.
The mobile platform includes a body and a plurality of time-of-flight components disposed on the body, the plurality of time-of-flight components being respectively located at a plurality of different orientations of the body. Each time-of-flight component includes a light emitter and a light receiver; the light emitter is configured to emit laser pulses toward the outside of the body, and the light receiver is configured to receive the laser pulses emitted by the corresponding light emitter and reflected by a subject. The light emitters of the time-of-flight components in adjacent orientations emit the laser pulses in a time-sharing manner, and the light receivers of the time-of-flight components in adjacent orientations are exposed in a time-sharing manner, so as to acquire a panoramic depth image.
In the electronic device and the mobile platform of the embodiments of the application, the light emitters at adjacent orientations of the body emit laser pulses in a time-sharing manner and the light receivers are exposed in a time-sharing manner to obtain a panoramic depth image, so that relatively comprehensive depth information can be obtained in a single acquisition.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 2 is a block diagram of an electronic device according to some embodiments of the present application;
FIG. 3 is a schematic diagram of the time at which a plurality of light emitters time-share emit laser pulses and the time at which a plurality of light receivers time-share expose according to some embodiments of the present application;
FIGS. 4(a) and 4(b) are schematic diagrams of the timing of time-shared emission of laser pulses by multiple light emitters and the timing of time-shared exposure by multiple light receivers of certain embodiments of the present application;
FIGS. 5(a) and 5(b) are schematic diagrams of the timing of time-shared emission of laser pulses by multiple light emitters and the timing of time-shared exposure by multiple light receivers of certain embodiments of the present application;
FIGS. 6(a)-6(c) are schematic diagrams of the timing of time-shared emission of laser pulses by multiple light emitters and the timing of time-shared exposure by multiple light receivers according to some embodiments of the present application;
FIG. 7 is a graphical illustration of the timing of time-shared emission of laser pulses by adjacent oriented light emitters and time-shared exposure of adjacent oriented light receivers in accordance with certain embodiments of the present application;
FIG. 8 is a block diagram of an electronic device according to some embodiments of the present application;
FIG. 9 is a schematic diagram of an application scenario of an electronic device according to some embodiments of the present application;
FIG. 10 is a schematic diagram of a coordinate system for initial depth image stitching according to some embodiments of the present application;
FIGS. 11-15 are schematic views of application scenarios of an electronic device according to some embodiments of the present application;
FIGS. 16-19 are schematic structural views of a mobile platform according to some embodiments of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. The embodiments of the present application described below in conjunction with the drawings are exemplary only and should not be construed as limiting the present application.
Referring to fig. 1 and 2 together, an electronic device 100 according to an embodiment of the present disclosure includes a body 10, a time-of-flight assembly 20, a camera assembly 30, a microprocessor 40, and an application processor 50.
The body 10 has a plurality of different orientations. For example, in FIG. 1, the body 10 may have four different orientations arranged in the clockwise direction: a first orientation, a second orientation, a third orientation, and a fourth orientation, wherein the first orientation is opposite the third orientation, and the second orientation is opposite the fourth orientation. The first orientation corresponds to the right side of the body 10, the second orientation corresponds to the lower side of the body 10, the third orientation corresponds to the left side of the body 10, and the fourth orientation corresponds to the upper side of the body 10.
The time-of-flight components 20 are disposed on the body 10. The number of time-of-flight components 20 may be plural, and the plurality of time-of-flight components 20 are located at a plurality of different orientations of the body 10. Specifically, the number of time-of-flight components 20 may be four, namely time-of-flight components 20a, 20b, 20c, and 20d. Time-of-flight component 20a is disposed at the first orientation, time-of-flight component 20b at the second orientation, time-of-flight component 20c at the third orientation, and time-of-flight component 20d at the fourth orientation. Of course, the number of time-of-flight components 20 may also be eight (or any other number greater than two, in particular any number greater than four), with two (or any other number of) time-of-flight components 20 provided for each of the first, second, third, and fourth orientations. The embodiments of the present application are described by taking four time-of-flight components 20 as an example. It can be understood that four time-of-flight components 20 are sufficient to obtain a panoramic depth image (a panoramic depth image means that its field angle is greater than or equal to 180 degrees; for example, the field angle of the panoramic depth image may be 180 degrees, 240 degrees, 360 degrees, 480 degrees, 720 degrees, and the like), which also helps save the manufacturing cost of the electronic device 100 and reduce its volume and power consumption. The electronic device 100 of the present embodiment may be a portable electronic device provided with a plurality of time-of-flight components 20, such as a mobile phone, a tablet computer, or a notebook computer; in this case, the body 10 may be a mobile phone body, a tablet computer body, a notebook computer body, and the like.
Each time-of-flight component 20 includes a light emitter 22 and a light receiver 24. The light emitter 22 is configured to emit laser pulses toward the outside of the body 10, and the light receiver 24 is configured to receive the laser pulses emitted by the corresponding light emitter 22 and reflected by the subject. Specifically, time-of-flight component 20a includes a light emitter 22a and a light receiver 24a, time-of-flight component 20b includes a light emitter 22b and a light receiver 24b, time-of-flight component 20c includes a light emitter 22c and a light receiver 24c, and time-of-flight component 20d includes a light emitter 22d and a light receiver 24d. The light emitters 22a, 22b, 22c, and 22d emit laser pulses toward the first, second, third, and fourth orientations outside the body 10, respectively, and the light receivers 24a, 24b, 24c, and 24d respectively receive the laser pulses emitted by the light emitters 22a, 22b, 22c, and 22d and reflected by the subjects in the first, second, third, and fourth orientations, so that the time-of-flight components 20 cover different regions outside the body 10. Compared with an existing electronic device that must be rotated through 360 degrees to acquire comprehensive depth information, the electronic device 100 of the present embodiment can acquire comprehensive depth information in a single acquisition without rotating; the implementation is simple and the response is fast.
The field angle of each light emitter 22 and each light receiver 24 is any value from 80 degrees to 100 degrees. The following description takes the field angle of the light receiver 24 as an example; the field angle of the light emitter 22 may be the same as, or approximately the same as, that of the corresponding light receiver 24, and is not described repeatedly.
In one embodiment, the field angles of light receiver 24a, light receiver 24b, light receiver 24c, and light receiver 24d are all 80 degrees. When the field angle of the optical receiver 24 does not exceed 80 degrees, the lens distortion is small, the quality of the obtained initial depth image is good, the quality of the obtained panoramic depth image is good, and more accurate depth information can be obtained.
In one embodiment, the sum of the field angles of optical receiver 24a, optical receiver 24b, optical receiver 24c, and optical receiver 24d is equal to 360 degrees. Specifically, the angles of view of the optical receiver 24a, the optical receiver 24b, the optical receiver 24c and the optical receiver 24d may all be 90 degrees, and the angles of view of the four optical receivers 24 do not overlap with each other, so as to achieve acquisition of a 360-degree or approximately 360-degree panoramic depth image. Alternatively, the field angle of the optical receiver 24a may be 80 degrees, the field angle of the optical receiver 24b may be 100 degrees, the field angle of the optical receiver 24c may be 80 degrees, the field angle of the optical receiver 24d may be 100 degrees, and the like, and the four optical receivers 24 may obtain a 360-degree or approximately 360-degree panoramic depth image through angle complementation.
In one embodiment, the sum of the field angles of light receivers 24a, 24b, 24c, and 24d is greater than 360 degrees, and the field angles of at least two of the four light receivers 24 overlap each other. Specifically, the angles of view of the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d may all be 100 degrees, and the angles of view between two of the four light receivers 24 overlap each other. When the panoramic depth image is obtained, the edge overlapping parts of the four initial depth images can be identified, and then the four initial depth images are spliced into the 360-degree panoramic depth image. Since the field angles of the four optical receivers 24 overlap each other, it can be ensured that the acquired panoramic depth image covers 360 degrees of depth information outside the body 10.
Of course, the specific value of the field angle of each optical receiver 24 (and each optical transmitter 22) is not limited to the above example, and those skilled in the art can set the field angle of the optical receiver 24 (and the optical transmitter 22) to any value between 80 degrees and 100 degrees as required, for example: the field angle of the optical receiver 24 is 80 degrees, 82 degrees, 84 degrees, 86 degrees, 90 degrees, 92 degrees, 94 degrees, 96 degrees, 98 degrees, 100 degrees or any value therebetween, and the field angle of the optical transmitter 22 is 80 degrees, 82 degrees, 84 degrees, 86 degrees, 90 degrees, 92 degrees, 94 degrees, 96 degrees, 98 degrees, 100 degrees or any value therebetween, which is not limited herein.
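As an illustration of the angle complementation and overlap described above, the following minimal sketch (Python, not part of the patent) checks whether a set of receivers, each described by an assumed (center azimuth, field angle) pair, covers a full 360 degrees; overlap between adjacent fields of view is allowed.

    def covers_full_circle(receivers, step_deg=1.0):
        """Return True if every azimuth from 0 to 360 degrees falls inside at
        least one receiver's field of view (overlap between receivers is allowed)."""
        def inside(azimuth, center, fov):
            # smallest angular distance between the azimuth and the sector center
            diff = abs((azimuth - center + 180.0) % 360.0 - 180.0)
            return diff <= fov / 2.0

        azimuth = 0.0
        while azimuth < 360.0:
            if not any(inside(azimuth, c, f) for c, f in receivers):
                return False
            azimuth += step_deg
        return True

    # four receivers spaced 90 degrees apart, each with a 100-degree field angle
    print(covers_full_circle([(0, 100), (90, 100), (180, 100), (270, 100)]))  # True
    # 80-degree field angles at the same spacing leave gaps between sectors
    print(covers_full_circle([(0, 80), (90, 80), (180, 80), (270, 80)]))      # False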
With continued reference to fig. 1 and 2, generally, the laser pulses emitted by the adjacent light emitters 22 are likely to interfere with each other, and the laser pulses emitted by the light emitters 22 in the opposite direction are not likely to interfere with each other. Therefore, in order to avoid such mutual interference and improve the accuracy of the acquired depth information, the adjacent directional light emitters 22 may time-divisionally emit laser pulses, and correspondingly, the adjacent directional light receivers 24 may also time-divisionally expose to acquire a panoramic depth image. Specifically, the light emitter 22a in the first direction emits a laser pulse in a time-sharing manner with the light emitter 22b in the second direction, the light emitter 22a in the first direction emits a laser pulse in a time-sharing manner with the light emitter 22d in the fourth direction, the light emitter 22c in the third direction emits a laser pulse in a time-sharing manner with the light emitter 22b in the second direction, and the light emitter 22c in the third direction emits a laser pulse in a time-sharing manner with the light emitter 22d in the fourth direction. The light emitter 22a in the first direction and the light emitter 22c in the third direction may emit laser pulses at the same time or at different times; the second directional optical transmitter 22b and the fourth directional optical transmitter 22d may emit laser pulses simultaneously or in a time-sharing manner, which is not limited herein. Similarly, the light receiver 24a in the first direction is exposed in a time-sharing manner with the light receiver 24b in the second direction, the light receiver 24a in the first direction is exposed in a time-sharing manner with the light receiver 24d in the fourth direction, the light receiver 24c in the third direction is exposed in a time-sharing manner with the light receiver 24b in the second direction, and the light receiver 24c in the third direction is exposed in a time-sharing manner with the light receiver 24d in the fourth direction. The light receiver 24a in the first direction and the light receiver 24c in the third direction can be exposed at the same time or in a time-sharing manner; the light receiver 24b in the second orientation and the light receiver 24d in the fourth orientation may be exposed simultaneously, or exposed in a time-sharing manner, which is not limited herein.
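The adjacent/opposite rule above can be pictured as a two-phase schedule. The following sketch is a hypothetical control loop (the emitter and receiver identifiers and the 5 ms pulse duration are illustrative assumptions, not values from the patent): the opposite pairs fire and expose together, and adjacent orientations never do.

    import time

    PHASES = [
        {"emitters": ("22a", "22c"), "receivers": ("24a", "24c")},  # opposite pair (first/third)
        {"emitters": ("22b", "22d"), "receivers": ("24b", "24d")},  # opposite pair (second/fourth)
    ]

    def run_alternating_period(pulse_s=0.005):
        """One alternating period T made of two phases: while one phase's
        emitters fire and its receivers are exposed, the other phase is idle,
        so adjacent orientations never emit or expose at the same time."""
        for phase in PHASES:
            print("fire", phase["emitters"], "and expose", phase["receivers"])
            time.sleep(pulse_s)  # stand-in for the pulse / exposure window

    run_alternating_period()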
Preferably, the optical transmitters 22 in the plurality of time-of-flight components 20 transmit laser pulses in a time-shared manner, and correspondingly, the optical receivers 24 in the plurality of time-of-flight components 20 are also exposed in a time-shared manner to acquire the panoramic depth image. Wherein the optical transmitters 22 in the other time-of-flight components 20 are turned off while the optical receiver 24 in any one time-of-flight component 20 is exposed. Each optical receiver 24 can only receive the laser pulse emitted by the corresponding optical transmitter 22, and does not receive the laser pulses emitted by the other optical transmitters 22, so that the above-mentioned interference problem can be better avoided, and the accuracy of the received laser pulse can be ensured.
Specifically, referring to fig. 3 and 4, in one embodiment, the plurality of light emitters 22 in the plurality of time-of-flight components 20 sequentially and uninterruptedly emit laser pulses, and the exposure time of the light receiver 24 in each time-of-flight component 20 is within the time range of the laser pulses emitted by the light emitter 22. The light emitter 22a, the light emitter 22b, the light emitter 22c, and the light emitter 22d emit the laser pulses in time division, and the light emitter 22b starts emitting the laser pulses immediately from the timing at which the light emitter 22a stops emitting the laser pulses, the light emitter 22c starts emitting the laser pulses immediately from the timing at which the light emitter 22b stops emitting the laser pulses, the light emitter 22d starts emitting the laser pulses immediately from the timing at which the light emitter 22c stops emitting the laser pulses, and the light emitter 22a starts emitting the laser pulses immediately from the timing at which the light emitter 22d stops emitting the laser pulses. The time at which light emitter 22a, light emitter 22b, light emitter 22c, and light emitter 22d emit laser pulses collectively constitute an alternating period T. At this time, the exposure modes of the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d may include the following two types:
(1) the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d are sequentially and continuously exposed. Specifically, the exposure times of the four photoreceivers 24 coincide with the times at which the corresponding phototransmitters 22 emit laser pulses, respectively. As shown in fig. 3, the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d are alternately exposed in sequence. The exposure start time of the optical receiver 24a coincides with the start time of the optical transmitter 22a emitting the laser pulse of the current alternation period T, and the exposure stop time of the optical receiver 24a coincides with the stop time of the optical transmitter 22a emitting the laser pulse of the current alternation period T; the exposure start time of the optical receiver 24b coincides with the start time of the optical transmitter 22b emitting the laser pulse of the current alternation period T, and the exposure stop time of the optical receiver 24b coincides with the stop time of the optical transmitter 22b emitting the laser pulse of the current alternation period T; the exposure start time of the photoreceiver 24c coincides with the start time of the laser pulse emitted by the phototransmitter 22c of the current alternation period T, and the exposure stop time of the photoreceiver 24c coincides with the stop time of the laser pulse emitted by the phototransmitter 22c of the current alternation period T; the exposure start time of the photoreceiver 24d coincides with the start time of the laser pulse emitted from the phototransmitter 22d of the current alternation period T, and the exposure off time of the photoreceiver 24d coincides with the off time of the laser pulse emitted from the phototransmitter 22d of the current alternation period T. At this time, the optical receiver 24a can receive only the laser pulses emitted by the optical transmitter 22a, but not the laser pulses emitted by the optical transmitters 22b, 22c, and 22 d; the optical receiver 24b can only receive the laser pulses emitted by the optical transmitter 22b, but not the laser pulses emitted by the optical transmitters 22a, 22c and 22 d; the optical receiver 24c can only receive the laser pulses emitted by the optical transmitter 22c, but not the laser pulses emitted by the optical transmitters 22a, 22b and 22 d; the optical receiver 24d can only receive the laser pulses emitted by the optical transmitter 22d, but not the laser pulses emitted by the optical transmitters 22a, 22b, and 22 c. In the control mode of sequentially and continuously exposing the light receiver 24a, the light receiver 24b, the light receiver 24c and the light receiver 24d, the light receiver 24a and the light emitter 22a are synchronously controlled, the light receiver 24b and the light emitter 22b are synchronously controlled, the light receiver 24c and the light emitter 22c are synchronously controlled, the light receiver 24d and the light emitter 22d are synchronously controlled, and the control logic is simpler.
(2) As shown in fig. 4, the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d are exposed sequentially and at predetermined time intervals. Wherein the exposure time of at least one of the light receivers 24 is less than the time that the corresponding light emitter 22 emits a laser pulse. Specifically, as shown in fig. 4(a), in one example, the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d are alternately exposed in sequence. The exposure time of the light receiver 24a is less than the time of the light emitter 22a emitting the laser light pulse, the exposure time of the light receiver 24b is equal to the time of the light emitter 22b emitting the laser light pulse, the exposure time of the light receiver 24c is less than the time of the light emitter 22c emitting the laser light pulse, and the exposure time of the light receiver 24d is equal to the time of the light emitter 22d emitting the laser light pulse. The exposure starting time of the optical receiver 24a is greater than the starting time of the optical transmitter 22a emitting the laser pulse in the current alternation period T, and the exposure ending time is less than the ending time of the optical transmitter 22a emitting the laser pulse in the current alternation period T; the exposure start time and the exposure stop time of the photoreceiver 24b coincide with the start time and the stop time, respectively, of the laser pulse emitted by the phototransmitter 22b of the current alternation period T; the exposure start time of the optical receiver 24c is greater than the start time of the optical transmitter 22c emitting the laser pulse in the current alternation period T, and the exposure stop time is less than the stop time of the optical transmitter 22c emitting the laser pulse in the current alternation period T; the exposure start timing and the exposure off timing of the light receiver 24d coincide with the start timing and the off timing, respectively, at which the light emitter 22d of the current alternation period T emits the laser pulse. The light receiver 24a exposure off time and the light receiver 24b exposure start time of the current alternation cycle T are separated by a predetermined time Δ T1, the light receiver 24b exposure off time and the light receiver 24c exposure start time of the current alternation cycle T are separated by a predetermined time Δ T2, the light receiver 24c exposure off time and the light receiver 24d exposure start time of the current alternation cycle T are separated by a predetermined time Δ T3, the light receiver 24d exposure off time and the light receiver 24a exposure start time of the next alternation cycle T are separated by a predetermined time Δ T4, Δ T1, Δ T2, Δ T3 and Δ T4 may all be equal, or all be unequal, or be partially equal, or partially unequal. The optical receiver 24a can only receive the laser pulses emitted by the optical transmitter 22a, the optical receiver 24b can only receive the laser pulses emitted by the optical transmitter 22b, the optical receiver 24c can only receive the laser pulses emitted by the optical transmitter 22c, and the optical receiver 24d can only receive the laser pulses emitted by the optical transmitter 22 d. As shown in fig. 4(b), in another example, the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d are alternately exposed in sequence. 
The exposure time of the light receiver 24a is shorter than the time of the light emitter 22a emitting the laser light pulse, the exposure time of the light receiver 24b is shorter than the time of the light emitter 22b emitting the laser light pulse, the exposure time of the light receiver 24c is shorter than the time of the light emitter 22c emitting the laser light pulse, and the exposure time of the light receiver 24d is shorter than the time of the light emitter 22d emitting the laser light pulse. The exposure starting time of the optical receiver 24a is greater than the starting time of the optical transmitter 22a emitting the laser pulse in the current alternation period T, and the exposure ending time is less than the ending time of the optical transmitter 22a emitting the laser pulse in the current alternation period T; the exposure starting time of the optical receiver 24b is greater than the starting time of the optical transmitter 22b emitting the laser pulse in the current alternation period T, and the exposure ending time is less than the ending time of the optical transmitter 22b emitting the laser pulse in the current alternation period T; the exposure start time of the optical receiver 24c is greater than the start time of the optical transmitter 22c emitting the laser pulse in the current alternation period T, and the exposure stop time is less than the stop time of the optical transmitter 22c emitting the laser pulse in the current alternation period T; the exposure start time of the light receiver 24d is greater than the start time of the laser pulse emitted from the light emitter 22d of the current alternation period T, and the exposure stop time is less than the stop time of the laser pulse emitted from the light emitter 22d of the current alternation period T. The light receiver 24a exposure off time and the light receiver 24b exposure start time of the current alternation cycle T are separated by a predetermined time Δ T1, the light receiver 24b exposure off time and the light receiver 24c exposure start time of the current alternation cycle T are separated by a predetermined time Δ T2, the light receiver 24c exposure off time and the light receiver 24d exposure start time of the current alternation cycle T are separated by a predetermined time Δ T3, the light receiver 24d exposure off time and the light receiver 24a exposure start time of the next alternation cycle T are separated by a predetermined time Δ T4, Δ T1, Δ T2, Δ T3 and Δ T4 may all be equal, or all be unequal, or be partially equal, or partially unequal. The optical receiver 24a can only receive the laser pulses emitted by the optical transmitter 22a, the optical receiver 24b can only receive the laser pulses emitted by the optical transmitter 22b, the optical receiver 24c can only receive the laser pulses emitted by the optical transmitter 22c, and the optical receiver 24d can only receive the laser pulses emitted by the optical transmitter 22 d. In the control mode of sequentially and sequentially exposing the light receiver 24a, the light receiver 24b, the light receiver 24c and the light receiver 24d at predetermined time intervals, the exposure time of at least one light receiver 24 is shorter than the time of the corresponding light emitter 22 emitting the laser pulse, which is beneficial to reducing the power consumption of the electronic device 100.
In the control mode in which the plurality of light emitters 22 in the plurality of time-of-flight components 20 sequentially and uninterruptedly emit laser pulses, the frame rate at which the time-of-flight components 20 acquire the initial depth images is higher, so this mode is suitable for scenes with a higher requirement on the frame rate of initial depth image acquisition.
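A minimal sketch of this back-to-back control mode, using an assumed 5 ms pulse per emitter: with margin_ms = 0 each exposure window coincides with its emitter's pulse window (mode (1) above); with a positive margin each exposure sits strictly inside its pulse window (mode (2) above). The alternating period T and the resulting frame rate fall out of the schedule.

    PULSE_MS = 5.0                      # assumed pulse duration per emitter
    EMITTERS = ["22a", "22b", "22c", "22d"]

    def uninterrupted_schedule(margin_ms=0.0):
        """Back-to-back emission; each receiver's exposure window is nested
        inside (margin_ms > 0) or equal to (margin_ms = 0) its pulse window."""
        rows, t = [], 0.0
        for emitter in EMITTERS:
            rows.append({
                "emitter": emitter,
                "pulse_ms": (t, t + PULSE_MS),
                "receiver": emitter.replace("22", "24"),
                "exposure_ms": (t + margin_ms, t + PULSE_MS - margin_ms),
            })
            t += PULSE_MS               # the next emitter starts immediately
        return rows, t                  # t is the alternating period T

    rows, period_T = uninterrupted_schedule(margin_ms=0.5)
    for row in rows:
        print(row)
    print(f"T = {period_T} ms -> about {1000.0 / period_T:.0f} initial depth images per second per component")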
Referring to FIGS. 5 and 6, in another embodiment, the multiple light emitters 22 in the multiple time-of-flight components 20 sequentially emit laser pulses at predetermined time intervals; that is, the light emitters 22a, 22b, 22c, and 22d alternately emit laser pulses, the interval between the cut-off time at which the light emitter 22a stops emitting laser pulses and the start time at which the light emitter 22b starts emitting laser pulses in the current alternating period T is a predetermined time Δt5, the interval between the cut-off time of the light emitter 22b and the start time of the light emitter 22c in the current alternating period T is a predetermined time Δt6, the interval between the cut-off time of the light emitter 22c and the start time of the light emitter 22d in the current alternating period T is a predetermined time Δt7, and the interval between the cut-off time of the light emitter 22d in the current alternating period T and the start time of the light emitter 22a in the next alternating period T is a predetermined time Δt8. Δt5, Δt6, Δt7, and Δt8 may all be equal, all be unequal, or be partially equal and partially unequal. The times at which the light emitters 22a, 22b, 22c, and 22d emit laser pulses, together with the predetermined times Δt5, Δt6, Δt7, and Δt8, constitute an alternating period T. At this time, the exposure modes of the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d may include the following two types:
(1) the light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d are sequentially and continuously exposed. Specifically, as shown in fig. 5(a), in one example, the exposure start time of the photoreceiver 24a coincides with the start time of the laser pulse emitted from the phototransmitter 22a of the current alternation period T, and the exposure off time coincides with the off time of the laser pulse emitted from the phototransmitter 22a of the current alternation period T; the exposure start time of the photoreceiver 24b coincides with the cut-off time of the laser pulse emitted by the light emitter 22a of the current alternation period T, and the exposure cut-off time coincides with the start time of the laser pulse emitted by the light emitter 22c of the current alternation period T; the exposure start time of the photoreceiver 24c coincides with the start time of the laser pulse emitted by the light emitter 22c of the current alternation period T, and the exposure stop time coincides with the stop time of the laser pulse emitted by the light emitter 22c of the current alternation period T; the exposure start time of the light receiver 24d coincides with the cut-off time at which the light emitter 22c of the current alternation period T emits the laser pulse, and the exposure cut-off time coincides with the start time at which the light emitter 22a of the next alternation period T emits the laser pulse. The optical receiver 24a can only receive the laser pulses emitted by the optical transmitter 22a, the optical receiver 24b can only receive the laser pulses emitted by the optical transmitter 22b, the optical receiver 24c can only receive the laser pulses emitted by the optical transmitter 22c, and the optical receiver 24d can only receive the laser pulses emitted by the optical transmitter 22 d. As shown in fig. 5(b), in another example, the exposure start time of the photoreceiver 24a coincides with the start time of the laser pulse emitted from the phototransmitter 22a of the current alternation period T, and the exposure off time coincides with the start time of the laser pulse emitted from the phototransmitter 22b of the current alternation period T; the exposure start time of the photoreceiver 24b coincides with the start time of the laser pulse emitted by the phototransmitter 22b of the current alternation period T, and the exposure stop time coincides with the start time of the laser pulse emitted by the phototransmitter 22c of the current alternation period T; the exposure start time of the photoreceiver 24c coincides with the start time of the laser pulse emitted by the phototransmitter 22c of the current alternation period T, and the exposure stop time coincides with the start time of the laser pulse emitted by the phototransmitter 22d of the current alternation period T; the exposure start time of the photoreceiver 24d coincides with the start time of the laser pulse emitted from the phototransmitter 22d of the current alternation period T, and the exposure stop time coincides with the start time of the laser pulse emitted from the phototransmitter 22a of the next alternation period T. 
The optical receiver 24a can only receive the laser pulses emitted by the optical transmitter 22a, the optical receiver 24b can only receive the laser pulses emitted by the optical transmitter 22b, the optical receiver 24c can only receive the laser pulses emitted by the optical transmitter 22c, and the optical receiver 24d can only receive the laser pulses emitted by the optical transmitter 22 d.
(2) The light receiver 24a, the light receiver 24b, the light receiver 24c, and the light receiver 24d are exposed sequentially and at predetermined time intervals. Specifically, as shown in fig. 6(a), in one example, the exposure start time and the exposure off time of the light receiver 24a coincide with the start time and the off time, respectively, of the laser pulse emitted by the light emitter 22a of the current alternation period T; the exposure start time and the exposure stop time of the light receiver 24b are respectively consistent with the start time and the stop time of the laser pulse emitted by the light emitter 22b of the current alternating period T; the exposure start time and the exposure stop time of the photoreceiver 24c coincide with the start time and the stop time of the laser pulse emitted by the phototransmitter 22c of the current alternation period T, respectively; the exposure start timing and the exposure off timing of the light receiver 24d coincide with the start timing and the off timing, respectively, at which the light emitter 22d of the current alternation period T emits the laser pulse. The exposure off time of the light receiver 24a is separated from the exposure start time of the light receiver 24b of the current alternating period T by a predetermined time Δ T5, the exposure off time of the light receiver 24b is separated from the exposure start time of the light receiver 24c of the current alternating period T by a predetermined time Δ T6, the exposure off time of the light receiver 24c is separated from the exposure start time of the light receiver 24d of the current alternating period T by a predetermined time Δ T7, and the exposure off time of the light receiver 24d is separated from the exposure start time of the light receiver 24a of the next alternating period T by a predetermined time Δ T8. Δ t5, Δ t6, Δ t7, and Δ t8 may all be equal, or all be unequal, or be partially equal or partially unequal. The optical receiver 24a can only receive the laser pulses emitted by the optical transmitter 22a, the optical receiver 24b can only receive the laser pulses emitted by the optical transmitter 22b, the optical receiver 24c can only receive the laser pulses emitted by the optical transmitter 22c, and the optical receiver 24d can only receive the laser pulses emitted by the optical transmitter 22 d. As shown in fig. 6(b), in another example, the exposure start time and the exposure off time of the light receiver 24a coincide with the start time and the off time, respectively, of the laser pulse emitted from the light emitter 22a of the current alternation period T; the exposure start time of the optical receiver 24b is less than the start time of the optical transmitter 22b emitting the laser pulse in the current alternation period T, and the exposure stop time is less than the start time of the optical transmitter 22c emitting the laser pulse in the current alternation period T; the exposure start time and the exposure stop time of the photoreceiver 24c coincide with the start time and the stop time of the laser pulse emitted by the phototransmitter 22c of the current alternation period T, respectively; the exposure start time of the light receiver 24d is smaller than the start time of the laser pulse emitted from the light emitter 22d of the current alternation period T, and the exposure stop time is smaller than the start time of the laser pulse emitted from the light emitter 22a of the next alternation period T. 
The exposure off time of the light receiver 24a is separated from the exposure start time of the light receiver 24b of the current alternating period T by a predetermined time Δ T9, the exposure off time of the light receiver 24b is separated from the exposure start time of the light receiver 24c of the current alternating period T by a predetermined time Δ T10, the exposure off time of the light receiver 24c is separated from the exposure start time of the light receiver 24d of the current alternating period T by a predetermined time Δ T11, and the exposure off time of the light receiver 24d is separated from the exposure start time of the light receiver 24a of the next alternating period T by a predetermined time Δ T12. Δ t9, Δ t10, Δ t11, and Δ t12 may all be equal, or all be unequal, or be partially equal or partially unequal. The optical receiver 24a can only receive the laser pulses emitted by the optical transmitter 22a, the optical receiver 24b can only receive the laser pulses emitted by the optical transmitter 22b, the optical receiver 24c can only receive the laser pulses emitted by the optical transmitter 22c, and the optical receiver 24d can only receive the laser pulses emitted by the optical transmitter 22 d. As shown in fig. 6(c), in yet another example, the exposure start time of the photoreceiver 24a is greater than the cut-off time of the laser pulse emitted by the phototransmitter 22d of the previous alternation period T, and the exposure cut-off time is less than the start time of the laser pulse emitted by the phototransmitter 22b of the current alternation period T; the exposure start timing of the light receiver 24a is greater than the exposure end timing of the light receiver 24d of the previous alternation period T, and the exposure end timing is less than the exposure start timing of the light receiver 24b of the current alternation period T. The exposure starting time of the optical receiver 24b is greater than the cut-off time of the laser pulse emitted by the optical transmitter 22a in the current alternation period T, and the exposure cut-off time is less than the starting time of the laser pulse emitted by the optical transmitter 22c in the current alternation period T; the exposure start timing of the light receiver 24b is greater than the exposure end timing of the light receiver 24a of the current alternation period T, and the exposure end timing is less than the exposure start timing of the light receiver 24c of the current alternation period T. The exposure start time of the optical receiver 24c is greater than the cut-off time of the laser pulse emitted by the optical transmitter 22b of the current alternation period T, and the exposure cut-off time is less than the start time of the laser pulse emitted by the optical transmitter 22d of the current alternation period T; the exposure start timing of the light receiver 24c is greater than the exposure end timing of the light receiver 24b of the current alternation period T, and the exposure end timing is less than the exposure start timing of the light receiver 24d of the current alternation period T. 
The exposure start time of the optical receiver 24d is greater than the cut-off time of the laser pulse emitted by the optical transmitter 22c in the current alternation period T, and the exposure cut-off time is less than the start time of the laser pulse emitted by the optical transmitter 22a in the next alternation period T; the exposure start timing of the light receiver 24d is greater than the exposure end timing of the light receiver 24c of the current alternation period T, and the exposure end timing is less than the exposure start timing of the light receiver 24a of the next alternation period T. The exposure off time of the light receiver 24a is separated from the exposure start time of the light receiver 24b of the current alternating period T by a predetermined time Δ T9, the exposure off time of the light receiver 24b is separated from the exposure start time of the light receiver 24c of the current alternating period T by a predetermined time Δ T10, the exposure off time of the light receiver 24c is separated from the exposure start time of the light receiver 24d of the current alternating period T by a predetermined time Δ T11, and the exposure off time of the light receiver 24d is separated from the exposure start time of the light receiver 24a of the next alternating period T by a predetermined time Δ T12. Δ t9, Δ t10, Δ t11, and Δ t12 may all be equal, or all be unequal, or be partially equal or partially unequal.
In the control mode in which the plurality of light emitters 22 in the plurality of time-of-flight components 20 sequentially emit laser pulses at predetermined time intervals, the frame rate at which the time-of-flight components 20 acquire the initial depth images is lower, so this mode is suitable for scenes with a lower requirement on the frame rate of initial depth image acquisition and helps reduce the power consumption of the electronic device 100.
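A companion sketch with assumed numbers for this interval-based mode: the alternating period T also contains the predetermined times Δt5 to Δt8, so the frame rate drops and the emitters' duty cycle (and thus power draw) is lower than in the back-to-back schedule sketched above.

    pulse_ms = [5.0, 5.0, 5.0, 5.0]     # assumed pulse duration per emitter
    gaps_ms = [2.0, 2.0, 2.0, 2.0]      # assumed predetermined times Δt5, Δt6, Δt7, Δt8
    period_T_ms = sum(pulse_ms) + sum(gaps_ms)
    frame_rate = 1000.0 / period_T_ms
    duty_cycle = sum(pulse_ms) / period_T_ms
    print(f"T = {period_T_ms} ms, about {frame_rate:.1f} fps, emitter duty cycle = {duty_cycle:.0%}")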
In addition, as described above, when the adjacent directional light emitters 22 emit laser pulses in a time-division manner and the adjacent directional light receivers 24 expose in a time-division manner, the first directional light emitter 22a and the third directional light emitter 22c may emit laser pulses at the same time, the first directional light receiver 24a and the third directional light receiver 24c may expose at the same time, the second directional light emitter 22b and the fourth directional light emitter 22d may emit laser pulses at the same time, and the second directional light receiver 24b and the fourth directional light receiver 24d may expose at the same time. Referring to fig. 7, the time of the laser pulse emitted by the light emitter 22a (i.e. the time of the laser pulse emitted by the light emitter 22 c) and the time of the laser pulse emitted by the light emitter 22b (i.e. the time of the laser pulse emitted by the light emitter 22 d) together form an alternating period T (each of the predetermined times Δ T may also be included in the alternating period T). The optical transmitter 22a, the optical transmitter 22b, the optical transmitter 22c and the optical transmitter 22d can be controlled as two optical transmitters 22, and the optical receiver 24a, the optical receiver 24b, the optical receiver 24c and the optical receiver 24d can also be controlled as two optical receivers 24, and the control manner is explained with reference to the aforementioned fig. 3 to fig. 6 and the corresponding explanation thereof, which is not described in detail herein.
Referring to fig. 1 and 2, a camera assembly 30 is disposed on the body 10. The number of camera assemblies 30 may be multiple, one time-of-flight assembly 20 for each camera assembly 30. For example, when the number of time-of-flight components 20 is four, the number of camera assemblies 30 is also four, and the four camera assemblies 30 are disposed in the first orientation, the second orientation, the third orientation, and the fourth orientation, respectively.
The plurality of camera assemblies 30 are each connected to the application processor 50. Each camera assembly 30 is used to capture a scene image of a subject and output it to the application processor 50. In the present embodiment, the four camera assemblies 30 are respectively used to capture the scene image of the subject in the first orientation, the scene image of the subject in the second orientation, the scene image of the subject in the third orientation, and the scene image of the subject in the fourth orientation, and to output the captured images to the application processor 50. It will be appreciated that the field angle of each camera assembly 30 is the same as, or approximately the same as, that of the light receiver 24 of the corresponding time-of-flight component 20, so that each scene image can be better matched with the corresponding initial depth image.
The camera assembly 30 may be a visible light camera 32 or an infrared light camera 34. When camera assembly 30 is a visible light camera 32, the scene image is a visible light image; when camera assembly 30 is an infrared camera 34, the scene image is an infrared light image.
Referring to FIG. 2, the microprocessor 40 may be a processing chip. The number of microprocessors 40 may be plural, one microprocessor 40 for each time-of-flight component 20. For example, in the present embodiment, when the number of time-of-flight components 20 is four, the number of microprocessors 40 is also four. Each microprocessor 40 is connected to both the light emitter 22 and the light receiver 24 in the corresponding time-of-flight component 20. Each microprocessor 40 can drive the corresponding light emitter 22 to emit laser pulses through a driving circuit, and under the control of the multiple microprocessors 40, multiple light emitters 22 can emit laser pulses simultaneously. Each microprocessor 40 is also configured to provide clock information for receiving laser pulses to the corresponding light receiver 24 so as to expose the light receiver 24, and under the control of the multiple microprocessors 40, multiple light receivers 24 can be exposed simultaneously. Each microprocessor 40 is further configured to obtain an initial depth image from the laser pulses emitted by the corresponding light emitter 22 and the laser pulses received by the corresponding light receiver 24. For example, the four microprocessors 40 respectively obtain an initial depth image P1 from the laser pulses emitted by the light emitter 22a and received by the light receiver 24a, an initial depth image P2 from the laser pulses emitted by the light emitter 22b and received by the light receiver 24b, an initial depth image P3 from the laser pulses emitted by the light emitter 22c and received by the light receiver 24c, and an initial depth image P4 from the laser pulses emitted by the light emitter 22d and received by the light receiver 24d (as shown in the upper part of FIG. 9). Each microprocessor 40 may also perform algorithm processing such as tiling, distortion correction, and self-calibration on the initial depth image to improve its quality.
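The patent does not spell out the on-chip depth computation; the sketch below only illustrates the general direct time-of-flight relation assumed here, namely that the depth of a pixel is half of the measured round-trip delay of the laser pulse multiplied by the speed of light.

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def depth_from_round_trip(delay_s):
        """Distance of the subject from the light receiver for a measured
        round-trip delay of the laser pulse."""
        return SPEED_OF_LIGHT * delay_s / 2.0

    # a 20-nanosecond round trip corresponds to roughly 3 metres
    print(f"{depth_from_round_trip(20e-9):.2f} m")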
In another embodiment, as shown in FIG. 8, the number of microprocessors 40 may also be one. At this point, the microprocessor 40 is simultaneously connected to the optical transmitters 22 and the optical receivers 24 in the plurality of time-of-flight components 20. Specifically, microprocessor 40 is simultaneously connected to optical transmitter 22a, optical receiver 24a, optical transmitter 22b, optical receiver 24b, optical transmitter 22c, optical receiver 24c, optical transmitter 22d, and optical receiver 24 d. The microprocessor 40 can control a plurality of different driving circuits in a time-sharing manner to respectively drive the plurality of light emitters 22 to emit laser pulses, and can also provide clock information for receiving the laser pulses to the plurality of light receivers 24 in a time-sharing manner to enable the plurality of light receivers 24 to expose in a time-sharing manner, and obtain a plurality of initial depth images according to the laser pulses emitted by the plurality of light emitters 22 and the laser pulses received by the plurality of light receivers 24 in sequence. For example, the microprocessor 40 obtains an initial depth image P1 according to the laser pulses emitted by the light emitter 22a and received by the light receiver 24a, obtains an initial depth image P2 according to the laser pulses emitted by the light emitter 22b and received by the light receiver 24b, obtains an initial depth image P3 according to the laser pulses emitted by the light emitter 22c and received by the light receiver 24c, and obtains an initial depth image P4 according to the laser pulses emitted by the light emitter 22d and received by the light receiver 24d (as shown in the upper part of fig. 9). The plurality of microprocessors 40 have a faster processing speed and a smaller delay time than one microprocessor 40. However, one microprocessor 40 is advantageous in reducing the size of the electronic device 100 and in reducing the manufacturing cost of the electronic device 100, compared to a plurality of microprocessors 40.
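A minimal sketch of the single-microprocessor case, with hypothetical hardware-access callbacks (drive, expose, and read_depth are illustrative names, not an API from the patent): one controller time-shares the four drive circuits and receiver clocks and collects the initial depth images P1 to P4 one after another.

    def acquire_initial_depth_images(components, drive, expose, read_depth):
        """components: iterable of (emitter_id, receiver_id) pairs, e.g.
        [("22a", "24a"), ("22b", "24b"), ("22c", "24c"), ("22d", "24d")].
        drive, expose, and read_depth are assumed hardware-access callbacks."""
        images = []
        for emitter_id, receiver_id in components:
            drive(emitter_id)                       # drive this emitter's laser pulses
            expose(receiver_id)                     # clock this receiver's exposure
            images.append(read_depth(receiver_id))  # P1, P2, P3, P4 in turn
        return images

    # stub callbacks, for illustration only
    images = acquire_initial_depth_images(
        [("22a", "24a"), ("22b", "24b"), ("22c", "24c"), ("22d", "24d")],
        drive=lambda e: None, expose=lambda r: None,
        read_depth=lambda r: f"initial depth image from {r}")
    print(images)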
When there are a plurality of microprocessors 40, each of the plurality of microprocessors 40 is connected to the application processor 50 to transmit the initial depth image to the application processor 50. When there is one microprocessor 40, the microprocessor 40 is connected to the application processor 50 to transmit the initial depth images to the application processor 50. In one embodiment, the microprocessor 40 may be connected to the application processor 50 through a Mobile Industry Processor Interface (MIPI); specifically, the microprocessor 40 is connected to a Trusted Execution Environment (TEE) of the application processor 50 through the MIPI, so that the data (the initial depth images) in the microprocessor 40 are transmitted directly to the TEE, which improves the security of information in the electronic device 100. Here, both the code and the memory area in the Trusted Execution Environment are controlled by an access control unit and cannot be accessed by a program in the untrusted execution environment (REE); both the Trusted Execution Environment and the untrusted execution environment may be formed in the application processor 50.
The application processor 50 may function as a system of the electronic device 100. The application processor 50 may reset the microprocessor 40, wake the microprocessor 40, debug the microprocessor 40, and so on. The application processor 50 may also be connected to a plurality of electronic components of the electronic device 100 and control the plurality of electronic components to operate according to a predetermined mode, for example, the application processor 50 is connected to the visible light camera 32 and the infrared light camera 34 to control the visible light camera 32 and the infrared light camera 34 to capture a visible light image and an infrared light image and process the visible light image and the infrared light image; when the electronic apparatus 100 includes a display screen, the application processor 50 may control the display screen to display a predetermined screen; the application processor 50 may also control an antenna of the electronic device 100 to transmit or receive predetermined data or the like.
Referring to fig. 9, in an embodiment, the application processor 50 is configured to synthesize a plurality of initial depth images acquired by the plurality of microprocessors 40 into a frame of panoramic depth image according to the field angle of the optical receiver 24, or synthesize a plurality of initial depth images sequentially acquired by one microprocessor 40 into a frame of panoramic depth image according to the field angle of the optical receiver 24.
Specifically, referring to FIG. 1, a rectangular coordinate system XOY is established with the center of the body 10 as the origin O, the horizontal axis as the X axis, and the vertical axis as the Y axis. In the rectangular coordinate system XOY, the field of view of the light receiver 24a lies between 45 degrees and 315 degrees (rotating clockwise, the same applies below), the field of view of the light receiver 24b lies between 315 degrees and 225 degrees, the field of view of the light receiver 24c lies between 225 degrees and 135 degrees, and the field of view of the light receiver 24d lies between 135 degrees and 45 degrees. The application processor 50 then sequentially stitches the initial depth image P1, the initial depth image P2, the initial depth image P3, and the initial depth image P4 into one frame of 360-degree panoramic depth image P1234 according to the field angles of the four light receivers 24, for subsequent use of the depth information.
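A minimal sketch of this sector-based stitching under simplifying assumptions (four equal 90-degree sectors, equal image widths, no distortion correction): each initial depth image is resampled into its azimuth sector of a single panoramic depth map, in the order P1 to P4. The following paragraphs describe the more general coordinate-based approach.

    import numpy as np

    def stitch_by_field_angle(initial_depths, sector_deg=90, cols_per_deg=4):
        """initial_depths: list of 2-D depth arrays [P1, P2, P3, P4], one per orientation."""
        rows = initial_depths[0].shape[0]
        panorama = np.zeros((rows, 360 * cols_per_deg), dtype=np.float32)
        for i, depth in enumerate(initial_depths):
            start = i * sector_deg * cols_per_deg
            width = sector_deg * cols_per_deg
            # resample this initial depth image onto its sector's column span
            cols = np.linspace(0, depth.shape[1] - 1, width).astype(int)
            panorama[:, start:start + width] = depth[:, cols]
        return panorama

    # four fake 240 x 320 depth maps stitched into one 240 x 1440 panorama
    fake = [np.full((240, 320), float(i + 1)) for i in range(4)]
    print(stitch_by_field_angle(fake).shape)  # (240, 1440)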
In the initial depth image obtained by the microprocessor 40 according to the laser pulses emitted by the light emitter 22 and the laser pulses received by the light receiver 24, the depth information of each pixel is the distance between the subject in the corresponding orientation and the light receiver 24 in that orientation. That is, the depth information of each pixel in the initial depth image P1 is the distance between the subject in the first orientation and the light receiver 24a; the depth information of each pixel in the initial depth image P2 is the distance between the subject in the second orientation and the light receiver 24b; the depth information of each pixel in the initial depth image P3 is the distance between the subject in the third orientation and the light receiver 24c; and the depth information of each pixel in the initial depth image P4 is the distance between the subject in the fourth orientation and the light receiver 24d. In the process of stitching the plurality of initial depth images of the plurality of orientations into one frame of 360-degree panoramic depth image, the depth information of each pixel in each initial depth image is first converted into unified depth information, where the unified depth information represents the distance between the subject in each orientation and a common reference position. After the depth information is converted into the unified depth information, it is convenient for the application processor 50 to stitch the initial depth images according to the unified depth information.
Specifically, a reference coordinate system is selected first. The reference coordinate system may be the image coordinate system of the light receiver 24 in a certain orientation, or another coordinate system may be selected. Taking fig. 10 as an example, the xo-yo-zo coordinate system is taken as the reference coordinate system. In fig. 10, the coordinate system xa-ya-za is the image coordinate system of the light receiver 24a, the coordinate system xb-yb-zb is the image coordinate system of the light receiver 24b, the coordinate system xc-yc-zc is the image coordinate system of the light receiver 24c, and the coordinate system xd-yd-zd is the image coordinate system of the light receiver 24d. The application processor 50 converts the depth information of each pixel in the initial depth image P1 into unified depth information according to the rotation matrix and the translation matrix between the coordinate system xa-ya-za and the reference coordinate system xo-yo-zo; converts the depth information of each pixel in the initial depth image P2 into unified depth information according to the rotation matrix and the translation matrix between the coordinate system xb-yb-zb and the reference coordinate system xo-yo-zo; converts the depth information of each pixel in the initial depth image P3 into unified depth information according to the rotation matrix and the translation matrix between the coordinate system xc-yc-zc and the reference coordinate system xo-yo-zo; and converts the depth information of each pixel in the initial depth image P4 into unified depth information according to the rotation matrix and the translation matrix between the coordinate system xd-yd-zd and the reference coordinate system xo-yo-zo.
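The conversion described above can be pictured with a short sketch. The following Python/NumPy fragment is only a minimal illustration of the idea, assuming a pinhole camera model with hypothetical intrinsic parameters (fx, fy, cx, cy) and a calibrated rotation matrix R and translation vector t for each light receiver 24; none of these values are specified in the present application.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into 3-D points in the receiver's
    own image coordinate system, assuming a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)  # (h*w, 3)

def to_reference_frame(points, R, t):
    """Convert points from one receiver's coordinate system into the reference
    coordinate system xo-yo-zo using rotation R and translation t."""
    return points @ R.T + t

# Example with made-up calibration values (assumptions, not from the patent):
R_a = np.eye(3)                      # rotation of receiver 24a w.r.t. the reference frame
t_a = np.array([0.0, 0.0, 0.05])     # translation in meters
depth_P1 = np.full((180, 240), 2.0)  # a flat 2 m scene as a stand-in for P1
unified_P1 = to_reference_frame(depth_to_points(depth_P1, 200, 200, 120, 90), R_a, t_a)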
After the depth information conversion is completed, the plurality of initial depth images are all located in the unified reference coordinate system, and each pixel point of each initial depth image corresponds to one coordinate (xo, yo, zo); the stitching of the initial depth images can then be done by coordinate matching. For example, a certain pixel point Pa in the initial depth image P1 has the coordinates (xo1, yo1, zo1), and a certain pixel point Pb in the initial depth image P2 also has the coordinates (xo1, yo1, zo1). Since Pa and Pb have the same coordinate values in the current reference coordinate system, the pixel point Pa and the pixel point Pb are the same point, and when the initial depth image P1 and the initial depth image P2 are stitched, the pixel point Pa needs to coincide with the pixel point Pb. In this way, the application processor 50 can stitch the plurality of initial depth images through the matching relationship of the coordinates and obtain the 360-degree panoramic depth image.
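As an illustration of the coordinate-matching step, the following sketch merges point sets that have already been converted into the reference coordinate system, treating points whose coordinates agree within a small error limit as the same scene point (the Pa/Pb case above). The tolerance value is an assumption chosen only for illustration.

import numpy as np

def stitch_by_coordinates(point_sets, tol=0.01):
    """Merge point sets already expressed in the reference coordinate system.
    Points whose coordinates agree within tol (meters) are treated as the same
    scene point, mirroring the Pa/Pb overlap rule."""
    merged = np.vstack(point_sets)
    keys = np.round(merged / tol).astype(np.int64)   # quantize to the error limit
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(unique_idx)]               # keep one point per matched coordinate

# e.g. panorama = stitch_by_coordinates([unified_P1, unified_P2, unified_P3, unified_P4])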
It should be noted that stitching the initial depth images based on the matching relationship of the coordinates requires the resolution of the initial depth images to be greater than a preset resolution. It can be understood that if the resolution of an initial depth image is low, the accuracy of the coordinates (xo, yo, zo) will also be relatively low; in this case, when matching is done directly from the coordinates, the point Pa and the point Pb may not actually coincide but differ by an offset, and the value of the offset may exceed the error limit. If the resolution of the image is high, the accuracy of the coordinates (xo, yo, zo) will be relatively high; in this case, when matching is done directly from the coordinates, even if the point Pa and the point Pb do not actually coincide but differ by an offset, the value of the offset is smaller than the error limit, that is, the offset is within the allowable error range and does not greatly affect the stitching of the initial depth images.
It is to be understood that the following embodiments may adopt the above-mentioned manner to splice or synthesize two or more initial depth images, and are not described one by one.
The application processor 50 may further combine the plurality of initial depth images and the corresponding plurality of visible light images into a three-dimensional scene image for display, for viewing by a user. For example, the plurality of visible light images are a visible light image V1, a visible light image V2, a visible light image V3, and a visible light image V4, respectively. The application processor 50 synthesizes the initial depth image P1 with the visible light image V1, the initial depth image P2 with the visible light image V2, the initial depth image P3 with the visible light image V3, and the initial depth image P4 with the visible light image V4, and then stitches the four synthesized images to obtain one frame of 360-degree three-dimensional scene image. Alternatively, the application processor 50 first stitches the initial depth image P1, the initial depth image P2, the initial depth image P3, and the initial depth image P4 into one frame of 360-degree panoramic depth image, stitches the visible light image V1, the visible light image V2, the visible light image V3, and the visible light image V4 into one frame of 360-degree panoramic visible light image, and then synthesizes the panoramic depth image and the panoramic visible light image into a 360-degree three-dimensional scene image.
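A minimal sketch of synthesizing a depth image with its visible light image is given below. It assumes the depth image and the visible light image are already registered pixel-to-pixel (the same resolution and viewpoint), which the present application does not state explicitly, and simply attaches a color to every back-projected point.

import numpy as np

def fuse_depth_and_color(points, color_image):
    """Attach the RGB value of each visible-light pixel to the corresponding
    3-D point, producing an (N, 6) array of x, y, z, r, g, b."""
    colors = color_image.reshape(-1, 3).astype(np.float32) / 255.0
    return np.hstack([points, colors])

# P1 + V1, P2 + V2, ... can be fused individually and then stitched, or the
# panoramic depth image and panoramic visible light image can be fused in one pass:
# scene_1 = fuse_depth_and_color(unified_P1, visible_V1)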
Referring to fig. 11, in an embodiment, the application processor 50 is configured to identify the object to be shot according to a plurality of initial depth images acquired by a plurality of microprocessors 40 and a plurality of scene images acquired by a plurality of camera assemblies 30, or identify the object to be shot according to a plurality of initial depth images acquired by one microprocessor 40 and a plurality of scene images acquired by a plurality of camera assemblies 30 in sequence.
Specifically, when the scene images are infrared light images, the plurality of infrared light images may be an infrared light image I1, an infrared light image I2, an infrared light image I3, and an infrared light image I4, respectively. The application processor 50 identifies the subject in the first orientation according to the initial depth image P1 and the infrared light image I1, identifies the subject in the second orientation according to the initial depth image P2 and the infrared light image I2, identifies the subject in the third orientation according to the initial depth image P3 and the infrared light image I3, and identifies the subject in the fourth orientation according to the initial depth image P4 and the infrared light image I4. When the scene images are visible light images, the plurality of visible light images are a visible light image V1, a visible light image V2, a visible light image V3, and a visible light image V4, respectively. The application processor 50 identifies the subject in the first orientation according to the initial depth image P1 and the visible light image V1, identifies the subject in the second orientation according to the initial depth image P2 and the visible light image V2, identifies the subject in the third orientation according to the initial depth image P3 and the visible light image V3, and identifies the subject in the fourth orientation according to the initial depth image P4 and the visible light image V4.
When the identification of the subject is face recognition, the application processor 50 uses the infrared light image as the scene image, which gives higher recognition accuracy. The process by which the application processor 50 performs face recognition according to the initial depth image and the infrared light image may be as follows:
First, face detection is performed according to the infrared light image to determine a target face region. Because the infrared light image includes detail information of the scene, after the infrared light image is acquired, face detection can be performed according to the infrared light image to detect whether the infrared light image contains a human face. If the infrared light image contains a human face, the target face region where the human face is located is extracted from the infrared light image.
Then, living body detection processing is performed on the target face region according to the initial depth image. Because each initial depth image corresponds to an infrared light image and includes the depth information of the corresponding infrared light image, the depth information corresponding to the target face region can be acquired according to the initial depth image. Further, since a living face is stereoscopic while a face displayed on, for example, a picture or a screen is planar, it can be determined whether the target face region is stereoscopic or planar according to the acquired depth information of the target face region, thereby performing living body detection on the target face region.
If the living body detection succeeds, target face attribute parameters corresponding to the target face region are acquired, and face matching processing is performed on the target face region in the infrared light image according to the target face attribute parameters to obtain a face matching result. The target face attribute parameters are parameters capable of representing the attributes of the target face, and the target face can be identified and matched according to them. The target face attribute parameters include, but are not limited to, a face deflection angle, face brightness parameters, facial feature parameters, skin quality parameters, geometric feature parameters, and the like. The electronic device 100 may store face attribute parameters for matching in advance. After the target face attribute parameters are acquired, they can be compared with the pre-stored face attribute parameters. If the target face attribute parameters match the pre-stored face attribute parameters, the face recognition passes.
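The three steps above can be summarized in the following hypothetical sketch. Here detect_face, extract_params, and match are placeholders for detectors and matchers that the present application does not specify, and the 5 mm depth-spread threshold used for the living body check is an illustrative assumption.

import numpy as np

def liveness_check(depth_roi, min_depth_std=0.005):
    """A living face is stereoscopic while a photo or screen is nearly planar,
    so a very small depth spread inside the face region suggests a spoof.
    The 5 mm threshold is an illustrative assumption."""
    return float(np.nanstd(depth_roi)) > min_depth_std

def recognize_face(ir_image, depth_image, stored_params,
                   detect_face, extract_params, match):
    """Hypothetical pipeline: detect_face, extract_params and match stand in
    for detectors and matchers not specified by the patent."""
    box = detect_face(ir_image)                        # step 1: target face region
    if box is None:
        return False
    x, y, w, h = box
    if not liveness_check(depth_image[y:y+h, x:x+w]):  # step 2: living body detection
        return False
    params = extract_params(ir_image[y:y+h, x:x+w])    # step 3: attribute parameters
    return match(params, stored_params)                # compare with pre-stored parameters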
It should be noted that the specific process of the application processor 50 performing face recognition according to the initial depth image and the infrared light image is not limited to this, for example, the application processor 50 may also assist in detecting a face contour according to the initial depth image to improve face recognition accuracy, and the like. The process of the application processor 50 performing face recognition based on the initial depth image and the visible light image is similar to the process of the application processor 50 performing face recognition based on the initial depth image and the infrared light image, and will not be further described herein.
Referring to fig. 11 and 12, the application processor 50 is further configured to combine at least two initial depth images acquired by the at least two microprocessors 40 into a frame of combined depth image according to the field angle of the optical receiver 24 when the recognition of the target fails according to the multiple initial depth images and the multiple scene images, combine at least two scene images acquired by the at least two camera assemblies 30 into a frame of combined scene image, and recognize the target according to the combined depth image and the combined scene image; alternatively, the application processor 50 is further configured to combine at least two initial depth images sequentially acquired by one microprocessor 40 into one merged depth image according to the field angle of the optical receiver 24 when the target fails to be identified according to the plurality of initial depth images and the plurality of scene images, combine at least two scene images acquired by at least two camera assemblies 30 into one merged scene image, and identify the target according to the merged depth image and the merged scene image.
Specifically, in the embodiment shown in fig. 11 and 12, since the field angle of the light receiver 24 of each time-of-flight component 20 is limited, and there may be a case where half of the human face is located in the initial depth image P2 and the other half is located in the initial depth image P3, the application processor 50 synthesizes the initial depth image P2 and the initial depth image P3 into one frame of merged depth image P23, and correspondingly synthesizes the infrared light image I2 and the infrared light image I3 (or the visible light image V2 and the visible light image V3) into one frame of merged scene image I23 (or V23), so as to re-identify the object to be photographed from the merged depth image P23 and the merged scene image I23 (or V23).
It is understood that when the subject is distributed in more initial depth images at the same time, the application processor 50 may synthesize more initial depth images (corresponding to different orientations) into one frame of merged depth image, and correspondingly synthesize more infrared light images (corresponding to different orientations) or visible light images (corresponding to different orientations) into one frame of merged scene image, so as to re-identify the subject.
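A possible control flow for this fallback is sketched below, assuming for simplicity that adjacent initial depth images and scene images have equal heights and can be merged by horizontal concatenation; recognize is a hypothetical callable that returns a result or None.

import numpy as np

def recognize_with_fallback(depth_images, scene_images, recognize):
    """Try each orientation first; on failure, merge adjacent orientations
    (simple horizontal concatenation, assuming equal image heights) and retry."""
    for d, s in zip(depth_images, scene_images):
        result = recognize(d, s)
        if result is not None:
            return result
    for i in range(len(depth_images) - 1):               # e.g. P2+P3 with I2+I3
        merged_d = np.hstack([depth_images[i], depth_images[i + 1]])
        merged_s = np.hstack([scene_images[i], scene_images[i + 1]])
        result = recognize(merged_d, merged_s)
        if result is not None:
            return result
    return None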
Referring to fig. 13 and 14, in one embodiment, the application processor 50 is configured to determine a distance variation between the subject and the electronic device 100 according to a plurality of initial depth images (corresponding to each time-of-flight component 20).
Specifically, each optical transmitter 22 may emit laser light multiple times, and correspondingly, each optical receiver 24 may be exposed multiple times. When the number of the microprocessors 40 is multiple, each microprocessor 40 processes the laser pulses which are transmitted by the corresponding light emitter 22 for multiple times and the laser pulses which are received by the light receiver 24 for multiple times to obtain multiple initial depth images; when the number of the microprocessors 40 is one, one microprocessor 40 sequentially processes the laser pulses transmitted multiple times by the plurality of light emitters 22 and the laser pulses received multiple times by the plurality of light receivers 24 to obtain a plurality of initial depth images.
For example, at a first time t1, the light emitter 22a emits laser pulses and the light receiver 24a receives the laser pulses; at a second time t2, the light emitter 22b emits laser pulses and the light receiver 24b receives the laser pulses; at a third time t3, the light emitter 22c emits laser pulses and the light receiver 24c receives the laser pulses; at a fourth time t4, the light emitter 22d emits laser pulses and the light receiver 24d receives the laser pulses (the first time t1, the second time t2, the third time t3, and the fourth time t4 are located in the same alternating period T). The plurality of microprocessors 40 correspondingly obtain an initial depth image P11, an initial depth image P21, an initial depth image P31, and an initial depth image P41, or one microprocessor 40 sequentially obtains the initial depth image P11, the initial depth image P21, the initial depth image P31, and the initial depth image P41. At a fifth time t5, the light emitter 22a emits laser pulses and the light receiver 24a receives the laser pulses; at a sixth time t6, the light emitter 22b emits laser pulses and the light receiver 24b receives the laser pulses; at a seventh time t7, the light emitter 22c emits laser pulses and the light receiver 24c receives the laser pulses; at an eighth time t8, the light emitter 22d emits laser pulses and the light receiver 24d receives the laser pulses (the fifth time t5, the sixth time t6, the seventh time t7, and the eighth time t8 are located in the same alternating period T). The plurality of microprocessors 40 correspondingly obtain an initial depth image P12, an initial depth image P22, an initial depth image P32, and an initial depth image P42, or one microprocessor 40 sequentially obtains the initial depth image P12, the initial depth image P22, the initial depth image P32, and the initial depth image P42. Then, the application processor 50 determines the distance change between the subject in the first orientation and the electronic device 100 according to the initial depth image P11 and the initial depth image P12; determines the distance change between the subject in the second orientation and the electronic device 100 according to the initial depth image P21 and the initial depth image P22; determines the distance change between the subject in the third orientation and the electronic device 100 according to the initial depth image P31 and the initial depth image P32; and determines the distance change between the subject in the fourth orientation and the electronic device 100 according to the initial depth image P41 and the initial depth image P42.
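One way to picture the time-shared acquisition in the example above is the following sketch of a single alternating period T. The driver objects with emit(), expose(), stop(), and compute_depth() methods, and the slot duration, are hypothetical and stand in for hardware interfaces that the present application does not define.

import time

def acquire_alternating_period(tof_assemblies, slot_seconds=0.01):
    """One alternating period T: each time-of-flight assembly (a, b, c, d)
    emits and exposes in its own time slot; the others stay off, so receivers
    in adjacent orientations never see each other's pulses."""
    initial_depth_images = []
    for assembly in tof_assemblies:          # slots t1, t2, t3, t4 within one period T
        assembly.emitter.emit()              # light emitter 22x fires laser pulses
        frame = assembly.receiver.expose()   # light receiver 24x is exposed
        assembly.emitter.stop()
        initial_depth_images.append(assembly.compute_depth(frame))
        time.sleep(slot_seconds)             # optional guard interval between slots
    return initial_depth_images              # e.g. [P11, P21, P31, P41]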
It is understood that, since the depth information of the subject is included in the initial depth image, the application processor 50 may determine a distance change between the subject corresponding to the orientation and the electronic apparatus 100 from a depth information change at a plurality of consecutive times.
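A minimal sketch of such a judgment is given below; it compares the median depth of the same orientation at two times, and the 5 cm threshold is an illustrative assumption rather than a value taken from the present application.

import numpy as np

def distance_change(depth_t1, depth_t2, min_change=0.05):
    """Compare two initial depth images of the same orientation (e.g. P11 and
    P12).  Returns 'closer', 'farther' or 'steady'; the median serves as a
    simple robust distance estimate and 5 cm is an illustrative threshold."""
    d1 = float(np.nanmedian(depth_t1))
    d2 = float(np.nanmedian(depth_t2))
    if d2 < d1 - min_change:
        return "closer"
    if d2 > d1 + min_change:
        return "farther"
    return "steady"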
Referring to fig. 15, the application processor 50 is further configured to, when determination of the distance change according to the plurality of initial depth images fails, synthesize at least two initial depth images acquired by at least two microprocessors 40 into one frame of merged depth image according to the field angle of the optical receiver 24; the application processor 50 continuously performs this synthesizing step to obtain multiple frames of consecutive merged depth images, and determines the distance change according to the multiple frames of merged depth images. Alternatively, the application processor 50 is further configured to, when determination of the distance change according to the plurality of initial depth images corresponding to each time-of-flight component 20 fails, synthesize at least two initial depth images corresponding to at least two time-of-flight components 20 sequentially acquired by one microprocessor 40 into one frame of merged depth image according to the field angle of the optical receiver 24; the application processor 50 continuously performs this synthesizing step to obtain multiple frames of consecutive merged depth images, and determines the distance change according to the multiple frames of merged depth images.
Specifically, in the embodiment shown in fig. 15, since the field angle of the optical receiver 24 of each time-of-flight component 20 is limited, there may be a case where half of a human face is located in the initial depth image P21 and the other half is located in the initial depth image P31. The application processor 50 therefore synthesizes the initial depth image P21 at the second time t2 and the initial depth image P31 at the third time t3 into one frame of merged depth image P231, correspondingly synthesizes the initial depth image P22 at the sixth time t6 and the initial depth image P32 at the seventh time t7 into one frame of merged depth image P232, and then re-determines the distance change according to the two merged depth images P231 and P232.
It is to be understood that, when the subject is distributed in more initial depth images at the same time, the application processor 50 may synthesize the more initial depth images (corresponding to different orientations) into one frame of a merged depth image, and continuously perform the synthesizing step for a plurality of time instants.
Referring to fig. 14, when it is determined according to the plurality of initial depth images, or according to the multiple frames of merged depth images, that the distance is decreasing, the application processor 50 increases the frame rate at which initial depth images are collected, from the plurality of initial depth images transmitted by the microprocessor 40, for determining the distance change. Specifically, when there are a plurality of microprocessors 40, the application processor 50 may increase the frame rate at which initial depth images are collected from the plurality of initial depth images transmitted by at least one of the microprocessors 40 to determine the distance change; when there is one microprocessor 40, the application processor 50 increases the frame rate at which initial depth images are collected from the initial depth images transmitted by that microprocessor 40 to determine the distance change.
It can be understood that when the distance between the subject and the electronic device 100 decreases, the electronic device 100 cannot predict whether the distance will continue to decrease; therefore, the application processor 50 may increase the frame rate at which initial depth images are collected from the plurality of initial depth images transmitted by the microprocessor 40 for determining the distance change, so as to follow the distance change more closely. Specifically, when determining that the distance corresponding to a certain orientation decreases, the application processor 50 may increase the frame rate at which the initial depth images used for determining the distance change in that orientation are collected from the plurality of initial depth images transmitted by the microprocessor 40.
For example, at a first time t1, a second time t2, a third time t3, and a fourth time t4, the plurality of microprocessors 40 respectively obtain, or one microprocessor 40 sequentially obtains, an initial depth image P11, an initial depth image P21, an initial depth image P31, and an initial depth image P41; at a fifth time t5, a sixth time t6, a seventh time t7, and an eighth time t8, the plurality of microprocessors 40 respectively obtain, or one microprocessor 40 sequentially obtains, an initial depth image P12, an initial depth image P22, an initial depth image P32, and an initial depth image P42; at a ninth time t9, a tenth time t10, an eleventh time t11, and a twelfth time t12, the plurality of microprocessors 40 respectively obtain, or one microprocessor 40 sequentially obtains, an initial depth image P13, an initial depth image P23, an initial depth image P33, and an initial depth image P43; at a thirteenth time t13, a fourteenth time t14, a fifteenth time t15, and a sixteenth time t16, the plurality of microprocessors 40 respectively obtain, or one microprocessor 40 sequentially obtains, an initial depth image P14, an initial depth image P24, an initial depth image P34, and an initial depth image P44. The first time t1, the second time t2, the third time t3, and the fourth time t4 are located in the same alternating period T; the fifth time t5, the sixth time t6, the seventh time t7, and the eighth time t8 are located in the same alternating period T; the ninth time t9, the tenth time t10, the eleventh time t11, and the twelfth time t12 are located in the same alternating period T; and the thirteenth time t13, the fourteenth time t14, the fifteenth time t15, and the sixteenth time t16 are located in the same alternating period T.
Under normal circumstances, the application processor 50 selects the initial depth image P11 and the initial depth image P14 to determine the distance change between the subject in the first orientation and the electronic device 100; selects the initial depth image P21 and the initial depth image P24 to determine the distance change between the subject in the second orientation and the electronic device 100; selects the initial depth image P31 and the initial depth image P34 to determine the distance change between the subject in the third orientation and the electronic device 100; and selects the initial depth image P41 and the initial depth image P44 to determine the distance change between the subject in the fourth orientation and the electronic device 100. The frame rate at which the application processor 50 acquires the initial depth images of each orientation is one frame at an interval of two frames, that is, one frame is selected out of every three frames.
When it is determined according to the initial depth image P11 and the initial depth image P14 that the distance corresponding to the first orientation decreases, the application processor 50 selects the initial depth image P11 and the initial depth image P13 to determine the distance change between the subject in the first orientation and the electronic device 100. The frame rate at which the application processor 50 acquires the initial depth images of the first orientation is changed to one frame at an interval of one frame, that is, one frame is selected out of every two frames. The frame rates of the other orientations remain unchanged, that is, the application processor 50 still selects the initial depth image P21 and the initial depth image P24 to determine the distance change, selects the initial depth image P31 and the initial depth image P34 to determine the distance change, and selects the initial depth image P41 and the initial depth image P44 to determine the distance change.
When it is determined according to the initial depth image P11 and the initial depth image P14 that the distance corresponding to the first orientation decreases, and it is determined according to the initial depth image P21 and the initial depth image P24 that the distance corresponding to the second orientation decreases, the application processor 50 selects the initial depth image P11 and the initial depth image P13 to determine the distance change between the subject in the first orientation and the electronic device 100, and selects the initial depth image P21 and the initial depth image P23 to determine the distance change between the subject in the second orientation and the electronic device 100; the frame rate at which the application processor 50 acquires the initial depth images of the first orientation and the second orientation is changed to one frame at an interval of one frame, that is, one frame is selected out of every two frames. The frame rates of the other orientations remain unchanged, that is, the application processor 50 still selects the initial depth image P31 and the initial depth image P34 to determine the distance change between the subject in the third orientation and the electronic device 100, and selects the initial depth image P41 and the initial depth image P44 to determine the distance change between the subject in the fourth orientation and the electronic device 100.
Of course, when determining that the distance corresponding to any orientation decreases, the application processor 50 may increase the frame rate at which the initial depth images used for determining the distance change are collected from the plurality of initial depth images of every orientation transmitted by the microprocessor 40. That is: when it is determined according to the initial depth image P11 and the initial depth image P14 that the distance between the subject in the first orientation and the electronic device 100 decreases, the application processor 50 selects the initial depth image P11 and the initial depth image P13 to determine the distance change between the subject in the first orientation and the electronic device 100, selects the initial depth image P21 and the initial depth image P23 to determine the distance change between the subject in the second orientation and the electronic device 100, selects the initial depth image P31 and the initial depth image P33 to determine the distance change between the subject in the third orientation and the electronic device 100, and selects the initial depth image P41 and the initial depth image P43 to determine the distance change between the subject in the fourth orientation and the electronic device 100.
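The frame-selection policy in the example above can be pictured with the following sketch; selecting one frame out of every three (or every two) is taken directly from the example, while wrapping it in a small helper function is purely illustrative.

def select_frames(frames, distance_decreasing):
    """Pick the initial depth images used for the distance-change judgement.
    Normally one frame out of every three is used (e.g. P11 and P14); once the
    distance for this orientation is found to decrease, one frame out of every
    two is used instead (e.g. P11 and P13)."""
    step = 2 if distance_decreasing else 3
    return frames[::step]

# frames_first_orientation = [P11, P12, P13, P14]
# select_frames(frames_first_orientation, False) -> [P11, P14]
# select_frames(frames_first_orientation, True)  -> [P11, P13]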
When the distance decreases, the application processor 50 may also determine the distance change in combination with the visible light image or the infrared light image. Specifically, the application processor 50 identifies the subject according to the visible light image or the infrared light image, and then determines the distance change according to the initial depth images at a plurality of times, so as to control the electronic device 100 to perform different operations for different subjects and different distances. Alternatively, when the distance decreases, the microprocessor 40 may increase the frequency at which the corresponding light emitter 22 emits laser pulses and the frequency at which the light receiver 24 is exposed.
It should be noted that the electronic device 100 of this embodiment may also serve as an external terminal, and may be fixedly or detachably mounted on a portable electronic device such as a mobile phone, a tablet computer, or a notebook computer, or fixedly mounted on a movable object such as a vehicle body (as shown in fig. 12 and 13), an unmanned aerial vehicle body, a robot body, or a ship body. In specific use, when the electronic device 100 synthesizes one frame of panoramic depth image from the plurality of initial depth images as described above, the panoramic depth image may be used for three-dimensional modeling, simultaneous localization and mapping (SLAM), and augmented reality display. When the electronic device 100 identifies the subject as described above, it may be applied to face recognition unlocking and payment on a portable electronic device, or to obstacle avoidance for a robot, a vehicle, an unmanned aerial vehicle, a ship, and the like. When the electronic device 100 determines the change in the distance between the subject and the electronic device 100 as described above, it may be applied to automatic traveling, object tracking, and the like of robots, vehicles, unmanned aerial vehicles, and ships.
Referring to fig. 2 and 16, the present application further provides a mobile platform 300. The mobile platform 300 includes a body 10 and a plurality of time-of-flight assemblies 20 disposed on the body 10. The plurality of time-of-flight assemblies 20 are respectively located at a plurality of different orientations of the body 10. Each time-of-flight assembly 20 includes an optical transmitter 22 and an optical receiver 24. The light emitter 22 is used for emitting laser pulses to the outside of the body 10, and the light receiver 24 is used for receiving the laser pulses emitted by the corresponding light emitter 22 and reflected by the subject. The light emitters 22 in the time-of-flight assemblies 20 in adjacent orientations emit the laser pulses in a time-shared manner, and the light receivers 24 in the time-of-flight assemblies 20 in adjacent orientations are exposed in a time-shared manner, so as to obtain a panoramic depth image.
Specifically, the body 10 may be a vehicle body, an unmanned aerial vehicle fuselage, a robot body, or a ship body.
Referring to fig. 16, when the body 10 is a vehicle body, the number of time-of-flight assemblies 20 is four, and the four time-of-flight assemblies 20 are respectively mounted on the four sides of the vehicle body, for example, the head, the tail, the left side, and the right side of the vehicle body. The vehicle body can drive the plurality of time-of-flight assemblies 20 to move along a road and construct a 360-degree panoramic depth image of the traveling route to be used as a reference map and the like; or it can acquire initial depth images in a plurality of different orientations to identify the subject and determine the change in the distance between the subject and the mobile platform 300, so as to control the vehicle body to accelerate, decelerate, stop, detour, and the like, thereby implementing obstacle avoidance in unmanned driving. In this way, different operations are performed for different subjects when the distance decreases, which makes the vehicle more intelligent.
Referring to fig. 17, when the body 10 is an unmanned aerial vehicle body, the number of time-of-flight assemblies 20 is four, and the four time-of-flight assemblies 20 are respectively mounted on the front, rear, left, and right sides of the unmanned aerial vehicle body, or on the front, rear, left, and right sides of a gimbal carried on the unmanned aerial vehicle body. The unmanned aerial vehicle body can drive the plurality of time-of-flight assemblies 20 to fly in the air for aerial photography, inspection, and the like; the unmanned aerial vehicle can transmit the obtained panoramic depth image back to a ground control terminal, or can directly perform SLAM. The plurality of time-of-flight assemblies 20 enable the unmanned aerial vehicle to accelerate, decelerate, stop, avoid obstacles, and track objects.
Referring to fig. 18, when the body 10 is a robot body, such as that of a sweeping robot, the number of time-of-flight assemblies 20 is four, and the four time-of-flight assemblies 20 are respectively mounted on the front, rear, left, and right sides of the robot body. The robot body can drive the plurality of time-of-flight assemblies 20 to move around the home and acquire initial depth images in a plurality of different orientations, so as to identify the subject and determine the change in the distance between the subject and the mobile platform 300, thereby controlling the movement of the robot body so that the robot can remove garbage, avoid obstacles, and the like.
Referring to fig. 19, when the body 10 is a ship body, the number of time-of-flight assemblies 20 is four, and the four time-of-flight assemblies 20 are respectively mounted on the front, rear, left, and right sides of the ship body. The ship body can drive the time-of-flight assemblies 20 to move and acquire initial depth images in a plurality of different orientations, so that the subject can be identified accurately in a harsh environment (for example, in fog), the change in the distance between the subject and the mobile platform 300 can be determined, and the safety of marine navigation can be improved.
The mobile platform 300 according to the embodiment of the present application is a platform capable of moving independently, and the plurality of time-of-flight components 20 are mounted on the body 10 of the mobile platform 300 to obtain a panoramic depth image. However, the electronic device 100 of the embodiment of the present application is generally not independently movable, and the electronic device 100 may be further mounted on a movable apparatus such as the mobile platform 300, thereby assisting the apparatus in acquiring the panoramic depth image.
It should be noted that the above explanations of the body 10, the time-of-flight assembly 20, the camera assembly 30, the microprocessor 40, and the application processor 50 of the electronic device 100 are also applicable to the mobile platform 300 according to the embodiment of the present application, and the descriptions thereof are not repeated here.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (19)

1. An electronic device, characterized in that the electronic device comprises:
a body;
a plurality of time-of-flight components disposed on the body, the plurality of time-of-flight components being respectively located at a plurality of different orientations of the body, each time-of-flight component including a light emitter for emitting a laser pulse out of the body and a light receiver for receiving the laser pulse emitted by the corresponding light emitter reflected by a subject;
the light emitters in the time-of-flight components in adjacent orientations emit the laser pulses in a time-shared manner and the light receivers in the time-of-flight components in adjacent orientations are exposed in a time-shared manner to obtain a plurality of initial depth images; and
the application processor is used for converting the depth information of each pixel in each initial depth image into unified depth information under a reference coordinate system according to a rotation matrix and a translation matrix between each image coordinate system and the reference coordinate system, wherein any pixel point in each initial depth image corresponds to a coordinate value, and the application processor is used for splicing the converted initial depth images according to the unified depth information and through coordinate matching to obtain a panoramic depth image; when a plurality of initial depth images are spliced, if the pixel points with the same coordinate value exist and the resolution of the initial depth images corresponding to the pixel points is larger than the preset resolution, the pixel points with the same coordinate value are overlapped.
2. The electronic device of claim 1, wherein the plurality of time-of-flight components comprises four time-of-flight components, and the field angle of each of the light emitters and each of the light receivers is any value from 80 degrees to 100 degrees.
3. The electronic device of claim 1, wherein said light emitters in a plurality of said time-of-flight components emit said laser pulses in a time-shared manner, said light receivers in a plurality of said time-of-flight components are exposed in a time-shared manner, and said light emitters in other of said time-of-flight components are turned off while said light receivers in any of said time-of-flight components are exposed.
4. The electronic device of claim 3, wherein the plurality of light emitters in the plurality of time-of-flight components sequentially and uninterruptedly emit the laser pulses, and the exposure time of the light receiver in each of the time-of-flight components is within the emission time range of the corresponding light emitter.
5. The electronic device of claim 3, wherein the plurality of optical transmitters in the plurality of time-of-flight components transmit the laser pulses sequentially and at predetermined intervals, and the plurality of optical receivers in the plurality of time-of-flight components are exposed sequentially and at predetermined intervals.
6. The electronic device of claim 3, wherein the plurality of optical transmitters in the plurality of time-of-flight components sequentially emit the laser pulses at predetermined time intervals, and the plurality of optical receivers in the plurality of time-of-flight components are sequentially and uninterruptedly exposed.
7. The electronic device according to claim 1, further comprising a plurality of microprocessors, each microprocessor corresponding to one of the time-of-flight components, the plurality of microprocessors being connected to the application processor, each microprocessor being configured to obtain an initial depth image from the laser pulses emitted by the light emitter and the laser pulses received by the light receiver of the corresponding time-of-flight component and transmit the initial depth image to the application processor; the application processor is used for synthesizing the plurality of initial depth images acquired by the microprocessors into one frame of the panoramic depth image according to the field angle of the optical receiver.
8. The electronic device of claim 1, further comprising a microprocessor, wherein the microprocessor is connected to the application processor, and the microprocessor is configured to obtain a plurality of initial depth images in sequence according to the laser pulses emitted by the light emitters of the plurality of time-of-flight components and the laser pulses received by the light receivers and transmit the initial depth images to the application processor; the application processor is used for synthesizing the plurality of initial depth images acquired by the microprocessor into one frame of the panoramic depth image according to the field angle of the optical receiver.
9. The electronic device according to claim 1, further comprising a plurality of microprocessors, each microprocessor corresponding to one of the time-of-flight components, the plurality of microprocessors being connected to the application processor, each microprocessor being configured to obtain an initial depth image from the laser pulses emitted by the light emitter and the laser pulses received by the light receiver of the corresponding time-of-flight component and transmit the initial depth image to the application processor;
the electronic equipment further comprises a plurality of camera assemblies arranged on the body, each camera assembly corresponds to one time-of-flight assembly, the camera assemblies are all connected with the application processor, and each camera assembly is used for collecting a scene image of the shot target and outputting the scene image to the application processor;
the application processor is used for identifying the shot target according to a plurality of initial depth images acquired by the microprocessors and a plurality of scene images acquired by the camera assemblies.
10. The electronic device of claim 9, wherein the application processor is further configured to combine at least two of the initial depth images acquired by at least two of the microprocessors into one frame of combined depth image according to a field angle of the optical receiver when the recognition of the target object according to the plurality of initial depth images and the plurality of scene images fails, combine at least two of the scene images acquired by at least two of the camera assemblies into one frame of combined scene image, and recognize the target object according to the combined depth image and the combined scene image.
11. The electronic device of claim 1, further comprising a microprocessor, wherein the microprocessor is connected to the application processor, and the microprocessor is configured to obtain a plurality of initial depth images in sequence according to the laser pulses emitted by the light emitters of the plurality of time-of-flight components and the laser pulses received by the light receivers and transmit the initial depth images to the application processor;
the electronic equipment further comprises a plurality of camera assemblies arranged on the body, each camera assembly corresponds to one time-of-flight assembly, the camera assemblies are all connected with the application processor, and each camera assembly is used for collecting a scene image of the shot target and outputting the scene image to the application processor;
the application processor is used for identifying the shot target according to the plurality of initial depth images acquired by the microprocessor and the plurality of scene images acquired by the plurality of camera assemblies.
12. The electronic device of claim 11, wherein the application processor is further configured to combine at least two of the initial depth images acquired by the microprocessor into a combined depth image according to a field angle of the optical receiver when the recognition of the target object according to the plurality of initial depth images and the plurality of scene images fails, combine at least two of the scene images acquired by at least two of the camera assemblies into a combined scene image, and recognize the target object according to the combined depth image and the combined scene image.
13. The electronic device according to claim 1, further comprising a plurality of microprocessors, each microprocessor corresponding to one of the time-of-flight components, each microprocessor being connected to the application processor, each microprocessor being configured to obtain a plurality of initial depth images from the laser pulses transmitted by the light transmitter of the corresponding time-of-flight component for a plurality of times and from the laser pulses received by the light receiver for a plurality of times, and to transmit the initial depth images to the application processor; the application processor is used for judging the distance change between the shot target and the electronic equipment according to the plurality of initial depth images.
14. The electronic device according to claim 13, wherein the application processor is further configured to combine at least two of the initial depth images obtained by at least two of the microprocessors into one merged depth image according to a field angle of the optical receiver when determining that the distance change fails according to a plurality of the initial depth images, and the application processor continuously performs the combining step to obtain a plurality of frames of consecutive merged depth images and determines the distance change according to the plurality of frames of the merged depth images.
15. The electronic device of claim 1, further comprising a microprocessor, connected to the application processor, for obtaining a plurality of initial depth images sequentially according to the laser pulses transmitted by the optical transmitter and received by the optical receiver of the plurality of time-of-flight components for a plurality of times, and transmitting the plurality of initial depth images to the application processor; the application processor is used for judging the distance change between the shot target and the electronic equipment according to a plurality of initial depth images corresponding to each time-of-flight component.
16. The electronic device according to claim 15, wherein the application processor is further configured to combine at least two of the initial depth images corresponding to at least two of the time-of-flight components acquired by the microprocessor into one merged depth image according to the field angle of the optical receiver when determining that the distance change fails according to the plurality of initial depth images corresponding to each of the time-of-flight components, and the application processor continuously performs the combining step to obtain a plurality of consecutive merged depth images and determines the distance change according to the plurality of merged depth images.
17. The electronic device according to any one of claims 13 to 16, wherein the application processor is further configured to increase a frame rate of the initial depth image collected from the plurality of initial depth images transmitted from the microprocessor to determine the distance change when the distance change is determined to be a distance decrease.
18. A mobile platform, comprising:
a body;
a plurality of time-of-flight components disposed on the body, the plurality of time-of-flight components being respectively located at a plurality of different orientations of the body, each time-of-flight component including a light emitter for emitting a laser pulse out of the body and a light receiver for receiving the laser pulse emitted by the corresponding light emitter reflected by a subject;
the light emitters in the time-of-flight components in adjacent orientations emit the laser pulses in a time-shared manner and the light receivers in the time-of-flight components in adjacent orientations are exposed in a time-shared manner to obtain a plurality of initial depth images; and
the application processor is used for converting the depth information of each pixel in each initial depth image into unified depth information under a reference coordinate system according to a rotation matrix and a translation matrix between each image coordinate system and the reference coordinate system, wherein any pixel point in each initial depth image corresponds to a coordinate value, and the application processor is used for splicing the converted initial depth images according to the unified depth information and through coordinate matching to obtain a panoramic depth image; when a plurality of initial depth images are spliced, if the pixel points with the same coordinate value exist and the resolution of the initial depth images corresponding to the pixel points is larger than the preset resolution, the pixel points with the same coordinate value are overlapped.
19. The mobile platform of claim 18, wherein the body is a vehicle body, an unmanned aerial vehicle fuselage, a robot body, or a ship body.
CN201910007545.8A 2019-01-04 2019-01-04 Electronic equipment and mobile platform Active CN109788195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910007545.8A CN109788195B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910007545.8A CN109788195B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Publications (2)

Publication Number Publication Date
CN109788195A CN109788195A (en) 2019-05-21
CN109788195B true CN109788195B (en) 2021-04-16

Family

ID=66500037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910007545.8A Active CN109788195B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Country Status (1)

Country Link
CN (1) CN109788195B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111246073B (en) * 2020-03-23 2022-03-25 维沃移动通信有限公司 Imaging device, method and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106461783A (en) * 2014-06-20 2017-02-22 高通股份有限公司 Automatic multiple depth cameras synchronization using time sharing
CN107263480A (en) * 2017-07-21 2017-10-20 深圳市萨斯智能科技有限公司 A kind of robot manipulation's method and robot
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
CN108471487A (en) * 2017-02-23 2018-08-31 钰立微电子股份有限公司 Generate the image device and associated picture device of panoramic range image

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101494736B (en) * 2009-02-10 2011-05-04 杨立群 Filming system
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
CN102129550B (en) * 2011-02-17 2013-04-17 华南理工大学 Scene perception method
US9653874B1 (en) * 2011-04-14 2017-05-16 William J. Asprey Trichel pulse energy devices
US10848731B2 (en) * 2012-02-24 2020-11-24 Matterport, Inc. Capturing and aligning panoramic image and depth data
CN104055489B (en) * 2014-07-01 2016-05-04 李栋 A kind of blood vessel imaging device
US9525863B2 (en) * 2015-04-29 2016-12-20 Apple Inc. Time-of-flight depth mapping with flexible scan pattern
US10003740B2 (en) * 2015-07-13 2018-06-19 Futurewei Technologies, Inc. Increasing spatial resolution of panoramic video captured by a camera array
CN106371281A (en) * 2016-11-02 2017-02-01 辽宁中蓝电子科技有限公司 Multi-module 360-degree space scanning and positioning 3D camera based on structured light
CN108810500A (en) * 2017-12-22 2018-11-13 成都理想境界科技有限公司 The method of adjustment of spliced scanning imagery equipment and spliced scanning imagery equipment
CN108616703A (en) * 2018-04-23 2018-10-02 Oppo广东移动通信有限公司 Electronic device and its control method, computer equipment and readable storage medium storing program for executing

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106461783A (en) * 2014-06-20 2017-02-22 高通股份有限公司 Automatic multiple depth cameras synchronization using time sharing
CN108471487A (en) * 2017-02-23 2018-08-31 钰立微电子股份有限公司 Generate the image device and associated picture device of panoramic range image
CN107263480A (en) * 2017-07-21 2017-10-20 深圳市萨斯智能科技有限公司 A kind of robot manipulation's method and robot
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation

Also Published As

Publication number Publication date
CN109788195A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
US11682127B2 (en) Image-enhanced depth sensing using machine learning
US11797083B2 (en) Head-mounted display system and 6-degree-of-freedom tracking method and apparatus thereof
CN109618108B (en) Electronic equipment and mobile platform
CN109862275A (en) Electronic equipment and mobile platform
US10643342B2 (en) Group optimization depth information method and system for constructing a 3D feature map
CN110572630A (en) Three-dimensional image shooting system, method, device, equipment and storage medium
CN109660731B (en) Electronic equipment and mobile platform
CN109618085B (en) Electronic equipment and mobile platform
CN109587304B (en) Electronic equipment and mobile platform
CN109803089B (en) Electronic equipment and mobile platform
CN109788195B (en) Electronic equipment and mobile platform
CN109688400A (en) Electronic equipment and mobile platform
CN109660733B (en) Electronic equipment and mobile platform
CN109587303B (en) Electronic equipment and mobile platform
CN110800023A (en) Image processing method and equipment, camera device and unmanned aerial vehicle
CN109729250B (en) Electronic equipment and mobile platform
US11943539B2 (en) Systems and methods for capturing and generating panoramic three-dimensional models and images
CN115407355A (en) Library position map verification method and device and terminal equipment
CN109660732B (en) Electronic equipment and mobile platform
KR102298047B1 (en) Method of recording digital contents and generating 3D images and apparatus using the same
CN114529800A (en) Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle
CN109788196B (en) Electronic equipment and mobile platform
CN116136408A (en) Indoor navigation method, server, device and terminal
CN109756660B (en) Electronic equipment and mobile platform
KR20200092197A (en) Image processing method, image processing apparatus, electronic device, computer program and computer readable recording medium for processing augmented reality image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant