CN109788196B - Electronic equipment and mobile platform - Google Patents

Electronic equipment and mobile platform

Info

Publication number
CN109788196B
Authority
CN
China
Prior art keywords
structured light
initial depth
image
depth image
camera
Prior art date
Legal status
Active
Application number
CN201910007854.5A
Other languages
Chinese (zh)
Other versions
CN109788196A (en)
Inventor
张学勇
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910007854.5A
Publication of CN109788196A
Application granted
Publication of CN109788196B

Abstract

The application discloses an electronic device and a mobile platform. The electronic device comprises a body and a plurality of structured light assemblies arranged on the body. The plurality of structured light assemblies are respectively positioned in a plurality of different orientations of the body. Each structured light assembly includes a structured light projector and a structured light camera. The structured light projector is used for projecting laser patterns to the outside of the body, and the structured light camera is used for collecting the laser patterns projected by the corresponding structured light projector and reflected by the photographed target. The structured light projectors in the plurality of structured light assemblies project laser light simultaneously, and the structured light cameras in the plurality of structured light assemblies are exposed simultaneously to acquire a panoramic depth image. In the electronic device and the mobile platform of the embodiments of the application, the plurality of structured light projectors located in the plurality of different orientations of the body project laser light simultaneously, and the plurality of structured light cameras are exposed simultaneously to acquire the panoramic depth image, so that relatively comprehensive depth information can be acquired at one time.

Description

Electronic equipment and mobile platform
Technical Field
The present application relates to the field of image acquisition technologies, and more particularly, to an electronic device and a mobile platform.
Background
In order to diversify the functions of an electronic device, a depth image acquiring device may be provided on the electronic device to acquire a depth image of a subject. However, current depth image acquiring devices can acquire a depth image of only one direction or one angular range, so the depth information that can be acquired is limited.
Disclosure of Invention
The embodiment of the application provides electronic equipment and a mobile platform.
The electronic equipment comprises a body and a plurality of structured light assemblies arranged on the body, wherein the structured light assemblies are respectively positioned at a plurality of different directions of the body, each structured light assembly comprises a structured light projector and a structured light camera, the structured light projector is used for projecting laser patterns to the outside of the body, and the structured light camera is used for collecting the laser patterns projected by the corresponding structured light projector reflected by a shot target; the structured light projectors in the plurality of structured light assemblies project laser light simultaneously, and the structured light cameras in the plurality of structured light assemblies are exposed simultaneously to acquire a panoramic depth image.
The mobile platform comprises a body and a plurality of structured light assemblies arranged on the body, wherein the structured light assemblies are respectively positioned at a plurality of different directions of the body, each structured light assembly comprises a structured light projector and a structured light camera, the structured light projector is used for projecting laser patterns to the outside of the body, and the structured light camera is used for collecting the laser patterns projected by the corresponding structured light projector reflected by a photographed target; the structured light projectors in the plurality of structured light assemblies project laser light simultaneously, and the structured light cameras in the plurality of structured light assemblies are exposed simultaneously to acquire a panoramic depth image.
In the electronic device and the mobile platform of the embodiments of the application, the plurality of structured light projectors located in the plurality of different orientations of the body project laser light simultaneously, and the plurality of structured light cameras are exposed simultaneously to acquire the panoramic depth image, so that relatively comprehensive depth information can be acquired at one time.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 2 is a block diagram of an electronic device according to some embodiments of the present application;
FIG. 3 is a schematic diagram of a structured light projector of the structured light assembly of certain embodiments of the present application;
FIG. 4 is a schematic structural view of a light source of a structured light projector according to certain embodiments of the present application;
FIG. 5 is a perspective view of a diffractive optical element of a structured light projector according to certain embodiments of the present application;
FIG. 6 is a cross-sectional view of a diffractive optical element of a structured light projector according to certain embodiments of the present application;
FIG. 7 is a schematic plan view of the diffractive optical element of the structured light projector of certain embodiments of the present application;
FIG. 8 is a schematic diagram of an application scenario of an electronic device according to some embodiments of the present application;
FIG. 9 is a schematic diagram of a coordinate system for initial depth image stitching according to some embodiments of the present application;
fig. 10 to 14 are schematic views of application scenarios of an electronic device according to some embodiments of the present application;
fig. 15-18 are schematic structural views of a mobile platform according to some embodiments of the present disclosure.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. The embodiments of the present application described below in conjunction with the drawings are exemplary only and should not be construed as limiting the present application.
Referring to fig. 1 and fig. 2, an electronic device 100 according to an embodiment of the present disclosure includes a main body 10, a structured light assembly 20, a camera assembly 30, a microprocessor 40, and an application processor 50.
The body 10 includes a plurality of different orientations. For example, in fig. 1, the body 10 can have four different orientations which, in the clockwise direction, are a first orientation, a second orientation, a third orientation and a fourth orientation, wherein the first orientation is opposite to the third orientation, and the second orientation is opposite to the fourth orientation. The first orientation corresponds to the right side of the body 10, the second orientation corresponds to the lower side of the body 10, the third orientation corresponds to the left side of the body 10, and the fourth orientation corresponds to the upper side of the body 10.
The structured light assembly 20 is disposed on the body 10. The number of the structured light assemblies 20 can be multiple, and the multiple structured light assemblies 20 are respectively located at multiple different orientations of the body 10. Specifically, the number of structured light assemblies 20 can be four: structured light assembly 20a, structured light assembly 20b, structured light assembly 20c, and structured light assembly 20d. The structured light assembly 20a is disposed in the first orientation, the structured light assembly 20b is disposed in the second orientation, the structured light assembly 20c is disposed in the third orientation, and the structured light assembly 20d is disposed in the fourth orientation. Of course, the number of structured light assemblies 20 may also be eight (or any other number greater than two, in particular any number greater than four), with two (or any other number of) structured light assemblies 20 provided for each of the first orientation, the second orientation, the third orientation, and the fourth orientation. The embodiments of the present application take four structured light assemblies 20 as an example. It can be understood that four structured light assemblies 20 are sufficient to obtain the panoramic depth image (a panoramic depth image means that its field angle is greater than or equal to 180 degrees; for example, the field angle of the panoramic depth image may be 180 degrees, 240 degrees, 360 degrees, 480 degrees, 720 degrees, and the like), which is beneficial to saving the manufacturing cost of the electronic device 100, reducing the volume and power consumption of the electronic device 100, and the like. The electronic device 100 of the present embodiment may be a portable electronic device provided with a plurality of structured light assemblies 20, such as a mobile phone, a tablet computer, or a notebook computer, in which case the body 10 may be a mobile phone body, a tablet computer body, a notebook computer body, and the like.
Each structured light assembly 20 includes a structured light projector 22 and a structured light camera 24. The structured light projector 22 is used for projecting laser patterns to the outside of the body 10, and the structured light camera 24 is used for collecting the laser patterns projected by the corresponding structured light projector 22 and reflected by the photographed target. Specifically, the structured light assembly 20a includes a structured light projector 22a and a structured light camera 24a, the structured light assembly 20b includes a structured light projector 22b and a structured light camera 24b, the structured light assembly 20c includes a structured light projector 22c and a structured light camera 24c, and the structured light assembly 20d includes a structured light projector 22d and a structured light camera 24d. The structured light projector 22a, the structured light projector 22b, the structured light projector 22c and the structured light projector 22d are respectively used for projecting laser patterns toward the first orientation, the second orientation, the third orientation and the fourth orientation outside the body 10, and the structured light camera 24a, the structured light camera 24b, the structured light camera 24c and the structured light camera 24d are respectively used for collecting the laser pattern projected by the structured light projector 22a and reflected by the photographed target in the first orientation, the laser pattern projected by the structured light projector 22b and reflected by the photographed target in the second orientation, the laser pattern projected by the structured light projector 22c and reflected by the photographed target in the third orientation, and the laser pattern projected by the structured light projector 22d and reflected by the photographed target in the fourth orientation, so that different areas outside the body 10 can be covered. Compared with the prior art, in which more comprehensive depth information is obtained by rotating the device through 360 degrees, the electronic device 100 of the present embodiment can acquire relatively comprehensive depth information at one time without rotating, which is simple to carry out and fast in response.
The structured light projectors 22 of the plurality of structured light assemblies 20 project laser light simultaneously, and the structured light cameras 24 of the plurality of structured light assemblies 20 are correspondingly exposed simultaneously to acquire a panoramic depth image. Specifically, the structured light projector 22a, the structured light projector 22b, the structured light projector 22c, and the structured light projector 22d project laser light simultaneously, and the structured light camera 24a, the structured light camera 24b, the structured light camera 24c, and the structured light camera 24d are exposed simultaneously. Because the plurality of structured light projectors 22 project laser light simultaneously and the plurality of structured light cameras 24 are exposed simultaneously, when the corresponding plurality of initial depth images are obtained from the laser patterns collected by the plurality of structured light cameras 24, the plurality of initial depth images have the same timeliness and reflect the scene in all orientations outside the body 10 at the same moment, that is, they form the panoramic depth image of that moment.
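As an illustration of the simultaneous projection and simultaneous exposure described above, the following Python sketch synchronizes four hypothetical assembly drivers with a barrier so that all four capture at the same moment. The Assembly class and its project_and_expose method are placeholders for driver interfaces that the patent does not specify.

    import threading

    class Assembly:
        """Hypothetical stand-in for one structured light projector/camera pair."""
        def __init__(self, name):
            self.name = name
            self.laser_pattern = None

        def project_and_expose(self, barrier):
            barrier.wait()  # all four assemblies pass this point at the same instant
            # the projector projects its laser pattern and the camera exposes here
            self.laser_pattern = "pattern captured by " + self.name

    assemblies = [Assembly(n) for n in ("20a", "20b", "20c", "20d")]
    barrier = threading.Barrier(len(assemblies))
    threads = [threading.Thread(target=a.project_and_expose, args=(barrier,))
               for a in assemblies]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # the four captured patterns share the same moment in time, so the four initial
    # depth images computed from them together describe one panoramic depth image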
The field angle of each structured light projector 22 and each structured light camera 24 is any value from 80 degrees to 100 degrees. In the following description, the angle of view of the structured light camera 24 is taken as an example, and the angle of view of the structured light projector 22 may be the same as or approximately the same as the angle of view of the corresponding structured light camera 24, and the description thereof will not be repeated.
In one embodiment, the field angles of the structured light camera 24a, the structured light camera 24b, the structured light camera 24c, and the structured light camera 24d are all 80 degrees. When the field angle of the structured light camera 24 does not exceed 80 degrees, the lens distortion is small, the quality of the obtained initial depth image is good, the quality of the obtained panoramic depth image is good, and more accurate depth information can be obtained.
In one embodiment, the sum of the field angles of the structured light camera 24a, the structured light camera 24b, the structured light camera 24c, and the structured light camera 24d is equal to 360 degrees. Specifically, the field angles of the structured light camera 24a, the structured light camera 24b, the structured light camera 24c and the structured light camera 24d may all be 90 degrees, and the field angles of the four structured light cameras 24 do not overlap with each other, so as to achieve acquiring a 360-degree or approximately 360-degree panoramic depth image. Or, the field angle of the structured light camera 24a may be 80 degrees, the field angle of the structured light camera 24b may be 100 degrees, the field angle of the structured light camera 24c may be 80 degrees, the field angle of the structured light camera 24d may be 100 degrees, and the like, and the four structured light cameras 24 achieve 360-degree or approximately 360-degree panoramic depth images through angle complementation.
In one embodiment, the sum of the field angles of the structured light cameras 24a, 24b, 24c, and 24d is greater than 360 degrees, and the field angles of at least two of the four structured light cameras 24 overlap each other. Specifically, the field angles of the structured light camera 24a, the structured light camera 24b, the structured light camera 24c, and the structured light camera 24d may all be 100 degrees, and the field angles between two of the four structured light cameras 24 overlap each other. When the panoramic depth image is obtained, the edge overlapping parts of the four initial depth images can be identified, and then the four initial depth images are spliced into the 360-degree panoramic depth image. Since the field angles of the four structured light cameras 24 are overlapped with each other, the obtained panoramic depth image can be ensured to cover the depth information of 360 degrees outside the body 10.
Of course, the specific numerical value of the angle of view of each structured light camera 24 (and each structured light projector 22) is not limited to the above example, and those skilled in the art can set the angle of view of the structured light camera 24 (and the structured light projector 22) to any value between 80 degrees and 100 degrees as needed, for example: the field angle of the structured light camera 24 is 80 degrees, 82 degrees, 84 degrees, 86 degrees, 90 degrees, 92 degrees, 94 degrees, 96 degrees, 98 degrees, 100 degrees or any value therebetween, and the field angle of the structured light projector 22 is 80 degrees, 82 degrees, 84 degrees, 86 degrees, 90 degrees, 92 degrees, 94 degrees, 96 degrees, 98 degrees, 100 degrees or any value therebetween, without limitation.
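The field-angle arithmetic above can be summarized in a small sketch; the angle sets below are the illustrative values from the preceding paragraphs, not mandated configurations.

    def covers_panorama(fovs, required=360.0):
        """fovs: per-camera field angles in degrees. Returns (covered, note)."""
        total = sum(fovs)
        if total < required:
            return False, "total field angle %.0f leaves gaps below %.0f" % (total, required)
        if total == required:
            return True, "exact coverage; adjacent field angles do not overlap"
        return True, "coverage with %.0f degrees of overlap available for stitching" % (total - required)

    print(covers_panorama([90, 90, 90, 90]))      # exact 360-degree coverage
    print(covers_panorama([80, 100, 80, 100]))    # complementary angles, exact coverage
    print(covers_panorama([100, 100, 100, 100]))  # overlapping edges must be merged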
With continued reference to fig. 1 and 2, generally, the laser patterns projected by the structured-light projectors 22 in adjacent orientations are likely to interfere with each other, and the laser patterns projected by the structured-light projectors 22 in opposite orientations are not likely to interfere with each other. Therefore, to improve the accuracy of the acquired depth information, the laser patterns projected by the structured light projectors 22 of adjacent orientations may be different in order to distinguish and calculate the initial depth image. Specifically, assuming that the laser pattern projected by the structured light projector 22a in the first orientation is pattern1, the laser pattern projected by the structured light projector 22b in the second orientation is pattern2, the laser pattern projected by the structured light projector 22c in the third orientation is pattern3, and the laser pattern projected by the structured light projector 22d in the fourth orientation is pattern4, it is only necessary to satisfy that the pattern1 is different from the pattern2, the pattern1 is different from the pattern4, the pattern3 is different from the pattern2, and the pattern3 is different from the pattern 4. The pattern1 and the pattern3 may be the same or different, and the pattern2 and the pattern4 may be the same or different. Preferably, the laser pattern projected by each structured light projector 22 may be different to further improve the accuracy of the acquired depth information. That is, in the case where the patterns 1, 2, 3, and 4 are all different, the plurality of structured light assemblies 20 do not interfere with each other, and the initial depth image of each is most easily calculated.
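The adjacency rule for the laser patterns can be expressed as a simple check; the orientation names and pattern identifiers below are placeholders used only for illustration.

    def adjacent_patterns_differ(patterns):
        """patterns: dict mapping orientation to pattern id, orientations listed clockwise."""
        order = ["first", "second", "third", "fourth"]
        for i, orientation in enumerate(order):
            neighbour = order[(i + 1) % len(order)]
            if patterns[orientation] == patterns[neighbour]:
                return False
        return True

    # opposite orientations (pattern1/pattern3 or pattern2/pattern4) may repeat
    print(adjacent_patterns_differ({"first": "pattern1", "second": "pattern2",
                                    "third": "pattern1", "fourth": "pattern2"}))  # True
    # adjacent orientations must not share a pattern
    print(adjacent_patterns_differ({"first": "pattern1", "second": "pattern1",
                                    "third": "pattern3", "fourth": "pattern4"}))  # False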
Referring to FIG. 3, each structured light projector 22 includes a light source 222, a collimating element 224, and a Diffractive Optical element 226 (DOE). A collimating element 224 and a diffractive optical element 226 are disposed in sequence in the optical path of the light source 222. The light source 222 is used for emitting laser light (for example, infrared laser light, in this case, the structured light camera 24 is an infrared camera), the collimating element 224 is used for collimating the laser light emitted by the light source 222, and the diffractive optical element 226 is used for diffracting the laser light collimated by the collimating element 224 to form a laser light pattern for projection.
Further, referring to fig. 4, the light source 222 includes a substrate 2222 and a plurality of light emitting elements 2224 disposed on the substrate 2222. The substrate 2222 may be a semiconductor substrate, and the plurality of light-emitting elements 2224 may be directly provided over the substrate 2222; alternatively, one or more grooves may be formed in the semiconductor substrate 2222 by a wafer-level optical process, and then the light-emitting elements 2224 may be disposed in the grooves. The light Emitting element 2224 includes a point light source light Emitting device such as a Vertical-Cavity Surface-Emitting Laser (VCSEL).
The collimating element 224 comprises one or more lenses arranged coaxially in sequence in the light emitting path of the light source 222. The lenses may be made of glass, which avoids temperature drift of the lenses when the ambient temperature changes; or the lenses may be made of plastic, which is low in cost and convenient for mass production. The surface type of each lens may be any one of an aspheric surface, a spherical surface, a Fresnel surface and a binary optical surface.
Referring to fig. 5, the diffractive optical element 226 includes a diffractive body 2262 and a diffractive structure 2264 formed on the diffractive body 2262. The diffractive body 2262 includes opposing diffractive entrance and exit surfaces, and the diffractive structure 2264 may be formed on the diffractive entrance surface, on the diffractive exit surface, or on both the diffractive entrance surface and the diffractive exit surface.
In order to make the laser light patterns projected by the structured light projectors 22 in adjacent orientations different, or to make the laser light patterns projected by each structured light projector 22 different, the following implementations can be adopted:
one way is that: at least one of the arrangement, shape, or size of the plurality of light emitting elements 2224 may be different between different structured light projectors 22 so that the laser light patterns projected by different structured light projectors 22 are different.
Specifically, referring to fig. 4, the structured light projector 22a and the structured light projector 22b, the structured light projector 22a and the structured light projector 22d, the structured light projector 22c and the structured light projector 22b, and the structured light projector 22c and the structured light projector 22d are different in at least one of arrangement, shape, or size of the light emitting elements 2224, so that the laser patterns projected by the structured light projectors 22 in adjacent directions are different. The structured light projector 22a, the structured light projector 22b, the structured light projector 22c, the structured light projector 22d, are different in at least one of the arrangement, shape, or size of the light emitting elements 2224 such that the laser light patterns projected by each structured light projector 22 are different. For example, referring to fig. 4, fig. 4(a) shows the structure of the light source 222 of the structured light projector 22a, fig. 4(b) shows the structure of the light source 222 of the structured light projector 22b, fig. 4(c) shows the structure of the light source 222 of the structured light projector 22c, and fig. 4(d) shows the structure of the light source 222 of the structured light projector 22 d. The structured light projector 22a and the structured light projector 22b have different shapes of the light emitting elements 2224, the structured light projector 22a and the structured light projector 22c have different sizes of the light emitting elements 2224, the structured light projector 22c and the structured light projector 22b have different shapes and sizes of the light emitting elements 2224, and the structured light projector 22c and the structured light projector 22d have different arrangements, shapes and sizes of the light emitting elements 2224, so that the structured light projectors 22 in adjacent directions project different laser patterns.
Another way is: the diffractive structures 2264 are different between different structured light projectors 22, such that the laser light patterns projected by the different structured light projectors 22 are different.
Specifically, referring to fig. 5, the structured light projector 22a and the structured light projector 22b, the structured light projector 22a and the structured light projector 22d, the structured light projector 22c and the structured light projector 22b, and the structured light projector 22c and the structured light projector 22d have different diffraction structures 2264, so that the structured light projectors 22 in adjacent directions project different laser patterns. The diffractive structures 2264 of the structured light projector 22a, the structured light projector 22b, the structured light projector 22c, and the structured light projector 22d are all different such that each structured light projector 22 projects a different laser light pattern.
Referring to fig. 6 and 7, the difference between the diffractive structures 2264 may include a difference in at least one of the step depth D, the step length L, the step width W, and the number of steps. Of course, the diffractive structures 2264 may also differ in other ways; it is only necessary that the diffractive structures 2264 differ so that the laser patterns projected by the structured light projectors 22 differ.
It should be noted that, in addition to the above two ways, those skilled in the art can also implement the difference between the adjacent directions or the laser patterns projected by each structured light projector 22 by using other ways, for example, by adding a mask having different light transmission areas between the light source 222 and the collimating element 224, and the like, which is not limited herein.
When the laser patterns projected by each structured light projector 22 are different, the reference image corresponding to each structured light assembly 20 may be calibrated independently or calibrated jointly. In the case of independent calibration, the structured light assembly 20a, the structured light assembly 20b, the structured light assembly 20c and the structured light assembly 20d are calibrated separately and need not be mounted on the body 10 at the same time for calibration. At this time, the laser patterns projected by the structured light projector 22a, the structured light projector 22b, the structured light projector 22c and the structured light projector 22d do not overlap, and the reference images acquired by the structured light camera 24a, the structured light camera 24b, the structured light camera 24c and the structured light camera 24d do not affect each other. In the case of joint calibration, the structured light assembly 20a, the structured light assembly 20b, the structured light assembly 20c and the structured light assembly 20d are mounted on the body 10 at the same time for calibration. At this time, the structured light projector 22a, the structured light projector 22b, the structured light projector 22c and the structured light projector 22d can project laser patterns simultaneously, and the reference image obtained by each structured light camera 24 includes all the laser patterns projected by its own structured light projector 22 as well as part of the laser patterns simultaneously projected by the two adjacent structured light projectors 22 (for example, the reference image obtained by the structured light camera 24a includes all the laser patterns projected by the structured light projector 22a and part of the laser patterns projected by the structured light projector 22b and the structured light projector 22d). For the reference image obtained by the structured light camera 24a, since the laser pattern projected by the structured light projector 22a is different from the laser patterns projected by the structured light projector 22b and the structured light projector 22d, the three patterns can be distinguished in the reference image according to their differences, the parts of the laser patterns projected by the structured light projector 22b and the structured light projector 22d can be filtered out, and only the remaining laser pattern projected by the structured light projector 22a is used as the final reference image.
Similarly, the reference image acquired by the structured light camera 24b, the reference image acquired by the structured light camera 24c, and the reference image acquired by the structured light camera 24d may be processed in the same way. For example, if the laser patterns differ in shape, that is, the spots have different shapes, the spots of the laser patterns projected by different structured light projectors 22 can be distinguished according to the shape of the spots; if the laser patterns differ in size, the spots of the laser patterns projected by different structured light projectors 22 can be distinguished according to the size of the spots. In actual use, since the plurality of structured light projectors 22 project laser patterns simultaneously, after the structured light cameras 24 collect the laser patterns, the microprocessor 40 (shown in fig. 2) also needs to filter out the spots in the laser patterns projected by the other structured light projectors 22, keep only the spots in the laser pattern projected by the corresponding structured light projector 22, and calculate the depth information based on the remaining spots and the reference image.
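A minimal sketch of the spot filtering step described above, assuming the spots have already been detected and are distinguished by size; the detection itself and the tolerance value are assumptions, not details given in the patent.

    def filter_own_spots(spots, own_spot_size, tolerance=0.2):
        """spots: list of (x, y, size) tuples detected in the captured laser pattern.
        Keeps only spots whose size is within `tolerance` of the local projector's
        spot size; spots cast by the adjacent projectors are filtered out."""
        kept = []
        for x, y, size in spots:
            if abs(size - own_spot_size) / own_spot_size <= tolerance:
                kept.append((x, y, size))
        return kept

    captured = [(10, 12, 3.0), (40, 8, 5.1), (22, 30, 2.9)]    # mixture of spot sizes
    own_spots = filter_own_spots(captured, own_spot_size=3.0)  # spots from the local projector
    # depth information is then calculated from own_spots and the calibrated reference image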
When the laser patterns projected by the structured light projector 22a, the structured light projector 22b, the structured light projector 22c and the structured light projector 22d are the same, interference may occur between them. In this case, the reference images corresponding to the structured light assembly 20a, the structured light assembly 20b, the structured light assembly 20c and the structured light assembly 20d must be calibrated jointly. It can be understood that, since the fields of view of the structured light projector 22a, the structured light projector 22b, the structured light projector 22c and the structured light projector 22d may overlap, each structured light camera 24 collects all of the laser pattern projected by its own structured light projector 22 and part of the laser patterns projected by the two adjacent structured light projectors 22; for example, the structured light camera 24a collects all of the laser pattern projected by the structured light projector 22a and part of the laser patterns projected by the structured light projector 22b and the structured light projector 22d, and the structured light camera 24d collects all of the laser pattern projected by the structured light projector 22d and part of the laser patterns projected by the structured light projector 22a and the structured light projector 22c.
For the structured light assembly 20a, the extra spots projected by the structured light projector 22b and the structured light projector 22d that appear in the laser pattern collected by the structured light camera 24a can also be used for the calculation of depth information, and correspondingly the reference image of the structured light assembly 20a should also include the spots projected by the structured light projector 22b and the structured light projector 22d. The same applies to the structured light assembly 20b (whose reference image should also include the spots projected by the structured light projector 22a and the structured light projector 22c), the structured light assembly 20c (whose reference image should also include the spots projected by the structured light projector 22b and the structured light projector 22d) and the structured light assembly 20d (whose reference image should also include the spots projected by the structured light projector 22a and the structured light projector 22c). Therefore, when calibrating the reference images of the structured light assembly 20a, the structured light assembly 20b, the structured light assembly 20c and the structured light assembly 20d, the structured light projector 22a, the structured light projector 22b, the structured light projector 22c and the structured light projector 22d should all be mounted on the body 10 and project laser patterns simultaneously, so that each structured light camera 24 can collect all of the laser pattern projected by its own structured light projector 22 together with the part of the laser patterns simultaneously projected by the two adjacent structured light projectors 22. Having more spots in the laser pattern used to calculate depth information is beneficial for increasing the amount and accuracy of the depth information.
Referring to fig. 1 and 2, a camera assembly 30 is disposed on the body 10. The number of camera assemblies 30 may be multiple, one structured light assembly 20 for each camera assembly 30. For example, when the number of structured light assemblies 20 is four, the number of camera assemblies 30 is also four, and the four camera assemblies 30 are disposed in the first orientation, the second orientation, the third orientation, and the fourth orientation, respectively.
The plurality of camera assemblies 30 are each connected to the application processor 50. Each camera assembly 30 is used to capture a scene image of the subject and output it to the application processor 50. In the present embodiment, the four camera assemblies 30 are respectively used for capturing the scene image of the subject in the first orientation, the scene image of the subject in the second orientation, the scene image of the subject in the third orientation, and the scene image of the subject in the fourth orientation, and outputting them to the application processor 50. It will be appreciated that the field angle of each camera assembly 30 is the same as, or approximately the same as, that of the structured light camera 24 of the corresponding structured light assembly 20, so that each scene image better matches the corresponding initial depth image.
The camera assembly 30 may be a visible light camera 32 or an infrared light camera 34. When camera assembly 30 is a visible light camera 32, the scene image is a visible light image; when camera assembly 30 is an infrared camera 34, the scene image is an infrared light image.
Referring to FIG. 2, the microprocessor 40 may be a processing chip. The number of microprocessors 40 may be multiple, with one structured light assembly 20 for each microprocessor 40. For example, in the present embodiment, when the number of the structured light units 20 is four, the number of the microprocessors 40 is also four. Each microprocessor 40 is connected to both the structured light projector 22 and the structured light camera 24 in the corresponding structured light assembly 20. Each microprocessor 40 can drive the corresponding structured light projector 22 to project laser light through the driving circuit, and the multiple structured light projectors 22 can project laser light simultaneously through the control of the multiple microprocessors 40. Each microprocessor 40 is also configured to provide clock information for capturing laser light patterns to the corresponding structured light camera 24 to enable exposure of the structured light camera 24 and to enable simultaneous exposure of the plurality of structured light cameras 24 through control of the plurality of microprocessors 40. The four microprocessors 40 are also used to process the laser patterns collected by the corresponding structured light camera 24 to obtain an initial depth image. For example, the plurality of microprocessors 40 process the laser pattern captured by the structured light camera 24a to obtain an initial depth image P1, process the laser pattern captured by the structured light camera 24b to obtain an initial depth image P2, process the laser pattern captured by the structured light camera 24c to obtain an initial depth image P3, and process the laser pattern captured by the structured light camera 24d to obtain an initial depth image P4, respectively (as shown in the upper portion of fig. 8). Each microprocessor 40 may also perform algorithm processing such as tiling, distortion correction, self-calibration, etc. on the initial depth image to improve the quality of the initial depth image.
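The patent leaves the depth computation itself unspecified; the sketch below uses the common structured-light relation between spot disparity (shift of the captured pattern relative to the reference image) and depth, with illustrative values for the focal length f, baseline b and reference-plane distance z0, and an assumed sign convention.

    import numpy as np

    def depth_from_disparity(disparity, f=500.0, b=0.05, z0=1.0):
        """disparity: per-pixel shift in pixels of the captured spots relative to the
        reference image calibrated at distance z0 (metres); f in pixels, b in metres.
        Positive disparity is assumed to mean the subject is closer than z0."""
        return (f * b * z0) / (f * b + disparity * z0)

    disparity_map = np.zeros((4, 4))      # stand-in for a spot/block matching result
    disparity_map[1:3, 1:3] = 10.0        # shifted spots where an object is closer
    initial_depth_image = depth_from_disparity(disparity_map)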
It will be appreciated that the number of the microprocessors 40 may be one, and in this case, the microprocessors 40 need to process the laser patterns collected by the plurality of structured light cameras 24 in sequence to obtain the initial depth image. The plurality of microprocessors 40 have a faster processing speed and a smaller delay time than one microprocessor 40.
The plurality of microprocessors 40 are each connected to the application processor 50 to transmit the initial depth images to the application processor 50. In one embodiment, the microprocessor 40 may be connected to the application processor 50 through a Mobile Industry Processor Interface (MIPI); specifically, the microprocessor 40 is connected to a Trusted Execution Environment (TEE) of the application processor 50 through the Mobile Industry Processor Interface, so as to transmit the data (the initial depth images) in the microprocessor 40 directly to the TEE and improve the security of information in the electronic device 100. The code and the memory area in the Trusted Execution Environment are both controlled by an access control unit and cannot be accessed by a program in the Rich Execution Environment (REE, the untrusted execution environment); both the Trusted Execution Environment and the Rich Execution Environment may be formed in the application processor 50.
The application processor 50 may function as a system of the electronic device 100. The application processor 50 may reset the microprocessor 40, wake the microprocessor 40, debug the microprocessor 40, and so on. The application processor 50 may also be connected to a plurality of electronic components of the electronic device 100 and control the plurality of electronic components to operate according to a predetermined mode, for example, the application processor 50 is connected to the visible light camera 32 and the infrared light camera 34 to control the visible light camera 32 and the infrared light camera 34 to capture a visible light image and an infrared light image and process the visible light image and the infrared light image; when the electronic apparatus 100 includes a display screen, the application processor 50 may control the display screen to display a predetermined screen; the application processor 50 may also control an antenna of the electronic device 100 to transmit or receive predetermined data or the like.
Referring to fig. 8, in one embodiment, the application processor 50 is configured to synthesize a plurality of initial depth images acquired by the plurality of microprocessors 40 into a frame of panoramic depth image according to the field angle of the structured light camera 24.
Specifically, referring to fig. 1, a rectangular coordinate system XOY is established with the center of the body 10 as the origin O, the horizontal axis as the X axis, and the vertical axis as the Y axis. In the rectangular coordinate system XOY, the field of view of the structured light camera 24a is located between 45 degrees and 315 degrees (measured clockwise, the same below), the field of view of the structured light camera 24b is located between 315 degrees and 225 degrees, the field of view of the structured light camera 24c is located between 225 degrees and 135 degrees, and the field of view of the structured light camera 24d is located between 135 degrees and 45 degrees. The application processor 50 then sequentially splices the initial depth image P1, the initial depth image P2, the initial depth image P3, and the initial depth image P4 into one frame of 360-degree panoramic depth image P1234 according to the field angles of the four structured light cameras 24, for subsequent use of the depth information.
In the initial depth image obtained by processing the laser pattern collected by the corresponding structured light camera 24 by each microprocessor 40, the depth information of each pixel is the distance between the subject in the corresponding direction and the structured light camera 24 in the direction. That is, the depth information of each pixel in the initial depth image P1 is the distance between the subject at the first orientation and the structured light camera 24 a; the depth information of each pixel in the initial depth image P2 is the distance between the subject at the second orientation and the structured light camera 24 b; the depth information of each pixel in the initial depth image P3 is the distance between the subject in the third orientation and the structured light camera 24 c; the depth information of each pixel in the initial depth image P4 is the distance between the subject at the fourth orientation and the structured light camera 24 d. In the process of splicing a plurality of initial depth images of a plurality of azimuths into a 360-degree panoramic depth image of one frame, firstly, the depth information of each pixel in each initial depth image is converted into unified depth information, and the unified depth information represents the distance between each object to be shot and a certain reference position in each azimuth. After the depth information is converted into the unified depth information, the application processor 50 is convenient to perform the splicing of the initial depth image according to the unified depth information.
Specifically, a reference coordinate system is selected first; the image coordinate system of the structured light camera 24 in a certain orientation may be used as the reference coordinate system, or another coordinate system may be selected as the reference coordinate system. Taking FIG. 9 as an example, the coordinate system x_o-y_o-z_o is used as the reference coordinate system. In fig. 9, the coordinate system x_a-y_a-z_a is the image coordinate system of the structured light camera 24a, the coordinate system x_b-y_b-z_b is the image coordinate system of the structured light camera 24b, the coordinate system x_c-y_c-z_c is the image coordinate system of the structured light camera 24c, and the coordinate system x_d-y_d-z_d is the image coordinate system of the structured light camera 24d. The application processor 50 converts the depth information of each pixel in the initial depth image P1 into unified depth information according to the rotation matrix and translation matrix between the coordinate system x_a-y_a-z_a and the reference coordinate system x_o-y_o-z_o; converts the depth information of each pixel in the initial depth image P2 into unified depth information according to the rotation matrix and translation matrix between the coordinate system x_b-y_b-z_b and the reference coordinate system x_o-y_o-z_o; converts the depth information of each pixel in the initial depth image P3 into unified depth information according to the rotation matrix and translation matrix between the coordinate system x_c-y_c-z_c and the reference coordinate system x_o-y_o-z_o; and converts the depth information of each pixel in the initial depth image P4 into unified depth information according to the rotation matrix and translation matrix between the coordinate system x_d-y_d-z_d and the reference coordinate system x_o-y_o-z_o.
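The conversion into unified depth information amounts to a rigid transform of each camera's points into the reference coordinate system x_o-y_o-z_o. The rotation matrix and translation vector in the sketch below are illustrative; in practice they come from the extrinsic calibration of each structured light camera.

    import numpy as np

    def to_reference_frame(points_cam, R, t):
        """points_cam: (N, 3) points in one camera's coordinate system.
        Returns the same points expressed in the reference frame: p_o = R @ p_cam + t."""
        return points_cam @ R.T + t

    # example: a camera facing the opposite way to the reference orientation is
    # modelled as a 180-degree rotation about the vertical axis plus a small offset
    R_c = np.array([[-1.0, 0.0, 0.0],
                    [ 0.0, 1.0, 0.0],
                    [ 0.0, 0.0, -1.0]])
    t_c = np.array([0.0, 0.0, 0.15])        # 15 cm between cameras, purely illustrative
    points_c = np.array([[0.1, 0.0, 1.2]])  # one pixel back-projected to 3D in that camera
    print(to_reference_frame(points_c, R_c, t_c))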
After the depth information conversion is completed, the plurality of initial depth images are located in a unified reference coordinate system, and each pixel point of each initial depth image corresponds to one coordinate (x_o, y_o, z_o); the stitching of the initial depth images can then be done by coordinate matching. For example, a certain pixel point P_a in the initial depth image P1 has the coordinates (x_o1, y_o1, z_o1), and a certain pixel point P_b in the initial depth image P2 also has the coordinates (x_o1, y_o1, z_o1). Since P_a and P_b have the same coordinate values in the current reference coordinate system, the pixel point P_a and the pixel point P_b are the same point, and when the initial depth image P1 and the initial depth image P2 are stitched, the pixel point P_a needs to coincide with the pixel point P_b. Thus, the application processor 50 can stitch the plurality of initial depth images through the matching relationship of the coordinates and obtain a 360-degree panoramic depth image.
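A sketch of the coordinate-matching step: once two initial depth images are expressed in the reference frame, points that fall on the same (quantised) coordinate are treated as the same point, so the overlapping edges coincide. The 1 cm quantisation step is an assumption used only to make the comparison tolerant of small numerical differences.

    import numpy as np

    def stitch_by_coordinates(points_a, points_b, step=0.01):
        """points_a, points_b: (N, 3) arrays in the reference frame (metres).
        Returns the merged point set with duplicates (same quantised cell) collapsed."""
        merged = np.vstack([points_a, points_b])
        keys = np.round(merged / step).astype(np.int64)
        _, unique_idx = np.unique(keys, axis=0, return_index=True)
        return merged[np.sort(unique_idx)]

    pa = np.array([[0.10, 0.20, 1.00], [0.50, 0.20, 1.00]])
    pb = np.array([[0.10, 0.20, 1.00], [0.90, 0.20, 1.00]])  # first point coincides with pa
    panorama_points = stitch_by_coordinates(pa, pb)           # three unique points remain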
It should be noted that stitching the initial depth images based on the matching relationship of the coordinates requires the resolution of the initial depth images to be greater than a preset resolution. It can be appreciated that if the resolution of the initial depth images is low, the accuracy of the coordinates (x_o, y_o, z_o) will also be relatively low; in this case, when matching is done directly from the coordinates, the point P_a and the point P_b may not actually coincide but differ by an offset whose value exceeds the error limit. If the resolution of the images is high, the accuracy of the coordinates (x_o, y_o, z_o) will be relatively high; in this case, when matching is done directly from the coordinates, even if the point P_a and the point P_b do not actually coincide and differ by an offset, the value of the offset is smaller than the error limit, that is, the offset is within the allowable error range, and the stitching of the initial depth images is not greatly affected.
It is to be understood that the following embodiments may adopt the above-mentioned manner to splice or synthesize two or more initial depth images, and are not described one by one.
The application processor 50 may further combine the plurality of initial depth images and the corresponding plurality of visible light images into a three-dimensional scene image to be displayed for viewing by a user. For example, the plurality of visible light images are a visible light image V1, a visible light image V2, a visible light image V3, and a visible light image V4, respectively. The application processor 50 may synthesize the initial depth image P1 with the visible light image V1, the initial depth image P2 with the visible light image V2, the initial depth image P3 with the visible light image V3, and the initial depth image P4 with the visible light image V4, and then splice the four synthesized images to obtain one frame of 360-degree three-dimensional scene image. Alternatively, the application processor 50 may first splice the initial depth image P1, the initial depth image P2, the initial depth image P3 and the initial depth image P4 into one frame of 360-degree panoramic depth image, splice the visible light image V1, the visible light image V2, the visible light image V3 and the visible light image V4 into one frame of 360-degree panoramic visible light image, and then synthesize the panoramic depth image and the panoramic visible light image into a 360-degree three-dimensional scene image.
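As a sketch of the second ordering described above (stitch first, then combine), the three-dimensional scene image is modelled here simply as an RGB-D array holding colour and depth per pixel; the actual display format is not specified in the patent.

    import numpy as np

    def compose_scene(panoramic_depth, panoramic_rgb):
        """panoramic_depth: (H, W) array in metres; panoramic_rgb: (H, W, 3) uint8.
        Returns an (H, W, 4) array with normalised colour in channels 0-2 and depth in 3."""
        h, w = panoramic_depth.shape
        scene = np.zeros((h, w, 4), dtype=np.float32)
        scene[..., :3] = panoramic_rgb.astype(np.float32) / 255.0
        scene[..., 3] = panoramic_depth
        return scene

    panoramic_depth = np.full((480, 1920), 2.0, dtype=np.float32)  # stitched P1234
    panoramic_rgb = np.zeros((480, 1920, 3), dtype=np.uint8)       # stitched V1..V4
    three_d_scene_image = compose_scene(panoramic_depth, panoramic_rgb)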
Referring to fig. 10, in one embodiment, application processor 50 is configured to identify a subject based on a plurality of initial depth images acquired by a plurality of microprocessors 40 and a plurality of scene images captured by a plurality of camera assemblies 30.
Specifically, when the scene images are infrared light images, the plurality of infrared light images may be an infrared light image I1, an infrared light image I2, an infrared light image I3, and an infrared light image I4, respectively. The application processor 50 identifies the subject in the first orientation from the initial depth image P1 and the infrared light image I1, the subject in the second orientation from the initial depth image P2 and the infrared light image I2, the subject in the third orientation from the initial depth image P3 and the infrared light image I3, and the subject in the fourth orientation from the initial depth image P4 and the infrared light image I4, respectively. When the scene images are visible light images, the plurality of visible light images are a visible light image V1, a visible light image V2, a visible light image V3, and a visible light image V4, respectively. The application processor 50 identifies the subject in the first orientation from the initial depth image P1 and the visible light image V1, the subject in the second orientation from the initial depth image P2 and the visible light image V2, the subject in the third orientation from the initial depth image P3 and the visible light image V3, and the subject in the fourth orientation from the initial depth image P4 and the visible light image V4, respectively.
When the identification of the subject is face recognition, the application processor 50 achieves higher recognition accuracy by using infrared light images as the scene images. The process by which the application processor 50 performs face recognition from the initial depth image and the infrared light image may be as follows:
firstly, face detection is carried out according to the infrared light image to determine a target face area. Because the infrared light image comprises the detail information of the scene, after the infrared light image is acquired, the human face detection can be carried out according to the infrared light image, so that whether the infrared light image contains the human face or not can be detected. And if the infrared light image contains the human face, extracting a target human face area where the human face is located in the infrared light image.
Then, the living body detection processing is performed on the target face region according to the initial depth image. Because each initial depth image corresponds to the infrared light image, and the initial depth image includes the depth information of the corresponding infrared light image, the depth information corresponding to the target face area can be acquired according to the initial depth image. Further, since the living body face is stereoscopic and the face displayed, for example, on a picture, a screen, or the like, is planar, it is possible to determine whether the target face region is stereoscopic or planar according to the acquired depth information of the target face region, thereby performing living body detection on the target face region.
And if the living body detection is successful, acquiring target face attribute parameters corresponding to the target face area, and performing face matching processing on the target face area in the infrared light image according to the target face attribute parameters to obtain a face matching result. The target face attribute parameters refer to parameters capable of representing attributes of a target face, and the target face can be identified and matched according to the target face attribute parameters. The target face attribute parameters include, but are not limited to, face deflection angles, face brightness parameters, facial features parameters, skin quality parameters, geometric feature parameters, and the like. The electronic apparatus 100 may previously store the face attribute parameters for matching. After the target face attribute parameters are acquired, the target face attribute parameters can be compared with the face attribute parameters stored in advance. And if the target face attribute parameters are matched with the pre-stored face attribute parameters, the face recognition is passed.
It should be noted that the specific process of the application processor 50 performing face recognition according to the initial depth image and the infrared light image is not limited to this, for example, the application processor 50 may also assist in detecting a face contour according to the initial depth image to improve face recognition accuracy, and the like. The process of the application processor 50 performing face recognition based on the initial depth image and the visible light image is similar to the process of the application processor 50 performing face recognition based on the initial depth image and the infrared light image, and will not be further described herein.
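The three-step flow above (face detection on the infrared image, liveness detection using the depth image, attribute matching) can be outlined as follows; the detector, the flatness threshold and the attribute comparison are all placeholders for components the patent does not specify.

    import numpy as np

    def detect_face_region(infrared_image):
        # placeholder: a real detector returns the face bounding box, or None if no face
        return (100, 100, 160, 160)  # (top, left, bottom, right)

    def is_live(initial_depth_image, region, flatness_threshold=0.01):
        top, left, bottom, right = region
        patch = initial_depth_image[top:bottom, left:right]
        # a face on a photo or screen is nearly planar, so its depth varies very little
        return float(np.std(patch)) > flatness_threshold

    def matches_stored(face_params, stored_params, tolerance=0.1):
        diff = np.abs(np.asarray(face_params) - np.asarray(stored_params))
        return bool(np.all(diff < tolerance))

    def recognize_face(infrared_image, initial_depth_image, stored_params):
        region = detect_face_region(infrared_image)
        if region is None or not is_live(initial_depth_image, region):
            return False
        face_params = np.zeros_like(np.asarray(stored_params))  # placeholder attributes
        return matches_stored(face_params, stored_params)

    infrared = np.zeros((480, 640), dtype=np.uint8)
    depth = np.ones((480, 640), dtype=np.float32)
    depth[100:160, 100:160] += np.linspace(0.0, 0.05, 60)       # a slightly curved face region
    print(recognize_face(infrared, depth, stored_params=np.zeros(3)))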
Referring to fig. 10 and 11, the application processor 50 is further configured to, when the identification of the subject from the plurality of initial depth images and the plurality of scene images fails, synthesize at least two initial depth images acquired by at least two microprocessors 40 into one frame of merged depth image according to the field angles of the structured light cameras 24, synthesize at least two scene images captured by at least two camera assemblies 30 into one frame of merged scene image, and identify the subject from the merged depth image and the merged scene image.
Specifically, in the embodiment shown in fig. 10 and 11, the field angle of the structured light camera 24 of each structured light assembly 20 is limited, so half of a human face may be located in the initial depth image P2 and the other half in the initial depth image P3. In that case the application processor 50 synthesizes the initial depth image P2 and the initial depth image P3 into one frame of merged depth image P23, and correspondingly synthesizes the infrared light image I2 and the infrared light image I3 (or the visible light image V2 and the visible light image V3) into one frame of merged scene image I23 (or V23), so as to re-identify the subject from the merged depth image P23 and the merged scene image I23 (or V23).
It is understood that when the subject is distributed across more initial depth images at the same time, the application processor 50 may synthesize more initial depth images (corresponding to different orientations) into one frame of merged depth image, and correspondingly synthesize more infrared light images (corresponding to different orientations) or visible light images (corresponding to different orientations) into one frame of merged scene image, so as to re-identify the subject.
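A minimal sketch of this merging step is given below, assuming the adjacent initial depth images (and their scene images) are rectified numpy arrays whose fields of view meet at a known column overlap; real stitching would use the calibrated extrinsics between the structured light cameras 24, so the simple horizontal concatenation here is only illustrative.

    import numpy as np

    def merge_adjacent(depth_a, depth_b, scene_a, scene_b, overlap_px=0):
        # depth_a / depth_b: initial depth images of two adjacent orientations (e.g. P2, P3).
        # scene_a / scene_b: the corresponding infrared or visible light images (e.g. I2, I3).
        # overlap_px: columns shared by the two field angles, assumed known from calibration.
        if overlap_px > 0:
            depth_b = depth_b[:, overlap_px:]
            scene_b = scene_b[:, overlap_px:]
        merged_depth = np.hstack([depth_a, depth_b])   # e.g. P2 + P3 -> P23
        merged_scene = np.hstack([scene_a, scene_b])   # e.g. I2 + I3 -> I23
        return merged_depth, merged_scene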
Referring to fig. 12 and 13, in an embodiment, the application processor 50 is configured to determine a distance change between the subject and the electronic device 100 according to a plurality of initial depth images.
Specifically, each structured light camera 24 may collect the laser pattern multiple times. For example, at a first time, the structured light cameras 24a, 24b, 24c and 24d collect laser patterns, and the plurality of microprocessors 40 correspondingly obtain an initial depth image P11, an initial depth image P21, an initial depth image P31 and an initial depth image P41; at a second time, the structured light cameras 24a, 24b, 24c and 24d collect laser patterns again, and the plurality of microprocessors 40 correspondingly obtain an initial depth image P12, an initial depth image P22, an initial depth image P32 and an initial depth image P42. The application processor 50 then determines the distance change between the subject in the first orientation and the electronic device 100 according to the initial depth images P11 and P12; determines the distance change between the subject in the second orientation and the electronic device 100 according to the initial depth images P21 and P22; determines the distance change between the subject in the third orientation and the electronic device 100 according to the initial depth images P31 and P32; and determines the distance change between the subject in the fourth orientation and the electronic device 100 according to the initial depth images P41 and P42.
It is understood that, since the initial depth image contains the depth information of the subject, the application processor 50 may determine the change of the distance between the subject in the corresponding orientation and the electronic device 100 from the change of that depth information at multiple consecutive times.
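As a sketch, the distance change for one orientation could be estimated by comparing a robust depth statistic of the subject's pixels at two consecutive times; the use of the median and the boolean subject mask below are illustrative assumptions.

    import numpy as np

    def distance_change(depth_t1, depth_t2, subject_mask):
        # depth_t1 / depth_t2: initial depth images of one orientation at two consecutive
        # times (e.g. P11 and P12); subject_mask: boolean HxW array of the subject's pixels.
        d1 = np.median(depth_t1[subject_mask & (depth_t1 > 0)])
        d2 = np.median(depth_t2[subject_mask & (depth_t2 > 0)])
        return float(d2 - d1)                 # negative -> the subject moved closer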
Referring to fig. 14, the application processor 50 is further configured to, when determination of the distance change according to the multiple initial depth images fails, combine at least two initial depth images acquired by at least two microprocessors 40 into one frame of merged depth image according to the field angle of the structured light camera 24; the application processor 50 continuously performs this combining step to obtain multiple frames of consecutive merged depth images and determines the distance change according to the multiple frames of merged depth images.
Specifically, in the embodiment shown in fig. 14, the field angle of the structured light camera 24 of each structured light assembly 20 is limited, so half of a human face may be located in the initial depth image P21 and the other half in the initial depth image P31. In that case the application processor 50 synthesizes the initial depth image P21 and the initial depth image P31 at the first time into one merged depth image P231, correspondingly synthesizes the initial depth image P22 and the initial depth image P32 at the second time into one merged depth image P232, and then re-judges the distance change according to the two merged depth images P231 and P232.
It is to be understood that, when the subject is distributed across more initial depth images at the same time, the application processor 50 may synthesize those initial depth images (corresponding to different orientations) into one frame of merged depth image, and continuously perform this synthesizing step over multiple time instants.
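Combining the two previous sketches, the re-judging step could look as follows, again assuming rectified depth arrays and an illustrative subject mask on the merged frame; the simple hstack-based merge stands in for the field-angle-based synthesis described above.

    import numpy as np

    def distance_change_from_merged(depths_t1, depths_t2, subject_mask):
        # depths_t1: adjacent-orientation depth images at the first time (e.g. [P21, P31]);
        # depths_t2: the same orientations at the second time (e.g. [P22, P32]);
        # subject_mask: boolean mask defined on the merged frame.
        merged_t1 = np.hstack(depths_t1)      # e.g. P231
        merged_t2 = np.hstack(depths_t2)      # e.g. P232
        d1 = np.median(merged_t1[subject_mask & (merged_t1 > 0)])
        d2 = np.median(merged_t2[subject_mask & (merged_t2 > 0)])
        return float(d2 - d1)                 # negative -> the subject moved closer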
Referring to fig. 13, when it is determined according to the plurality of initial depth images, or according to the multiple frames of merged depth images, that the distance decreases, the application processor 50 increases the frame rate at which it acquires, from the plurality of initial depth images transmitted by the at least one microprocessor 40, the initial depth images used for determining the distance change.
It is understood that when the distance between the subject and the electronic device 100 decreases, the electronic device 100 cannot predict whether the distance will continue to decrease. Therefore, the application processor 50 may increase the frame rate at which it acquires, from the plurality of initial depth images transmitted by the at least one microprocessor 40, the initial depth images used for determining the distance change, so as to follow the distance change more closely. Specifically, when determining that the distance corresponding to a certain orientation decreases, the application processor 50 may increase the frame rate at which it acquires, from the initial depth images transmitted by the microprocessor 40 of that orientation, the initial depth images used for determining the distance change in that orientation.
For example, at a first time, the plurality of microprocessors 40 respectively obtain an initial depth image P11, an initial depth image P21, an initial depth image P31 and an initial depth image P41; at a second time, they respectively obtain an initial depth image P12, an initial depth image P22, an initial depth image P32 and an initial depth image P42; at a third time, an initial depth image P13, an initial depth image P23, an initial depth image P33 and an initial depth image P43; and at a fourth time, an initial depth image P14, an initial depth image P24, an initial depth image P34 and an initial depth image P44.
Under normal circumstances, the application processor 50 selects the initial depth images P11 and P14 to judge the distance change between the subject in the first orientation and the electronic device 100; selects the initial depth images P21 and P24 to judge the distance change between the subject in the second orientation and the electronic device 100; selects the initial depth images P31 and P34 to judge the distance change between the subject in the third orientation and the electronic device 100; and selects the initial depth images P41 and P44 to judge the distance change between the subject in the fourth orientation and the electronic device 100. In other words, for each orientation the application processor 50 acquires one initial depth image out of every three, skipping two frames between the selected frames.
When the distance corresponding to the first orientation is determined to decrease according to the initial depth images P11 and P14, the application processor 50 selects the initial depth images P11 and P13 to judge the distance change between the subject in the first orientation and the electronic device 100. The frame rate at which the application processor 50 acquires the initial depth image of the first orientation thus changes to one frame out of every two, skipping one frame between the selected frames. The frame rates of the other orientations remain unchanged, that is, the application processor 50 still selects the initial depth images P21 and P24, the initial depth images P31 and P34, and the initial depth images P41 and P44 to judge the corresponding distance changes.
When the distance corresponding to the first orientation is determined to decrease according to the initial depth images P11 and P14, and the distance corresponding to the second orientation is determined to decrease according to the initial depth images P21 and P24, the application processor 50 selects the initial depth images P11 and P13 to judge the distance change between the subject in the first orientation and the electronic device 100, and selects the initial depth images P21 and P23 to judge the distance change between the subject in the second orientation and the electronic device 100; the frame rate at which the application processor 50 acquires the initial depth images of the first and second orientations thus changes to one frame out of every two. The frame rates of the other orientations remain unchanged, that is, the application processor 50 still selects the initial depth images P31 and P34 to judge the distance change between the subject in the third orientation and the electronic device 100, and selects the initial depth images P41 and P44 to judge the distance change between the subject in the fourth orientation and the electronic device 100.
Of course, when determining that the distance corresponding to any one orientation decreases, the application processor 50 may also increase the frame rate at which it acquires, from the initial depth images transmitted by every microprocessor 40, the initial depth images used for determining the distance change. That is, when the distance between the subject in the first orientation and the electronic device 100 is determined to decrease according to the initial depth images P11 and P14, the application processor 50 selects the initial depth images P11 and P13 to judge the distance change between the subject in the first orientation and the electronic device 100, selects the initial depth images P21 and P23 to judge the distance change between the subject in the second orientation and the electronic device 100, selects the initial depth images P31 and P33 to judge the distance change between the subject in the third orientation and the electronic device 100, and selects the initial depth images P41 and P43 to judge the distance change between the subject in the fourth orientation and the electronic device 100.
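The frame selection described in this example can be sketched as a small scheduling helper: normally one frame out of every three is read per orientation (P11 and P14), and one out of every two once a distance decrease has been detected for that orientation (P11 and P13). The function names and the per-orientation dictionary below are illustrative assumptions.

    def select_frame_indices(num_frames, decreasing):
        # Normally read one frame out of every three (indices 0, 3, ... -> P11, P14);
        # after a distance decrease, read one out of every two (indices 0, 2, ... -> P11, P13).
        step = 2 if decreasing else 3
        return list(range(0, num_frames, step))

    def frame_plan(num_frames, decreasing_by_orientation):
        # decreasing_by_orientation: e.g. {"first": True, "second": False, ...};
        # only the orientations whose distance decreased get the higher sampling rate.
        return {orientation: select_frame_indices(num_frames, dec)
                for orientation, dec in decreasing_by_orientation.items()}

For instance, frame_plan(4, {"first": True, "second": False, "third": False, "fourth": False}) yields indices [0, 2] (P11 and P13) for the first orientation and [0, 3] (P21 and P24, and so on) for the others, matching the example above.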
When the distance decreases, the application processor 50 may also determine the distance change in conjunction with the visible light image or the infrared light image. Specifically, the application processor 50 identifies the subject from the visible light image or the infrared light image, and then determines the distance change from the initial depth images at multiple times, so as to control the electronic device 100 to perform different operations for different subjects and different distances. Alternatively, when the distance decreases, the microprocessor 40 may increase the frequency at which the corresponding structured light projector 22 projects laser light and at which the structured light camera 24 is exposed.
It should be noted that the electronic device 100 of this embodiment may also be used as an external terminal: it may be fixedly or detachably mounted on a portable electronic device such as a mobile phone, a tablet computer or a notebook computer, or fixedly mounted on a movable object such as a vehicle body (as shown in fig. 11 and 12), an unmanned aerial vehicle body, a robot body or a ship body. In use, the electronic device 100 synthesizes one frame of panoramic depth image from the plurality of initial depth images as described above, and the panoramic depth image may be used for three-dimensional modeling, simultaneous localization and mapping (SLAM), and augmented reality display. When the electronic device 100 recognizes a subject as described above, it may be applied to face recognition unlocking and payment on a portable electronic device, or to obstacle avoidance for a robot, a vehicle, an unmanned aerial vehicle, a ship, and the like. When the electronic device 100 determines the change of the distance between the subject and the electronic device 100 as described above, it may be applied to automatic travel, object tracking, and the like for robots, vehicles, unmanned aerial vehicles, ships, and so on.
Referring to fig. 2 and fig. 15, the present embodiment further provides a mobile platform 300. The mobile platform 300 includes a body 10 and a plurality of structured light assemblies 20 disposed on the body 10. The plurality of structured light assemblies 20 are respectively positioned at a plurality of different orientations of the body 10. Each structured light assembly 20 includes a structured light projector 22 and a structured light camera 24. The structured light projector 22 is used for projecting laser patterns to the outside of the body 10, and the structured light camera 24 is used for collecting the laser patterns projected by the corresponding structured light projector 22 and reflected by a photographed object. The structured light projectors 22 in the plurality of structured light assemblies 20 project laser light simultaneously and the structured light cameras 24 in the plurality of structured light assemblies 20 are exposed simultaneously to acquire a panoramic depth image.
Specifically, the body 10 may be a vehicle body, an unmanned aerial vehicle fuselage, a robot body, or a ship body.
Referring to fig. 15, when the body 10 is a vehicle body, the number of the structured light assemblies 20 is four, and the four structured light assemblies 20 are respectively mounted on four sides of the vehicle body, for example the front end, the rear end, the left side and the right side of the vehicle body. The vehicle body can drive the plurality of structured light assemblies 20 to move on a road and construct a 360-degree panoramic depth image of the traveling route, for use as a reference map and the like; or it can acquire initial depth images in multiple different orientations to identify the photographed target and judge the change of the distance between the photographed target and the mobile platform 300, so as to control the vehicle body to accelerate, decelerate, stop, detour and so on, thereby realizing unmanned obstacle avoidance. In this way, different operations are performed for different photographed targets when the distance decreases, making the vehicle more intelligent.
Referring to fig. 16, when the body 10 is an unmanned aerial vehicle body, the number of the structured light assemblies 20 is four, and the four structured light assemblies 20 are respectively mounted on the front, rear, left and right sides of the unmanned aerial vehicle body, or on the front, rear, left and right sides of a gimbal carried on the unmanned aerial vehicle body. The unmanned aerial vehicle body can carry the plurality of structured light assemblies 20 in flight for aerial photography, inspection and the like; the unmanned aerial vehicle can transmit the acquired panoramic depth image back to a ground control terminal, or can directly perform SLAM. The plurality of structured light assemblies 20 enable the unmanned aerial vehicle to accelerate, decelerate, stop, avoid obstacles and track objects.
Referring to fig. 17, when the body 10 is a robot body, for example that of a sweeping robot, the number of the structured light assemblies 20 is four, and the four structured light assemblies 20 are respectively mounted on the front, rear, left and right sides of the robot body. The robot body can drive the plurality of structured light assemblies 20 to move around a home and acquire initial depth images in multiple different orientations, so as to identify the photographed target and judge the change of the distance between the photographed target and the mobile platform 300, thereby controlling the robot body to move and enabling the robot to clear away garbage, avoid obstacles and the like.
Referring to fig. 18, when the body 10 is a ship body, the number of the structured light assemblies 20 is four, and the four structured light assemblies 20 are respectively mounted on the front, rear, left and right sides of the ship body. The ship body can drive the structured light assemblies 20 to move and acquire initial depth images in multiple different orientations, so that the photographed target can be identified accurately even in a harsh environment (for example, in fog) and the change of the distance between the photographed target and the mobile platform 300 can be judged, improving the safety of marine navigation.
The mobile platform 300 of the embodiment of the present application is a platform capable of moving independently, and the plurality of structured light assemblies 20 are mounted on the body 10 of the mobile platform 300 to obtain a panoramic depth image. By contrast, the electronic device 100 of the embodiment of the present application is generally not independently movable; it may additionally be mounted on a movable apparatus such as the mobile platform 300, thereby assisting the apparatus in acquiring the panoramic depth image.
It should be noted that the above explanations of the body 10, the structured light assembly 20, the camera assembly 30, the microprocessor 40 and the application processor 50 of the electronic device 100 are also applicable to the mobile platform 300 of the embodiment of the present application, and the descriptions thereof are not repeated here.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations of the above embodiments may be made by those of ordinary skill in the art within the scope of the present application, which is defined by the claims and their equivalents.

Claims (14)

1. An electronic device, characterized in that the electronic device comprises:
a body;
the plurality of structured light assemblies are arranged on the body and respectively located at a plurality of different orientations of the body, each structured light assembly comprises a structured light projector and a structured light camera, the structured light projector is used for projecting laser patterns to the outside of the body, and the structured light camera is used for collecting the laser patterns projected by the corresponding structured light projector and reflected by a shot target;
the structured light projectors in the plurality of structured light assemblies project laser light simultaneously, and the structured light cameras in the plurality of structured light assemblies are exposed simultaneously to obtain a panoramic depth image;
a plurality of microprocessors, wherein each microprocessor is used for processing the laser pattern collected by the structured light camera of the corresponding structured light assembly to obtain an initial depth image; and
an application processor connected with the plurality of microprocessors and receiving the initial depth images, wherein after the application processor converts the depth information of each pixel in each initial depth image into unified depth information in a reference coordinate system according to a rotation matrix and a translation matrix between each image coordinate system and the reference coordinate system, each pixel point in each initial depth image corresponds to a coordinate value, and the application processor is used for splicing the converted initial depth images through coordinate matching according to the unified depth information to obtain the panoramic depth image; when the plurality of initial depth images are spliced, if pixel points with the same coordinate value exist and the resolution of the initial depth images corresponding to those pixel points is greater than a preset resolution, the pixel points with the same coordinate value are overlapped.
2. The electronic device of claim 1 wherein the laser light patterns projected by the structured light projectors of adjacent orientations are different.
3. The electronic device of claim 1, wherein the number of the structured light assemblies is four, and the field angle of each structured light projector and each structured light camera is any value from 80 degrees to 100 degrees.
4. The electronic device of claim 2 wherein the laser light pattern projected by each of the structured light projectors is different.
5. The electronic device of claim 2 or 4 wherein each of the structured light projectors comprises a plurality of light emitting elements; at least one of an arrangement, a shape, or a size of the plurality of light emitting elements is different between different structured light projectors, such that the laser light patterns projected by different structured light projectors are different.
6. The electronic device of claim 2 or 4, wherein each of the structured light projectors comprises a diffractive optical element comprising a diffractive body and a diffractive structure formed on the diffractive body; the diffractive structure is different between different ones of the structured light projectors, such that the laser light patterns projected by different ones of the structured light projectors are different.
7. The electronic device of claim 1, wherein the application processor is configured to synthesize the initial depth images obtained by the microprocessor into a frame of the panoramic depth image according to a field angle of the structured light camera.
8. The electronic device of claim 1, further comprising a plurality of camera assemblies disposed on the body, each camera assembly corresponding to one of the structured light assemblies, the plurality of camera assemblies each being connected to the application processor, each camera assembly being configured to capture a scene image of the object and output the scene image to the application processor;
the application processor is used for identifying the shot target according to a plurality of initial depth images acquired by the microprocessors and a plurality of scene images acquired by the camera assemblies.
9. The electronic device according to claim 8, wherein the application processor is further configured to, when the recognition of the object fails according to a plurality of the initial depth images and a plurality of the scene images, combine at least two of the initial depth images acquired by at least two of the microprocessors into one frame of merged depth image according to a field angle of the structured light camera, combine at least two of the scene images acquired by at least two of the camera assemblies into one frame of merged scene image, and recognize the object according to the merged depth image and the merged scene image.
10. The electronic device of claim 1, wherein the application processor is configured to determine a change in distance between the subject and the electronic device from a plurality of the initial depth images.
11. The electronic device according to claim 10, wherein the application processor is further configured to synthesize at least two of the initial depth images acquired by at least two of the microprocessors into one frame of merged depth image according to a field angle of the structured light camera when determining that the distance change fails according to a plurality of the initial depth images, and the application processor continuously performs the synthesizing step to obtain a plurality of frames of consecutive merged depth images and determines the distance change according to the plurality of frames of the merged depth images.
12. The electronic device according to claim 10 or 11, wherein the application processor is further configured to increase a frame rate of the initial depth image acquired from the plurality of initial depth images transmitted from the at least one microprocessor to determine the distance change when the distance change is determined to be a distance decrease.
13. A mobile platform, comprising:
a body;
the plurality of structured light assemblies are arranged on the body and respectively located at a plurality of different orientations of the body, each structured light assembly comprises a structured light projector and a structured light camera, the structured light projector is used for projecting laser patterns to the outside of the body, and the structured light camera is used for collecting the laser patterns projected by the corresponding structured light projector and reflected by a shot target;
the structured light projectors in the plurality of structured light assemblies project laser light simultaneously, and the structured light cameras in the plurality of structured light assemblies are exposed simultaneously to obtain a panoramic depth image;
a plurality of microprocessors, wherein each microprocessor is used for processing the laser pattern collected by the structured light camera of the corresponding structured light assembly to obtain an initial depth image; and
an application processor connected with the plurality of microprocessors and receiving the initial depth images, wherein after the application processor converts the depth information of each pixel in each initial depth image into unified depth information in a reference coordinate system according to a rotation matrix and a translation matrix between each image coordinate system and the reference coordinate system, each pixel point in each initial depth image corresponds to a coordinate value, and the application processor is used for splicing the converted initial depth images through coordinate matching according to the unified depth information to obtain the panoramic depth image; when the plurality of initial depth images are spliced, if pixel points with the same coordinate value exist and the resolution of the initial depth images corresponding to those pixel points is greater than a preset resolution, the pixel points with the same coordinate value are overlapped.
14. The mobile platform of claim 13, wherein the body is a vehicle body, an unmanned aerial vehicle fuselage, a robot body, or a ship body.
CN201910007854.5A 2019-01-04 2019-01-04 Electronic equipment and mobile platform Active CN109788196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910007854.5A CN109788196B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Publications (2)

Publication Number Publication Date
CN109788196A CN109788196A (en) 2019-05-21
CN109788196B true CN109788196B (en) 2021-07-23

Family

ID=66499901

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910007854.5A Active CN109788196B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Country Status (1)

Country Link
CN (1) CN109788196B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106443985A (en) * 2015-08-07 2017-02-22 高准精密工业股份有限公司 Method of scaling a structured light pattern and optical device using same
CN106991716A (en) * 2016-08-08 2017-07-28 深圳市圆周率软件科技有限责任公司 A kind of panorama three-dimensional modeling apparatus, method and system
CN107263480A (en) * 2017-07-21 2017-10-20 深圳市萨斯智能科技有限公司 A kind of robot manipulation's method and robot
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
CN108471487A (en) * 2017-02-23 2018-08-31 钰立微电子股份有限公司 Generate the image device and associated picture device of panoramic range image
CN108490633A (en) * 2018-03-12 2018-09-04 广东欧珀移动通信有限公司 Structured light projector, depth camera and electronic equipment
CN108493767A (en) * 2018-03-12 2018-09-04 广东欧珀移动通信有限公司 Laser generator, structured light projector, image obtain structure and electronic device
CN108965751A (en) * 2017-05-25 2018-12-07 钰立微电子股份有限公司 For generating the image device of 360 degree of depth maps

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10848731B2 (en) * 2012-02-24 2020-11-24 Matterport, Inc. Capturing and aligning panoramic image and depth data
CN104159026A (en) * 2014-08-07 2014-11-19 厦门亿联网络技术股份有限公司 System for realizing 360-degree panoramic video
US10003740B2 (en) * 2015-07-13 2018-06-19 Futurewei Technologies, Inc. Increasing spatial resolution of panoramic video captured by a camera array
WO2018134796A1 (en) * 2017-01-23 2018-07-26 Hangzhou Zero Zero Technology Co., Ltd. System and method for omni-directional obstacle avoidance in aerial systems
CN107393011A (en) * 2017-06-07 2017-11-24 武汉科技大学 A kind of quick three-dimensional virtual fitting system and method based on multi-structured light vision technique
CN107580208B (en) * 2017-08-24 2020-06-23 上海视智电子科技有限公司 Cooperative work system and method of multi-depth measuring equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant