CN109660731A - Electronic equipment and mobile platform - Google Patents

Electronic equipment and mobile platform

Info

Publication number
CN109660731A
Authority
CN
China
Prior art keywords
time of flight
initial depth
electronic equipment
depth image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910007544.3A
Other languages
Chinese (zh)
Other versions
CN109660731B (en)
Inventor
张学勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910007544.3A priority Critical patent/CN109660731B/en
Publication of CN109660731A publication Critical patent/CN109660731A/en
Application granted granted Critical
Publication of CN109660731B publication Critical patent/CN109660731B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Abstract

The present application discloses an electronic device and a mobile platform. The electronic device includes a body and a plurality of time-of-flight components disposed on the body. The plurality of time-of-flight components are located at a plurality of different orientations of the body. Each time-of-flight component includes a light emitter and a light receiver. The light emitter is configured to emit laser pulses outward from the body, and the light receiver is configured to receive the laser pulses emitted by the corresponding light emitter and reflected by a target subject. The light emitters of the plurality of time-of-flight components emit laser light simultaneously, and the light receivers of the plurality of time-of-flight components are exposed simultaneously, so as to obtain a panoramic depth image. In the electronic device and the mobile platform of the embodiments of the present application, the plurality of light emitters located at the plurality of different orientations of the body emit laser light simultaneously and the plurality of light receivers are exposed simultaneously to obtain a panoramic depth image, so that relatively comprehensive depth information can be acquired in one pass.

Description

Electronic equipment and mobile platform
Technical field
The present application relates to image acquisition technologies, and more specifically, to an electronic device and a mobile platform.
Background
In order to diversify the functions of electronic devices, a depth image acquisition apparatus may be provided on an electronic device to obtain a depth image of a target subject. However, existing depth image acquisition apparatuses can only obtain a depth image in a single direction or within a single angular range, so the depth information they acquire is limited.
Summary of the invention
Embodiments of the present application provide an electronic device and a mobile platform.
The electronic device of the embodiments of the present application includes a body and a plurality of time-of-flight components disposed on the body. The plurality of time-of-flight components are located at a plurality of different orientations of the body. Each time-of-flight component includes a light emitter and a light receiver. The light emitter is configured to emit laser pulses outward from the body, and the light receiver is configured to receive the laser pulses emitted by the corresponding light emitter and reflected by a target subject. The light emitters of the plurality of time-of-flight components emit laser light simultaneously, and the light receivers of the plurality of time-of-flight components are exposed simultaneously, so as to obtain a panoramic depth image.
The mobile platform of the embodiments of the present application includes a body and a plurality of time-of-flight components disposed on the body. The plurality of time-of-flight components are located at a plurality of different orientations of the body. Each time-of-flight component includes a light emitter and a light receiver. The light emitter is configured to emit laser pulses outward from the body, and the light receiver is configured to receive the laser pulses emitted by the corresponding light emitter and reflected by a target subject. The light emitters of the plurality of time-of-flight components emit laser light simultaneously, and the light receivers of the plurality of time-of-flight components are exposed simultaneously, so as to obtain a panoramic depth image.
In the electronic device and the mobile platform of the embodiments of the present application, the plurality of light emitters located at the plurality of different orientations of the body emit laser light simultaneously, and the plurality of light receivers are exposed simultaneously, so as to obtain a panoramic depth image; relatively comprehensive depth information can thus be acquired in one pass.
Additional aspects and advantages of the embodiments of the present application will be set forth in part in the following description, will in part become apparent from the following description, or may be learned through practice of the embodiments of the present application.
Brief description of the drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic structural diagram of an electronic device according to certain embodiments of the present application;
Fig. 2 is a schematic block diagram of an electronic device according to certain embodiments of the present application;
Fig. 3 is a schematic structural diagram of a light emitter of a time-of-flight component according to certain embodiments of the present application;
Fig. 4 is a schematic diagram of an application scenario of an electronic device according to certain embodiments of the present application;
Fig. 5 is a schematic diagram of the coordinate systems used for stitching initial depth images according to certain embodiments of the present application;
Fig. 6 to Fig. 10 are schematic diagrams of application scenarios of an electronic device according to certain embodiments of the present application;
Fig. 11 to Fig. 14 are schematic structural diagrams of mobile platforms according to certain embodiments of the present application.
Detailed description
Embodiments of the present application are described further below with reference to the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and should not be construed as limiting the present application.
Referring to Fig. 1 and Fig. 2 together, the electronic device 100 of the embodiments of the present application includes a body 10, time-of-flight components 20, camera assemblies 30, microprocessors 40 and an application processor 50.
The body 10 has a plurality of different orientations. As shown in Fig. 1, the body 10 may have four different orientations which, in clockwise order, are a first orientation, a second orientation, a third orientation and a fourth orientation, the first orientation being opposite to the third orientation and the second orientation being opposite to the fourth orientation. The first orientation corresponds to the right side of the body 10, the second orientation corresponds to the lower side of the body 10, the third orientation corresponds to the left side of the body 10, and the fourth orientation corresponds to the upper side of the body 10.
The time-of-flight components 20 are disposed on the body 10. There may be a plurality of time-of-flight components 20, located at the plurality of different orientations of the body 10. Specifically, there may be four time-of-flight components 20, namely a time-of-flight component 20a, a time-of-flight component 20b, a time-of-flight component 20c and a time-of-flight component 20d. The time-of-flight component 20a is disposed at the first orientation, the time-of-flight component 20b at the second orientation, the time-of-flight component 20c at the third orientation, and the time-of-flight component 20d at the fourth orientation. Of course, there may also be eight time-of-flight components 20 (or any other number greater than two, in particular any number greater than four), with two (or some other number of) time-of-flight components 20 disposed at each of the first, second, third and fourth orientations. The embodiments of the present application are described with four time-of-flight components 20 as an example. It can be understood that four time-of-flight components 20 are sufficient to obtain a panoramic depth image (a panoramic depth image means a depth image whose field of view is greater than or equal to 180 degrees; for example, the field of view of the panoramic depth image may be 180 degrees, 240 degrees, 360 degrees, 480 degrees, 720 degrees, and so on), while helping to reduce the manufacturing cost, volume and power consumption of the electronic device 100. The electronic device 100 of this embodiment may be a portable electronic device, such as a mobile phone, a tablet computer or a laptop computer, provided with a plurality of time-of-flight components 20; in that case, the body 10 may be the mobile phone body, the tablet computer body, the laptop computer body, and so on.
Each time-of-flight component 20 includes a light emitter 22 and a light receiver 24. The light emitter 22 is configured to emit laser pulses outward from the body 10, and the light receiver 24 is configured to receive the laser pulses emitted by the corresponding light emitter 22 and reflected by a target subject. Specifically, the time-of-flight component 20a includes a light emitter 22a and a light receiver 24a, the time-of-flight component 20b includes a light emitter 22b and a light receiver 24b, the time-of-flight component 20c includes a light emitter 22c and a light receiver 24c, and the time-of-flight component 20d includes a light emitter 22d and a light receiver 24d. The light emitters 22a, 22b, 22c and 22d respectively emit laser pulses toward the first, second, third and fourth orientations outside the body 10, and the light receivers 24a, 24b, 24c and 24d respectively receive the laser pulses emitted by the light emitter 22a and reflected by the target subject at the first orientation, the laser pulses emitted by the light emitter 22b and reflected by the target subject at the second orientation, the laser pulses emitted by the light emitter 22c and reflected by the target subject at the third orientation, and the laser pulses emitted by the light emitter 22d and reflected by the target subject at the fourth orientation, so that all the different regions outside the body 10 can be covered. Compared with an existing device that must be rotated by 360 degrees to obtain relatively comprehensive depth information, the electronic device 100 of this embodiment can obtain relatively comprehensive depth information in one pass without rotating, which is simple to perform and fast in response.
The light emitters 22 of the plurality of time-of-flight components 20 emit laser light simultaneously, and correspondingly the light receivers 24 of the plurality of time-of-flight components 20 are exposed simultaneously, so as to obtain a panoramic depth image. Specifically, the light emitters 22a, 22b, 22c and 22d emit laser light simultaneously, and the light receivers 24a, 24b, 24c and 24d are exposed simultaneously. Since the plurality of light emitters 22 emit laser light at the same time and the plurality of light receivers 24 are exposed at the same time, the plurality of initial depth images obtained from the laser pulses emitted by the plurality of light emitters 22 and the laser pulses received by the corresponding plurality of light receivers 24 share the same timeliness: they reflect what each orientation outside the body 10 looks like at the same moment, that is, the panoramic depth image of one moment.
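The patent does not spell out how depth is computed from the pulses; as background only, a minimal sketch of the pulsed time-of-flight principle, assuming depth is recovered from the round-trip delay of each pulse (array shapes and delay values are illustrative, not from the patent):

```python
import numpy as np

LIGHT_SPEED = 299_792_458.0  # meters per second

def initial_depth_image(round_trip_delays_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip delays (seconds) to depth (meters)."""
    return 0.5 * LIGHT_SPEED * round_trip_delays_s

# Four receivers exposed simultaneously yield four delay maps, hence four
# initial depth images P1..P4 carrying the same timestamp:
delays = [np.full((240, 320), 2.0e-8) for _ in range(4)]  # ~3 m everywhere
p1, p2, p3, p4 = (initial_depth_image(d) for d in delays)
```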
The field of view of each light emitter 22 and of each light receiver 24 is any value from 80 degrees to 100 degrees. The following takes the field of view of the light receivers 24 as an example; the field of view of each light emitter 22 may be identical or approximately identical to that of the corresponding light receiver 24 and is not described separately here.
In one example, the fields of view of the light receivers 24a, 24b, 24c and 24d are each 80 degrees. When the field of view of a light receiver 24 does not exceed 80 degrees, lens distortion is small, the quality of the acquired initial depth images is good, and the quality of the resulting panoramic depth image is accordingly good, so that accurate depth information can be obtained.
In one example, the fields of view of the light receivers 24a, 24b, 24c and 24d sum to 360 degrees. Specifically, the fields of view of the light receivers 24a, 24b, 24c and 24d may each be 90 degrees, with no overlap between the fields of view of the four light receivers 24, so as to obtain a panoramic depth image of 360 degrees or approximately 360 degrees. Alternatively, the field of view of the light receiver 24a may be 80 degrees, that of the light receiver 24b 100 degrees, that of the light receiver 24c 80 degrees and that of the light receiver 24d 100 degrees, the four light receivers 24 complementing one another in angle so as to obtain a panoramic depth image of 360 degrees or approximately 360 degrees.
In one example, the fields of view of the light receivers 24a, 24b, 24c and 24d sum to more than 360 degrees, and the fields of view of at least two of the four light receivers 24 overlap. Specifically, the fields of view of the light receivers 24a, 24b, 24c and 24d may each be 100 degrees, with the fields of view of the four light receivers 24 pairwise overlapping. When obtaining the panoramic depth image, the overlapping edge portions of the four initial depth images are first identified, and the four initial depth images are then stitched into a 360-degree panoramic depth image. Since the fields of view of the four light receivers 24 pairwise overlap, it can be ensured that the obtained panoramic depth image covers the full 360 degrees of depth information outside the body 10.
Of course, the specific value of the field of view of each light receiver 24 (and of each light emitter 22) is not limited to the above examples. Those skilled in the art may set the field of view of the light receivers 24 (and of the light emitters 22) to any number from 80 degrees to 100 degrees as needed, for example 80, 82, 84, 86, 90, 92, 94, 96, 98 or 100 degrees for a light receiver 24, and 80, 82, 84, 86, 90, 92, 94, 96, 98 or 100 degrees for a light emitter 22, or any value in between; no limitation is imposed here.
Referring to Fig. 3, each light emitter 22 includes a light source 222 and a diffuser 224. The light source 222 is configured to emit laser light, and the diffuser 224 is configured to diffuse the laser light emitted by the light source 222.
It can be understood that a certain degree of mutual interference may occur when the time-of-flight components 20 at adjacent orientations operate at the same time. For example, laser pulses emitted by the light emitter 22a may, after being reflected by the target subject, be received by the light receivers 24b and 24d; laser pulses emitted by the light emitter 22b may, after being reflected by the target subject, be received by the light receivers 24a and 24c; laser pulses emitted by the light emitter 22c may, after being reflected by the target subject, be received by the light receivers 24b and 24d; and laser pulses emitted by the light emitter 22d may, after being reflected by the target subject, be received by the light receivers 24c and 24a. Therefore, in order to avoid the influence of such interference and improve the accuracy of the acquired depth information, the wavelengths of the laser pulses emitted by the light emitters 22 (or their light sources 222) at adjacent orientations may be made different, so that the initial depth images can be distinguished and computed.
Specifically, suppose the wavelength of the laser pulses emitted by the light emitter 22a at the first orientation is λ1, that of the light emitter 22b at the second orientation is λ2, that of the light emitter 22c at the third orientation is λ3, and that of the light emitter 22d at the fourth orientation is λ4; then λ1 ≠ λ2, λ1 ≠ λ4, λ3 ≠ λ2 and λ3 ≠ λ4, that is, the wavelengths of the laser pulses emitted by the light emitters 22 at adjacent orientations differ. Here λ1 and λ3 may be equal or unequal, and λ2 and λ4 may be equal or unequal. Preferably, the wavelengths of the laser pulses emitted by all the light emitters 22 (or their light sources 222) differ from one another, that is, λ1 ≠ λ2 ≠ λ3 ≠ λ4, which further improves the accuracy of the acquired depth information: the plurality of time-of-flight components 20 then do not interfere with one another at all, and the computation of each initial depth image is simplest. In addition, each light receiver 24 is configured to receive only laser pulses of the wavelength emitted by the corresponding light emitter 22. For example, the light receiver 24a is configured to receive the laser pulses of wavelength λ1 emitted by the light emitter 22a, and cannot receive the laser pulses of wavelength λ2 emitted by the light emitter 22b, of wavelength λ3 emitted by the light emitter 22c, or of wavelength λ4 emitted by the light emitter 22d. Similarly, the light receiver 24b only receives the laser pulses of wavelength λ2 emitted by the light emitter 22b, the light receiver 24c only receives the laser pulses of wavelength λ3 emitted by the light emitter 22c, and the light receiver 24d only receives the laser pulses of wavelength λ4 emitted by the light emitter 22d; this is not expanded upon case by case here.
Taking the case where the laser pulses emitted by the light emitters 22 are infrared light, whose wavelength ranges from 770 nanometers to 1 millimeter: λ1 may be any value from 770 to 1000 nanometers, λ2 any value from 1000 to 1200 nanometers, λ3 any value from 1200 to 1400 nanometers, and λ4 any value from 1400 to 1600 nanometers. The light receiver 24a then receives the laser pulses of 770 to 1000 nanometers emitted by the light emitter 22a, the light receiver 24b receives the laser pulses of 1000 to 1200 nanometers emitted by the light emitter 22b, the light receiver 24c receives the laser pulses of 1200 to 1400 nanometers emitted by the light emitter 22c, and the light receiver 24d receives the laser pulses of 1400 to 1600 nanometers emitted by the light emitter 22d.
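A sketch of this wavelength-separation scheme, with band edges taken from the example above; the helper function and naming are hypothetical illustrations, not an API from the patent:

```python
# Emitter-to-band assignment, one band per orientation (nanometers).
BANDS_NM = {
    "22a": (770, 1000),   # first orientation
    "22b": (1000, 1200),  # second orientation
    "22c": (1200, 1400),  # third orientation
    "22d": (1400, 1600),  # fourth orientation
}

def accepted_by(receiver: str, wavelength_nm: float) -> bool:
    """A receiver 24x accepts only pulses inside its paired emitter's band."""
    low, high = BANDS_NM[receiver.replace("24", "22")]
    return low <= wavelength_nm < high

assert accepted_by("24a", 850)       # 24a sees its own emitter 22a
assert not accepted_by("24a", 1100)  # but rejects crosstalk from 22b
```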
It should be pointed out that, besides making the wavelengths of the laser pulses emitted at adjacent orientations, or by every light emitter 22, different, those skilled in the art may use other means to avoid mutual interference when different time-of-flight components 20 operate at the same time; no limitation is imposed here. Alternatively, a small degree of interference may simply be ignored and the initial depth images computed directly, or the influence of the interference may be filtered out by suitable algorithmic processing when the initial depth images are computed.
Referring to Fig. 1 and Fig. 2, the camera assemblies 30 are disposed on the body 10. There may be a plurality of camera assemblies 30, each corresponding to one time-of-flight component 20. For example, when there are four time-of-flight components 20 there are also four camera assemblies 30, respectively disposed at the first, second, third and fourth orientations of the body 10.
The plurality of camera assemblies 30 are connected to the application processor 50. Each camera assembly 30 is configured to collect a scene image of the target subject and output it to the application processor 50. In this embodiment, the four camera assemblies 30 respectively collect the scene image of the target subject at the first orientation, at the second orientation, at the third orientation and at the fourth orientation, and output these scene images to the application processor 50. It can be understood that the field of view of each camera assembly 30 is identical or approximately identical to that of the light receiver 24 of the corresponding time-of-flight component 20, so that each scene image can be well matched to the corresponding initial depth image.
A camera assembly 30 may be a visible-light camera 32 or an infrared camera 34. When the camera assembly 30 is a visible-light camera 32, the scene image is a visible-light image; when the camera assembly 30 is an infrared camera 34, the scene image is an infrared-light image.
Referring to Fig. 2, a microprocessor 40 may be a processing chip. There may be a plurality of microprocessors 40, each corresponding to one time-of-flight component 20. For example, in this embodiment, when there are four time-of-flight components 20 there are also four microprocessors 40. Each microprocessor 40 is connected to both the light emitter 22 and the light receiver 24 of the corresponding time-of-flight component 20. Each microprocessor 40 can drive the corresponding light emitter 22 to emit laser light through a drive circuit, and the simultaneous emission of the plurality of light emitters 22 is achieved through the coordinated control of the plurality of microprocessors 40. Each microprocessor 40 also provides the corresponding light receiver 24 with the clock information for receiving the laser pulses so that the light receiver 24 is exposed, and the simultaneous exposure of the plurality of light receivers 24 is likewise achieved through the coordinated control of the plurality of microprocessors 40. The plurality of microprocessors 40 are further configured to obtain initial depth images from the laser pulses emitted by the corresponding light emitters 22 and the laser pulses received by the corresponding light receivers 24. For example, the four microprocessors 40 respectively obtain an initial depth image P1 from the laser pulses emitted by the light emitter 22a and received by the light receiver 24a, an initial depth image P2 from the laser pulses emitted by the light emitter 22b and received by the light receiver 24b, an initial depth image P3 from the laser pulses emitted by the light emitter 22c and received by the light receiver 24c, and an initial depth image P4 from the laser pulses emitted by the light emitter 22d and received by the light receiver 24d (as shown in the upper part of Fig. 4). Each microprocessor 40 may further perform processing such as stitching, distortion correction and self-calibration on its initial depth image to improve the quality of the initial depth image.
It can be understood that there may instead be a single microprocessor 40; in that case the microprocessor 40 must sequentially process the laser pulses emitted by the plurality of light emitters 22 and received by the plurality of light receivers 24 to obtain the initial depth images. Compared with a single microprocessor 40, a plurality of microprocessors 40 process faster and with lower latency.
The plurality of microprocessors 40 are connected to the application processor 50 to transmit the initial depth images to the application processor 50. In one example, a microprocessor 40 may be connected to the application processor 50 through a Mobile Industry Processor Interface (MIPI); specifically, the microprocessor 40 is connected through the Mobile Industry Processor Interface to the Trusted Execution Environment (TEE) of the application processor 50, so that the data (the initial depth images) in the microprocessor 40 are transmitted directly into the Trusted Execution Environment, thereby improving the security of the information in the electronic device 100. The code and the memory region in the Trusted Execution Environment are controlled by an access control unit and cannot be accessed by programs in the Rich Execution Environment (REE); both the Trusted Execution Environment and the Rich Execution Environment may be formed in the application processor 50.
The application processor 50 may serve as the system of the electronic device 100. The application processor 50 can reset a microprocessor 40, wake a microprocessor 40, debug a microprocessor 40, and so on. The application processor 50 may also be connected to the various electronic components of the electronic device 100 and control them to operate in predetermined modes; for example, it connects to the visible-light camera 32 and the infrared camera 34 to control them to capture visible-light images and infrared-light images and to process those images; when the electronic device 100 includes a display screen, the application processor 50 can control the display screen to display predetermined pictures; and the application processor 50 can control the antenna of the electronic device 100 to send or receive predetermined data, and so on.
Referring to Fig. 4, in one example the application processor 50 is configured to synthesize the plurality of initial depth images obtained by the plurality of microprocessors 40 into one frame of panoramic depth image according to the fields of view of the light receivers 24.
Specifically, with reference also to Fig. 1, a rectangular coordinate system XOY is established with the center of the body 10 as the origin O, the transverse axis as the X axis and the longitudinal axis as the Y axis. In the rectangular coordinate system XOY, the field of view of the light receiver 24a spans from 45 degrees to 315 degrees (measured clockwise, the same below), that of the light receiver 24b from 315 degrees to 225 degrees, that of the light receiver 24c from 225 degrees to 135 degrees, and that of the light receiver 24d from 135 degrees to 45 degrees. The application processor 50 then stitches the initial depth image P1, the initial depth image P2, the initial depth image P3 and the initial depth image P4 in sequence into one frame of 360-degree panoramic depth image P1234 according to the fields of view of the four light receivers 24, for subsequent use of the depth information.
In the initial depth image obtained by each microprocessor 40 from the laser pulses emitted by the corresponding light emitter 22 and received by the corresponding light receiver 24, the depth information of each pixel is the distance between the target subject at the corresponding orientation and the light receiver 24 at that orientation. That is, in the initial depth image P1 the depth information of each pixel is the distance between the target subject at the first orientation and the light receiver 24a; in the initial depth image P2 it is the distance between the target subject at the second orientation and the light receiver 24b; in the initial depth image P3 it is the distance between the target subject at the third orientation and the light receiver 24c; and in the initial depth image P4 it is the distance between the target subject at the fourth orientation and the light receiver 24d. In the process of stitching the plurality of initial depth images of the plurality of orientations into one frame of 360-degree panoramic depth image, the depth information of each pixel in each initial depth image must first be converted into unified depth information, which represents the distance of each target subject at each orientation from one common reference position. After the depth information has been converted into unified depth information, the application processor 50 can conveniently stitch the initial depth images according to the unified depth information.
Specifically, a reference coordinate system is selected; it may be the image coordinate system of the light receiver 24 at one of the orientations, or some other coordinate system may be chosen as the reference coordinate system. Taking Fig. 5 as an example, the coordinate system xo-yo-zo is the reference coordinate system; the coordinate system xa-ya-za shown in Fig. 5 is the image coordinate system of the light receiver 24a, the coordinate system xb-yb-zb is that of the light receiver 24b, the coordinate system xc-yc-zc is that of the light receiver 24c, and the coordinate system xd-yd-zd is that of the light receiver 24d. The application processor 50 converts the depth information of each pixel in the initial depth image P1 into unified depth information according to the rotation matrix and translation matrix between the coordinate system xa-ya-za and the reference coordinate system xo-yo-zo; converts the depth information of each pixel in the initial depth image P2 into unified depth information according to the rotation matrix and translation matrix between the coordinate system xb-yb-zb and the reference coordinate system xo-yo-zo; converts the depth information of each pixel in the initial depth image P3 into unified depth information according to the rotation matrix and translation matrix between the coordinate system xc-yc-zc and the reference coordinate system xo-yo-zo; and converts the depth information of each pixel in the initial depth image P4 into unified depth information according to the rotation matrix and translation matrix between the coordinate system xd-yd-zd and the reference coordinate system xo-yo-zo.
After the depth information conversion is completed, the plurality of initial depth images all lie under one unified reference coordinate system, and each pixel of each initial depth image corresponds to a coordinate (xo, yo, zo); the stitching of the initial depth images can then be done by coordinate matching. For example, suppose a pixel Pa in the initial depth image P1 has the coordinate (xo1, yo1, zo1) and a pixel Pb in the initial depth image P2 also has the coordinate (xo1, yo1, zo1). Since Pa and Pb have the same coordinate value under the current reference coordinate system, the pixel Pa and the pixel Pb are in fact the same point, and when the initial depth image P1 and the initial depth image P2 are stitched the pixel Pa must coincide with the pixel Pb. In this way the application processor 50 can stitch the plurality of initial depth images through the matching relationship of their coordinates and obtain the 360-degree panoramic depth image.
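The patent specifies only that rotation and translation matrices between each receiver's image coordinate system and the reference coordinate system are used; the pinhole back-projection (intrinsics fx, fy, cx, cy) and the tolerance-based merge in the sketch below are assumptions added for illustration:

```python
import numpy as np

def to_reference_frame(depth: np.ndarray, fx: float, fy: float,
                       cx: float, cy: float,
                       R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Return an (N, 3) cloud of unified coordinates (xo, yo, zo)."""
    v, u = np.indices(depth.shape)            # pixel rows and columns
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx             # back-project with assumed
    y = (v.ravel() - cy) * z / fy             # pinhole intrinsics
    cam_points = np.stack([x, y, z], axis=1)  # receiver's own frame
    return cam_points @ R.T + t               # rotate/translate to xo-yo-zo

def stitch(clouds, tol=1e-3):
    """Merge clouds; points whose coordinates match within tol coincide."""
    merged = np.concatenate(clouds, axis=0)
    return np.unique(np.round(merged / tol) * tol, axis=0)

# Example with placeholder calibration (identity rotation, zero offset):
R, t = np.eye(3), np.zeros(3)
cloud = to_reference_frame(np.full((4, 4), 2.0), 1.0, 1.0, 2.0, 2.0, R, t)
```

The tolerance parameter mirrors the error-limit discussion that follows: matching is only as precise as the coordinate accuracy allows.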
It should be noted that stitching initial depth images based on the matching relationship of coordinates requires the resolution of the initial depth images to exceed a preset resolution. It can be understood that if the resolution of the initial depth images is low, the accuracy of the coordinates (xo, yo, zo) is correspondingly low; matching directly by coordinates may then yield a Pa and a Pb that do not actually coincide but differ by an offset whose value exceeds the error limit. If the resolution of the images is high, the accuracy of the coordinates (xo, yo, zo) is correspondingly high; when matching directly by coordinates, even if Pa and Pb do not exactly coincide and differ by an offset, the value of the offset stays below the error limit, that is, within the permitted error range, and does not materially affect the stitching of the initial depth images.
It can be understood that subsequent embodiments may use the manner described above to stitch or synthesize two or more initial depth images; this is not illustrated again case by case.
The application processor 50 may also synthesize the plurality of initial depth images with the corresponding plurality of visible-light images into three-dimensional scene images to be displayed for the user to view. For example, let the plurality of visible-light images be the visible-light images V1, V2, V3 and V4. The application processor 50 synthesizes the initial depth image P1 with the visible-light image V1, the initial depth image P2 with the visible-light image V2, the initial depth image P3 with the visible-light image V3 and the initial depth image P4 with the visible-light image V4, and then stitches the four synthesized images to obtain one frame of 360-degree three-dimensional scene image. Alternatively, the application processor 50 first stitches the initial depth images P1, P2, P3 and P4 into one frame of 360-degree panoramic depth image and stitches the visible-light images V1, V2, V3 and V4 into one frame of 360-degree panoramic visible-light image, and then synthesizes the panoramic depth image with the panoramic visible-light image into a 360-degree three-dimensional scene image.
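The two composition orders can be expressed compactly; in the sketch below, combine() and stitch360() are hypothetical stand-ins for depth-plus-texture fusion and panoramic stitching, and only the order of operations is taken from the text:

```python
def combine(depth, color):
    return (depth, color)   # stand-in for fusing depth with a texture

def stitch360(parts):
    return tuple(parts)     # stand-in for stitching the four orientations

def compose_then_stitch(depths, colors):   # per-orientation fusion first
    return stitch360(combine(d, c) for d, c in zip(depths, colors))

def stitch_then_compose(depths, colors):   # panoramas first, fuse once
    return combine(stitch360(depths), stitch360(colors))
```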
Referring to Fig. 6, in one example the application processor 50 is configured to identify the target subject according to the plurality of initial depth images obtained by the plurality of microprocessors 40 and the plurality of scene images collected by the plurality of camera assemblies 30.
Specifically, when the scene images are infrared-light images, the plurality of infrared-light images may be the infrared-light images I1, I2, I3 and I4. The application processor 50 identifies the target subject at the first orientation according to the initial depth image P1 and the infrared-light image I1, the target subject at the second orientation according to the initial depth image P2 and the infrared-light image I2, the target subject at the third orientation according to the initial depth image P3 and the infrared-light image I3, and the target subject at the fourth orientation according to the initial depth image P4 and the infrared-light image I4. When the scene images are visible-light images, the plurality of visible-light images are the visible-light images V1, V2, V3 and V4, and the application processor 50 identifies the target subject at the first orientation according to the initial depth image P1 and the visible-light image V1, the target subject at the second orientation according to the initial depth image P2 and the visible-light image V2, the target subject at the third orientation according to the initial depth image P3 and the visible-light image V3, and the target subject at the fourth orientation according to the initial depth image P4 and the visible-light image V4.
When identifying the target subject means performing face recognition, the application processor 50 achieves higher accuracy by using infrared-light images as the scene images. The process by which the application processor 50 performs face recognition according to the initial depth image and the infrared-light image may be as follows:
First, face detection is performed according to the infrared-light image to determine a target face region. Since the infrared-light image includes the detailed information of the scene, face detection can be performed on the infrared-light image once it has been acquired, to detect whether the infrared-light image contains a face. If the infrared-light image contains a face, the target face region where the face is located is extracted from the infrared-light image.
Then, liveness detection is performed on the target face region according to the initial depth image. Since each initial depth image corresponds to an infrared-light image and contains the depth information of the corresponding infrared-light image, the depth information corresponding to the target face region can be obtained from the initial depth image. Furthermore, since a living face is stereoscopic while a face displayed in, for example, a picture or on a screen is flat, whether the target face region is stereoscopic or flat can be judged from the depth information of the acquired target face region, thereby performing liveness detection on the target face region.
If the liveness detection succeeds, the target face attribute parameters corresponding to the target face region are obtained, and face matching is performed on the target face region in the infrared-light image according to the target face attribute parameters to obtain a face matching result. The target face attribute parameters are parameters that can characterize the attributes of the target face; the target face can be identified and matched according to them. The target face attribute parameters include, but are not limited to, face deflection angle, face luminance parameters, facial-feature parameters, skin quality parameters and geometric feature parameters. The electronic device 100 may pre-store face attribute parameters for matching. After the target face attribute parameters are obtained, they can be compared with the pre-stored face attribute parameters; if they match, the face recognition passes.
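A sketch of this three-step flow (detect on the IR image, liveness-check with depth, then attribute matching); the detector, the relief threshold and the toy "attribute" are all stand-ins for the unspecified algorithms, not the patent's method:

```python
import numpy as np

def detect_face(ir_image):
    """Stand-in detector: return the face region, or None if no face."""
    h, w = ir_image.shape
    return (slice(h // 4, 3 * h // 4), slice(w // 4, 3 * w // 4))

def is_live(depth_region, min_relief_m=0.005):
    """A living face is stereoscopic; a photo or screen is nearly flat."""
    return float(np.ptp(depth_region)) >= min_relief_m  # assumed threshold

def recognize(ir_image, depth_image, stored_attribute, tol=10.0):
    region = detect_face(ir_image)               # step 1: detection on IR
    if region is None:
        return False
    if not is_live(depth_image[region]):         # step 2: depth liveness
        return False
    attribute = float(ir_image[region].mean())   # step 3: toy attribute
    return abs(attribute - stored_attribute) < tol
```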
It should be pointed out that the specific process by which the application processor 50 performs face recognition according to the initial depth image and the infrared-light image is not limited to the above; for example, the application processor 50 may also detect the facial contour with the assistance of the initial depth image to improve the face recognition precision, and so on. The process by which the application processor 50 performs face recognition according to the initial depth image and a visible-light image is similar to the process of performing face recognition according to the initial depth image and the infrared-light image and is not separately described here.
Referring to Fig. 6 and Fig. 7, the application processor 50 is further configured, when identification of the target subject according to the plurality of initial depth images and the plurality of scene images fails, to synthesize at least two initial depth images obtained by at least two microprocessors 40 into one frame of merged depth image according to the fields of view of the light receivers 24, to synthesize at least two scene images collected by at least two camera assemblies 30 into one frame of merged scene image, and to identify the target subject according to the merged depth image and the merged scene image.
Specifically, in the embodiment shown in Fig. 6 and Fig. 7, since the field of view of the light receiver 24 of each time-of-flight component 20 is limited, there may be a situation in which half of a face lies in the initial depth image P2 and the other half lies in the initial depth image P3. The application processor 50 synthesizes the initial depth image P2 and the initial depth image P3 into one frame of merged depth image P23, and correspondingly synthesizes the infrared-light images I2 and I3 (or the visible-light images V2 and V3) into one frame of merged scene image I23 (or V23), so as to identify the target subject again according to the merged depth image P23 and the merged scene image I23 (or V23).
It can be understood that when the target subject is distributed over more initial depth images at once, the application processor 50 may likewise synthesize those initial depth images (corresponding to different orientations) into one frame of merged depth image and synthesize the corresponding infrared-light images (corresponding to different orientations) or visible-light images (corresponding to different orientations) into one frame of merged scene image, so as to re-identify the target subject.
Referring to Fig. 8 and Fig. 9, in one example the application processor 50 is configured to judge the change of the distance between the target subject and the electronic device 100 according to the plurality of initial depth images.
Specifically, each light emitter 22 can emit laser light multiple times, and correspondingly each light receiver 24 can be exposed multiple times. For example, at a first moment the light emitters 22a, 22b, 22c and 22d emit laser light, the light receivers 24a, 24b, 24c and 24d are exposed, and the plurality of microprocessors 40 correspondingly obtain the initial depth images P11, P21, P31 and P41; at a second moment the light emitters 22a, 22b, 22c and 22d again emit laser light, the light receivers 24a, 24b, 24c and 24d are again exposed, and the plurality of microprocessors 40 correspondingly obtain the initial depth images P12, P22, P32 and P42. The application processor 50 then judges the change of the distance between the target subject at the first orientation and the electronic device 100 according to the initial depth images P11 and P12, the change of the distance between the target subject at the second orientation and the electronic device 100 according to the initial depth images P21 and P22, the change of the distance between the target subject at the third orientation and the electronic device 100 according to the initial depth images P31 and P32, and the change of the distance between the target subject at the fourth orientation and the electronic device 100 according to the initial depth images P41 and P42.
It can be understood that, since the initial depth images contain the depth information of the target subject, the application processor 50 can judge the change of the distance between the target subject at the corresponding orientation and the electronic device 100 according to the depth information at multiple consecutive moments.
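A sketch of the per-orientation judgment: compare depth across consecutive frames. The patent only requires comparing depth information over time; using a low percentile as a robust proxy for the nearest target is an assumption:

```python
import numpy as np

def distance_trend(depth_t0: np.ndarray, depth_t1: np.ndarray) -> str:
    d0 = float(np.percentile(depth_t0, 5))  # assumed proxy for the
    d1 = float(np.percentile(depth_t1, 5))  # closest target in view
    if d1 < d0:
        return "decreasing"
    if d1 > d0:
        return "increasing"
    return "steady"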
Referring to Fig. 10, the application processor 50 is further configured, when judging the distance change according to the plurality of initial depth images fails, to synthesize at least two initial depth images obtained by at least two microprocessors 40 into one frame of merged depth image according to the fields of view of the light receivers 24, to perform this synthesis step continuously so as to obtain multiple consecutive frames of merged depth images, and to judge the distance change according to the multiple frames of merged depth images.
Specifically, in the embodiment shown in Fig. 10, since the field of view of the light receiver 24 of each time-of-flight component 20 is limited, there may be a situation in which half of a face lies in the initial depth image P21 and the other half lies in the initial depth image P31. The application processor 50 synthesizes the initial depth images P21 and P31 of the first moment into one frame of merged depth image P231, correspondingly synthesizes the initial depth images P22 and P32 of the second moment into one frame of merged depth image P232, and then judges the distance change again according to the two merged frames P231 and P232.
It can be understood that when the target subject is distributed over more initial depth images at once, the application processor 50 may likewise synthesize those initial depth images (corresponding to different orientations) into one frame of merged depth image and perform this synthesis step continuously over multiple moments.
Referring to Fig. 9, when it is judged according to the plurality of initial depth images that the distance change is a decrease of distance, or when it is judged according to the multiple frames of merged depth images that the distance change is a decrease of distance, the application processor 50 can raise the frame rate at which initial depth images used for judging the distance change are selected from the plurality of initial depth images transmitted by at least one microprocessor 40.
It can be understood that when the distance between the target subject and the electronic device 100 decreases, the electronic device 100 cannot know in advance whether the decrease poses a risk; therefore, the application processor 50 can raise the frame rate at which initial depth images used for judging the distance change are selected from the plurality of initial depth images transmitted by at least one microprocessor 40, so as to follow that distance change more closely. Specifically, when it is judged that the distance corresponding to some orientation decreases, the application processor 50 can raise the frame rate at which initial depth images used for judging the distance change are selected from the plurality of initial depth images transmitted by the microprocessor 40 of that orientation.
For example, at the first moment the plurality of microprocessors 40 respectively obtain the initial depth images P11, P21, P31 and P41; at the second moment they respectively obtain the initial depth images P12, P22, P32 and P42; at the third moment they respectively obtain the initial depth images P13, P23, P33 and P43; and at the fourth moment they respectively obtain the initial depth images P14, P24, P34 and P44.
Under normal circumstances, the application processor 50 selects the initial depth images P11 and P14 to judge the change of the distance between the target subject at the first orientation and the electronic device 100, the initial depth images P21 and P24 for the second orientation, the initial depth images P31 and P34 for the third orientation, and the initial depth images P41 and P44 for the fourth orientation. The frame rate at which the application processor 50 selects initial depth images for each orientation is thus one frame at an interval of two frames, that is, one frame is chosen out of every three.
When it is judged according to the initial depth images P11 and P14 that the distance corresponding to the first orientation decreases, the application processor 50 may instead select the initial depth images P11 and P13 to judge the change of the distance between the target subject at the first orientation and the electronic device 100. The frame rate at which the application processor 50 selects initial depth images for the first orientation becomes one frame at an interval of one frame, that is, one frame is chosen out of every two, while the frame rate for the other orientations remains unchanged: the application processor 50 still selects the initial depth images P21 and P24, the initial depth images P31 and P34, and the initial depth images P41 and P44 to judge the respective distance changes.
When it is judged according to the initial depth images P11 and P14 that the distance corresponding to the first orientation decreases and, at the same time, it is judged according to the initial depth images P21 and P24 that the distance corresponding to the second orientation decreases, the application processor 50 may select the initial depth images P11 and P13 to judge the change of the distance between the target subject at the first orientation and the electronic device 100, and the initial depth images P21 and P23 to judge the change of the distance between the target subject at the second orientation and the electronic device 100; the frame rate at which the application processor 50 selects initial depth images for the first and second orientations becomes one frame at an interval of one frame, that is, one frame out of every two. The frame rate for the other orientations remains unchanged: the application processor 50 still selects the initial depth images P31 and P34 to judge the change of the distance between the target subject at the third orientation and the electronic device 100, and the initial depth images P41 and P44 to judge the change of the distance between the target subject at the fourth orientation and the electronic device 100.
Of course, when it is judged that the distance corresponding to any one orientation decreases, the application processor 50 may also raise the frame rate at which initial depth images used for judging the distance change are selected from the plurality of initial depth images transmitted by every microprocessor 40. That is, when it is judged according to the initial depth images P11 and P14 that the distance between the target subject at the first orientation and the electronic device 100 decreases, the application processor 50 may select the initial depth images P11 and P13 to judge the change of the distance between the target subject at the first orientation and the electronic device 100, the initial depth images P21 and P23 for the second orientation, the initial depth images P31 and P33 for the third orientation, and the initial depth images P41 and P43 for the fourth orientation.
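The selection strides in this example can be written down directly; the sketch below mirrors the P11/P14 versus P11/P13 pattern above (the stride values are exactly those of the example, the function name is illustrative):

```python
def frames_to_compare(frames, distance_decreasing: bool):
    # Default: one frame out of every three (compare P11 with P14).
    # After a decrease is detected: one out of every two (P11 with P13).
    stride = 2 if distance_decreasing else 3
    return frames[0], frames[stride]

sequence = ["P11", "P12", "P13", "P14"]
print(frames_to_compare(sequence, False))  # ('P11', 'P14')
print(frames_to_compare(sequence, True))   # ('P11', 'P13')
```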
When the distance decreases, the application processor 50 may also judge the distance change in combination with the visible-light images or the infrared-light images. Specifically, the application processor 50 first identifies the target subject according to the visible-light image or the infrared-light image and then judges the distance change according to the initial depth images at multiple moments, so as to control the electronic device 100 to perform different operations for different target subjects and different distances. Alternatively, when the distance decreases, the microprocessor 40 may raise the frequency at which the corresponding light emitter 22 emits laser light and the corresponding light receiver 24 is exposed, and so on.
It should be noted that the electronic device 100 of this embodiment may also be used as an external terminal, fixedly or detachably mounted outside a portable electronic device such as a mobile phone, a tablet computer or a laptop computer, or fixedly mounted on a movable object such as a vehicle body (as shown in Fig. 7 and Fig. 8), an unmanned aerial vehicle body, a robot body or a ship body. In concrete use, when the electronic device 100 synthesizes one frame of panoramic depth image from the plurality of initial depth images as described above, the panoramic depth image can be used for three-dimensional modeling, simultaneous localization and mapping (SLAM), augmented reality display, and the like. When the electronic device 100 identifies the target subject as described above, it can be applied to face-recognition unlocking and payment on portable electronic devices, or to obstacle avoidance for robots, vehicles, unmanned aerial vehicles, ships, and so on. When the electronic device 100 judges the change of the distance between the target subject and the electronic device 100 as described above, it can be applied to automatic driving and object tracking of robots, vehicles, unmanned aerial vehicles, ships, and so on.
Referring to Fig. 2 and Fig. 11, the embodiments of the present application also provide a mobile platform 300. The mobile platform 300 includes a body 10 and a plurality of time-of-flight components 20 disposed on the body 10. The plurality of time-of-flight components 20 are located at a plurality of different orientations of the body 10. Each time-of-flight component 20 includes a light emitter 22 and a light receiver 24. The light emitter 22 is configured to emit laser pulses outward from the body 10, and the light receiver 24 is configured to receive the laser pulses emitted by the corresponding light emitter 22 and reflected by a target subject. The light emitters 22 of the plurality of time-of-flight components 20 emit laser light simultaneously, and the light receivers 24 of the plurality of time-of-flight components 20 are exposed simultaneously, so as to obtain a panoramic depth image.
Specifically, the body 10 can be a vehicle body, an unmanned aerial vehicle body, a robot body, or a ship body.
Referring to Figure 11, when the body 10 is a vehicle body, the number of time-of-flight components 20 is four, and the four time-of-flight components 20 are respectively mounted on four sides of the vehicle body, for example at the head, the tail, the left side, and the right side of the vehicle body. The vehicle body can carry the multiple time-of-flight components 20 as it moves along the road, to construct a 360-degree panoramic depth image of the travel route for use as a reference map and the like; or to obtain initial depth images of multiple different orientations, so as to identify the target subject and judge the change in distance between the target subject and the mobile platform 300, thereby controlling the vehicle body to accelerate, decelerate, stop, detour, and so on, realizing driverless obstacle avoidance. For example, while the vehicle is moving on the road, if it recognizes that the distance between the target subject and the vehicle is decreasing and the target subject is a pit in the road, the vehicle decelerates with a first acceleration; if it recognizes that the distance between the target subject and the vehicle is decreasing and the target subject is a person, the vehicle decelerates with a second acceleration, where the absolute value of the first acceleration is less than the absolute value of the second acceleration. In this way, executing different operations for different target subjects when the distance decreases can make the vehicle more intelligent.
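The pit-versus-person response can be pictured with the following sketch; the subject labels and the numeric accelerations are illustrative assumptions, with only the relation that the first deceleration is gentler than the second taken from the text above.

FIRST_ACCEL = -1.5   # m/s^2, gentle deceleration for a pit (illustrative)
SECOND_ACCEL = -4.0  # m/s^2, harder deceleration for a person (illustrative)

def commanded_acceleration(subject, distance_decreasing):
    # Map an approaching subject's class to a deceleration command; the
    # patent fixes only |FIRST_ACCEL| < |SECOND_ACCEL|.
    if not distance_decreasing:
        return 0.0
    if subject == "person":
        return SECOND_ACCEL
    return FIRST_ACCEL  # pit, or a cautious default for other subjects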
Referring to Figure 12, when the body 10 is an unmanned aerial vehicle body, the number of time-of-flight components 20 is four, and the four time-of-flight components 20 are respectively mounted on the front, rear, left, and right sides of the unmanned aerial vehicle body, or on the front, rear, left, and right sides of a gimbal carried by the unmanned aerial vehicle body. The unmanned aerial vehicle body can carry the multiple time-of-flight components 20 in flight for aerial photography, inspection, and the like; the unmanned aerial vehicle can return the acquired panoramic depth image to a ground control terminal, or directly perform SLAM. The multiple time-of-flight components 20 also allow the unmanned aerial vehicle to accelerate, decelerate, stop, avoid obstacles, and track objects.
Referring to Figure 13, when the body 10 is a robot body, for example that of a sweeping robot, the number of time-of-flight components 20 is four, and the four time-of-flight components 20 are respectively mounted on the front, rear, left, and right sides of the robot body. The robot body can carry the multiple time-of-flight components 20 as it moves around the home, obtaining initial depth images of multiple different orientations so as to identify the target subject and judge the change in distance between the target subject and the mobile platform 300, thereby controlling the movement of the robot body and realizing garbage removal, obstacle avoidance, and the like.
Referring to Figure 14, when the body 10 is a ship body, the number of time-of-flight components 20 is four, and the four time-of-flight components 20 are respectively mounted on the front, rear, left, and right sides of the ship body. The ship body can carry the time-of-flight components 20 as it moves, obtaining initial depth images of multiple different orientations, so as to identify the target subject accurately in adverse environments (for example, in fog) and judge the change in distance between the target subject and the mobile platform 300, improving navigation safety and the like.
The mobile platform 300 of the embodiments of the present application is a platform that can move independently, with the multiple time-of-flight components 20 mounted on the body 10 of the mobile platform 300 to obtain panoramic depth images. By contrast, the body of the electronic equipment 100 generally cannot move by itself; the electronic equipment 100 can instead be mounted on a device that can move, similar to the mobile platform 300, thereby helping that device obtain panoramic depth images.
It should be pointed out that the above explanations of the body 10, the time-of-flight components 20, the camera assemblies 30, the microprocessors 40, and the application processor 50 of the electronic equipment 100 are equally applicable to the mobile platform 300 of the embodiments of the present application, and are not repeated here.
Although the embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and should not be construed as limiting the present application. Those skilled in the art can change, modify, replace, and vary the above embodiments within the scope of the present application, and the scope of the present application is defined by the claims and their equivalents.

Claims (12)

1. Electronic equipment, characterized in that the electronic equipment comprises:
a body; and
a plurality of time-of-flight components arranged on the body, the plurality of time-of-flight components being located at a plurality of different orientations of the body, each time-of-flight component comprising an optical transmitter and an optical receiver, the optical transmitter being configured to emit laser pulses outward from the body, and the optical receiver being configured to receive the laser pulses emitted by the corresponding optical transmitter and reflected by a target subject;
wherein the optical transmitters in the plurality of time-of-flight components emit laser light simultaneously, and the optical receivers in the plurality of time-of-flight components are exposed simultaneously, to obtain a panoramic depth image.
2. The electronic equipment according to claim 1, characterized in that the wavelengths of the laser pulses emitted by the optical transmitters of adjacent orientations are different.
3. The electronic equipment according to claim 1, characterized in that there are four time-of-flight components, and the field angle of each optical transmitter and of each optical receiver is any value from 80 degrees to 100 degrees.
4. The electronic equipment according to claim 1, characterized in that the wavelengths of the laser pulses emitted by the respective optical transmitters are different from one another.
5. The electronic equipment according to claim 1, characterized in that the electronic equipment further comprises an application processor and a plurality of microprocessors, each microprocessor corresponding to one time-of-flight component, and the plurality of microprocessors all being connected to the application processor; each microprocessor is configured to obtain an initial depth image from the laser pulses emitted by the optical transmitter of the corresponding time-of-flight component and the laser pulses received by the optical receiver, and to transmit the initial depth image to the application processor; and the application processor is configured to synthesize the plurality of initial depth images obtained by the plurality of microprocessors into one frame of the panoramic depth image according to the field angle of the optical receivers.
6. The electronic equipment according to claim 1, characterized in that the electronic equipment further comprises an application processor and a plurality of microprocessors, each microprocessor corresponding to one time-of-flight component, and the plurality of microprocessors all being connected to the application processor; each microprocessor is configured to obtain an initial depth image from the laser pulses emitted by the optical transmitter of the corresponding time-of-flight component and the laser pulses received by the optical receiver, and to transmit the initial depth image to the application processor;
the electronic equipment further comprises a plurality of camera assemblies arranged on the body, each camera assembly corresponding to one time-of-flight component, the plurality of camera assemblies all being connected to the application processor, and each camera assembly being configured to capture a scene image of the target subject and output it to the application processor; and
the application processor is configured to identify the target subject from the plurality of initial depth images obtained by the plurality of microprocessors and the plurality of scene images captured by the plurality of camera assemblies.
7. The electronic equipment according to claim 6, characterized in that the application processor is further configured to, when identifying the target subject from the plurality of initial depth images and the plurality of scene images fails, synthesize at least two initial depth images obtained by at least two microprocessors into one frame of merged depth image according to the field angle of the optical receivers, synthesize at least two scene images captured by at least two camera assemblies into one frame of merged scene image, and identify the target subject from the merged depth image and the merged scene image.
8. The electronic equipment according to claim 1, characterized in that the electronic equipment further comprises an application processor and a plurality of microprocessors, each microprocessor corresponding to one time-of-flight component, and the plurality of microprocessors all being connected to the application processor; each microprocessor is configured to obtain multiple initial depth images from the laser pulses emitted multiple times by the optical transmitter of the corresponding time-of-flight component and the laser pulses received multiple times by the optical receiver, and to transmit them to the application processor; and the application processor is configured to judge the change in distance between the target subject and the electronic equipment from the multiple initial depth images.
9. The electronic equipment according to claim 8, characterized in that the application processor is further configured to, when judging the distance change from the multiple initial depth images fails, synthesize at least two initial depth images obtained by at least two microprocessors into one frame of merged depth image according to the field angle of the optical receivers; the application processor continuously executes this synthesis step to obtain multiple consecutive frames of merged depth images, and judges the distance change from the multiple frames of merged depth images.
10. The electronic equipment according to claim 8 or 9, characterized in that the application processor is further configured to, when judging that the distance change is a distance decrease, increase the frame rate at which the initial depth images used to judge the distance change are selected from the multiple initial depth images transmitted by at least one of the microprocessors.
11. A mobile platform, characterized in that the mobile platform comprises:
a body; and
a plurality of time-of-flight components arranged on the body, the plurality of time-of-flight components being located at a plurality of different orientations of the body, each time-of-flight component comprising an optical transmitter and an optical receiver, the optical transmitter being configured to emit laser pulses outward from the body, and the optical receiver being configured to receive the laser pulses emitted by the corresponding optical transmitter and reflected by a target subject;
wherein the optical transmitters in the plurality of time-of-flight components emit laser light simultaneously, and the optical receivers in the plurality of time-of-flight components are exposed simultaneously, to obtain a panoramic depth image.
12. The mobile platform according to claim 11, characterized in that the body is a vehicle body, an unmanned aerial vehicle body, a robot body, or a ship body.
CN201910007544.3A 2019-01-04 2019-01-04 Electronic equipment and mobile platform Active CN109660731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910007544.3A CN109660731B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Publications (2)

Publication Number Publication Date
CN109660731A true CN109660731A (en) 2019-04-19
CN109660731B CN109660731B (en) 2021-04-23

Family

ID=66118764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910007544.3A Active CN109660731B (en) 2019-01-04 2019-01-04 Electronic equipment and mobile platform

Country Status (1)

Country Link
CN (1) CN109660731B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100014780A1 (en) * 2008-07-16 2010-01-21 Kalayeh Hooshmand M Image stitching and related method therefor
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
CN102129550A (en) * 2011-02-17 2011-07-20 华南理工大学 Scene perception method
US9653874B1 (en) * 2011-04-14 2017-05-16 William J. Asprey Trichel pulse energy devices
US20180139431A1 (en) * 2012-02-24 2018-05-17 Matterport, Inc. Capturing and aligning panoramic image and depth data
CN106461783A (en) * 2014-06-20 2017-02-22 高通股份有限公司 Automatic multiple depth cameras synchronization using time sharing
CN108541304A (en) * 2015-04-29 2018-09-14 苹果公司 Flight time depth map with flexible scan pattern
CN106371281A (en) * 2016-11-02 2017-02-01 辽宁中蓝电子科技有限公司 Multi-module 360-degree space scanning and positioning 3D camera based on structured light
CN108122191A (en) * 2016-11-29 2018-06-05 成都观界创宇科技有限公司 Fish eye images are spliced into the method and device of panoramic picture and panoramic video
CN108471487A (en) * 2017-02-23 2018-08-31 钰立微电子股份有限公司 Generate the image device and associated picture device of panoramic range image
CN107263480A (en) * 2017-07-21 2017-10-20 深圳市萨斯智能科技有限公司 A kind of robot manipulation's method and robot
CN107742296A (en) * 2017-09-11 2018-02-27 广东欧珀移动通信有限公司 Dynamic image generation method and electronic installation
CN108200315A (en) * 2017-12-29 2018-06-22 合肥泰禾光电科技股份有限公司 A kind of depth camera and depth camera system
CN108616703A (en) * 2018-04-23 2018-10-02 Oppo广东移动通信有限公司 Electronic device and its control method, computer equipment and readable storage medium storing program for executing
CN108873222A (en) * 2018-08-22 2018-11-23 Oppo广东移动通信有限公司 Laser projection device, TOF depth camera and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110673647A (en) * 2019-11-07 2020-01-10 深圳市道通智能航空技术有限公司 Omnidirectional obstacle avoidance method and unmanned aerial vehicle
CN110673647B (en) * 2019-11-07 2022-05-03 深圳市道通智能航空技术股份有限公司 Omnidirectional obstacle avoidance method and unmanned aerial vehicle
CN112087575A (en) * 2020-08-24 2020-12-15 广州启量信息科技有限公司 Virtual camera control method
CN112087575B (en) * 2020-08-24 2022-03-08 广州启量信息科技有限公司 Virtual camera control method

Also Published As

Publication number Publication date
CN109660731B (en) 2021-04-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant