CN111505836B - Electronic device for three-dimensional imaging - Google Patents

Electronic device for three-dimensional imaging

Publication number
CN111505836B
CN111505836B (application CN202010606925.6A)
Authority
CN
China
Prior art keywords: sub, image, power, target object, initial
Prior art date
Legal status
Active
Application number
CN202010606925.6A
Other languages
Chinese (zh)
Other versions
CN111505836A (en)
Inventor
梁栋
张�成
李天磊
刘嵩
罗志通
Current Assignee
Vertilite Co Ltd
Original Assignee
Vertilite Co Ltd
Priority date
Filing date
Publication date
Application filed by Vertilite Co Ltd
Priority to CN202010606925.6A
Publication of CN111505836A
Application granted
Publication of CN111505836B
Status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 30/00 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B 30/50 — Optical systems or apparatus for producing three-dimensional [3D] effects in which the image is built up from image elements distributed over a 3D volume, e.g. voxels

Abstract

The invention provides an electronic device for three-dimensional imaging, comprising: a light emitting end including a light emitter and a first actuator, the first actuator carrying a first lens group and a diffractive optical element; an outgoing light beam emitted by the light emitter passes through the first lens group and the diffractive optical element and irradiates a target object, and the target object reflects it to form a reflected light beam; a light receiving end comprising a receiver and a second actuator, a second lens group being arranged on the second actuator, the reflected light beam entering the receiver through the second lens group, the receiver forming a sensing signal from the reflected light beam; an analysis end connected with the light receiving end, which receives the sensing signal and generates initial depth data; and a processing end connected with the analysis end, which forms an initial three-dimensional image of the target object from the initial depth data. The electronic device for three-dimensional imaging provided by the invention can locally optimize the initial three-dimensional image.

Description

Electronic device for three-dimensional imaging
Technical Field
The invention relates to the technical field of three-dimensional imaging, and in particular to an electronic device for three-dimensional imaging.
Background
Vision is the most direct and important way for humans to observe and understand the world. Human vision can perceive not only the brightness, color, texture and motion of an object's surface, but also the shape, spatial extent and spatial position (depth and distance) of the object. A three-dimensional depth perception device (3D Depth Perception Device) is a new type of stereoscopic vision sensor that can acquire high-precision, high-resolution depth map information (distance information) in real time for real-time identification, motion capture and scene perception of three-dimensional images. Today the virtual world is approaching the real world ever more closely, and human-computer interaction is becoming more natural, intuitive and immersive. As a portal device for interaction between the real physical world and the virtual network world, the three-dimensional depth perception device (RGB + Depth) may in the future replace the traditional RGB camera and become a ubiquitous device, giving machines and intelligent devices a 3D visual perception capability similar to that of human eyes and facilitating natural interaction between humans and machines, virtual interaction between humans and the network world, and even interaction between machines. At present, the further development of industries such as unmanned aerial vehicles, 3D printing, robots, virtual reality headsets, smartphones, smart homes, face recognition payment and intelligent monitoring requires solving difficult problems such as environmental perception, natural human-machine interaction, obstacle avoidance, 3D scanning and accurate identification.
Three-dimensional depth perception based on structured light coding can acquire depth information accurately; the acquired depth map information is stable and reliable, is not affected by ambient light, and requires only a simple stereo matching algorithm. In some applications, a user also needs to locally optimize a region of interest in a depth map to obtain depth map information with higher local resolution, so that the virtual reality part better matches the real object space.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides an electronic device for three-dimensional imaging which can optimize a local region in a depth map to obtain depth map information with higher resolution.
To achieve the above and other objects, the present invention provides an electronic device for three-dimensional imaging, comprising:
a light emitting end including a light emitter and a first actuator; the first actuator is provided with a first lens group and a diffraction optical element; an emergent light beam emitted by the light emitter is irradiated on a target object through the first lens group and the diffractive optical element, and the emergent light beam is reflected by the target object to form a reflected light beam;
a light receiving end comprising a receiver and a second actuator; a second lens group is arranged on the second actuator, the reflected light beam enters the receiver through the second lens group, and the receiver forms a sensing signal according to the reflected light beam;
an analysis end connected with the light receiving end, which receives the sensing signal and generates initial depth data;
the processing end is connected with the analysis end and forms an initial three-dimensional image of the target object according to the initial depth data;
the display end is connected with the processing end and used for displaying the initial three-dimensional image, and the display end divides the initial three-dimensional image into a plurality of sub-images;
when a user selects one of the sub-images, the selected sub-image is re-modeled so that its spatial resolution is greater than that of the other sub-images.
Further, the electronic device further comprises a control end, and the control end controls the light emitting end and the light receiving end.
Further, when the user selects any one of the sub-images, the control end controls the outgoing beam to be focused on the area of the sub-image corresponding to the target object and/or controls the reflected beam to be focused on the area of the sub-image corresponding to the target object.
Further, the light emitter includes a plurality of vertical cavity surface emitting lasers that are independent of each other.
Further, a portion of the plurality of vertical cavity surface emitting lasers emits the outgoing beam while forming the initial three-dimensional image; when the user selects any one of the sub-images, the control end controls another part of the vertical cavity surface emitting lasers to emit the emergent light beams, and the emergent light beams irradiate the areas of the sub-images corresponding to the target object.
Further, the power of the vertical cavity surface emitting lasers includes a first power and a second power, and the first power is lower than the second power.
Further, when the initial three-dimensional image is formed, all the vertical cavity surface emitting lasers emit the outgoing light beam, and the power of the vertical cavity surface emitting lasers is the first power; when the user selects any one of the sub-images, the control end adjusts the power of some of the vertical cavity surface emitting lasers to the second power and controls the outgoing light beams to irradiate the area of the sub-image corresponding to the target object.
Further, when the initial three-dimensional image is formed, part of the vertical cavity surface emitting lasers emit the outgoing light beam, and the power of those vertical cavity surface emitting lasers is the first power; when the user selects any one of the sub-images, the control end adjusts the power of another part of the vertical cavity surface emitting lasers to the second power and controls the outgoing light beams to irradiate the area of the sub-image corresponding to the target object.
Further, when the initial three-dimensional image is formed, a first part of the vertical cavity surface emitting lasers emit the outgoing light beam, and the power of those vertical cavity surface emitting lasers is the first power; when the user selects any one of the sub-images, the control end adjusts the power of a second part of the vertical cavity surface emitting lasers to the second power and controls the outgoing light beams to irradiate the area of the sub-image corresponding to the target object; wherein the first part and the second part of the vertical cavity surface emitting lasers have an overlapping region.
Further, the control end adjusts the distance between the light emitter and the first lens group, the distance between the lenses in the first lens group and the distance between the first lens group and the diffractive optical element, so that the emergent light beam is focused on the area of the sub-image corresponding to the target object.
Further, the control end adjusts the distance between the receiver and the second lens group and the distance between the lenses in the second lens group, so that the reflected light beam is focused on the area of the sub-image corresponding to the target object.
Further, the first lens group includes at least two lenses, and the second lens group includes at least two lenses.
In summary, the present invention provides an electronic device for three-dimensional imaging. In use, an initial three-dimensional image is first formed and divided into a plurality of sub-images; when a user selects or touches any sub-image, that sub-image is three-dimensionally modeled again so that its spatial resolution is greater than that of the other sub-images, thereby optimizing the local image and obtaining depth map information with higher resolution.
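The flow summarized above can be sketched in Python. This is a hedged illustration only: the function name, the number of sub-images, and the resolution gain factor are assumptions for illustration, not values taken from the patent.

```python
# Hypothetical sketch: form an initial 3D image, split it into sub-images,
# and re-model the sub-image the user selects so that its spatial
# resolution exceeds that of the other sub-images.

def remodel_selected(n_subimages, selected, base_resolution, gain=2):
    """Per-sub-image spatial resolution after the selected sub-image is
    re-modeled; `gain` (illustrative) is the resolution improvement."""
    if not 0 <= selected < n_subimages:
        raise ValueError("selected sub-image out of range")
    return [base_resolution * gain if i == selected else base_resolution
            for i in range(n_subimages)]
```

For example, with nine sub-images and the first one selected, only the first entry of the returned list is raised above the base resolution, mirroring the claim that the selected sub-image's spatial resolution exceeds that of the others.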
Drawings
FIG. 1: Block diagram of the electronic device for three-dimensional imaging proposed in this embodiment.
FIG. 2: Schematic diagram of the light emitting end.
FIG. 3: Schematic diagram of the light emitter.
FIG. 4: Schematic diagram of the light receiving end.
FIG. 5: Schematic diagram of the light paths of the outgoing beam and the reflected beam.
FIG. 6: Schematic diagram of a three-dimensional sub-image formed by adjusting the light emitting end.
FIG. 7: Schematic diagram of a three-dimensional sub-image formed by increasing the number of lasers.
FIG. 8: Schematic diagram of the vertical cavity surface emitting lasers lit the first time.
FIG. 9: Schematic diagram of the vertical cavity surface emitting lasers lit the second time.
FIG. 10: Schematic diagram of a three-dimensional sub-image formed by increasing the power of the lasers.
FIG. 11: Schematic diagram of adjusting the laser power.
FIG. 12: Another schematic diagram of adjusting the laser power.
FIG. 13: Another schematic diagram of adjusting the laser power.
FIG. 14: Schematic diagram of forming two three-dimensional sub-images.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention.
It should be noted that the drawings provided in this embodiment only illustrate the basic idea of the invention. The drawings show only the components related to the invention, rather than the number, shape and size of the components in an actual implementation; in practice the type, quantity and proportion of the components may vary, and the component layout may be more complicated.
As shown in fig. 1, the present embodiment proposes an electronic device 100 for three-dimensional imaging. The electronic device 100 may be, but is not limited to, a smartphone; for example, it may also be a tablet terminal, a digital camera, a game machine, an electronic dictionary, a personal computer, a PDA (Personal Digital Assistant), or another portable terminal device capable of photographing.
As shown in fig. 1, in the present embodiment, the electronic device 100 includes a light emitting end 110, a light receiving end 120, an analyzing end 130, a processing end 140, a display end 150 and a control end 160. The light emitting end 110 emits an outgoing light beam to the target object 200, the outgoing light beam is reflected by the target object 200 to form a reflected light beam, and the light receiving end 120 receives the reflected light beam to form a sensing signal. The analysis end 130 generates initial depth data according to the sensing signal. The processing end 140 forms an initial three-dimensional image of the target object 200 according to the initial depth data, and the initial three-dimensional image is displayed on the display end 150.
As shown in fig. 2, in the present embodiment, the light emitting end 110 includes a light emitter 111, a first actuator 112, a first lens group 113, and a diffractive optical element 114. The first actuator 112 is disposed at the light exit of the light emitter 111. The first lens group 113 and the diffractive optical element 114 are sequentially disposed on the first actuator 112, and the first lens group 113 is located between the light emitter 111 and the diffractive optical element 114. In the present embodiment, the first lens group 113 includes a first lens 113a and a second lens 113b, a distance between the light emitter 111 and the first lens 113a is a first distance d1, a distance between the first lens 113a and the second lens 113b is a second distance d2, a distance between the second lens 113b and the diffractive optical element 114 is a third distance d3, a distance between the diffractive optical element 114 and the target object 200 is a fourth distance d4, and the fourth distance d4 may also be a distance between the electronic device 100 and the target object 200. In the present embodiment, the first lens 113a and the second lens 113b are double convex lenses. In some embodiments, the first lens group 113 may further include more lenses, for example, three, four or more lenses. The first lens 113a and the second lens 113b are made of, for example, glass or a resin material.
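To illustrate why moving the lenses (changing d1, d2, d3 with the first actuator) refocuses the outgoing beam, here is a rough Gaussian thin-lens sketch. The patent gives no optical formulas; the two-thin-lens model, the function names, and all numeric values are assumptions for illustration only.

```python
def thin_lens_image(u, f):
    """Gaussian thin-lens equation 1/u + 1/v = 1/f: an object at distance
    u in front of a lens of focal length f images at distance v behind it."""
    return u * f / (u - f)

def two_lens_focus(d1, d2, f1, f2):
    """Where the emitter (an object at distance d1 before the first lens)
    is re-imaged behind the second lens; changing d1 or d2 with the
    actuator moves this focal plane onto the selected target region."""
    v1 = thin_lens_image(d1, f1)  # image formed by the first lens
    u2 = d2 - v1                  # that image is the second lens's object
    return thin_lens_image(u2, f2)
```

With illustrative values d1 = 20, d2 = 30, f1 = 10, f2 = 5 (arbitrary units), the focal plane lands 10 units behind the second lens; shifting d1 moves it, which is the mechanism the actuator exploits.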
As shown in fig. 3, in the present embodiment, the light emitter 111 includes a plurality of lasers 111a which form, for example, a laser array. Fig. 3 shows the light emitter 111 as a laser array comprising, for example, 36 lasers 111a. Each laser 111a in the laser array is independent, so the number of lit lasers 111a can be adjusted; for example, one part of the lasers 111a can be activated first and another part activated later. The light emitter 111 is further connected to a light source circuit, which drives the light emitter 111 to emit an outgoing light beam. The light source circuit has a predetermined power: when the light emitter 111 operates normally, the operating power of the light source circuit is a predetermined value. The light source circuit can therefore be designed to meet different requirements, adjusting the output power of the light emitter 111 to meet different power demands. For example, the output power of the light emitter 111 may be adjusted by adjusting a circuit element in the light source circuit, such as a resistor or a capacitor. The light source circuit may also be designed to provide the light emitter 111 with different predetermined powers for different applications: in an electronic device for photographing people, a lower emitter power is required to reduce the effect of the light on the human body, and the light source circuit can be designed to provide the light emitter 111 with that lower power; when a distant object is photographed, a high-power light emitter 111 is required, and the light source circuit can be designed to provide the light emitter 111 with high power.
For example, when the photographed object occupies a small range at a short distance, the light source circuit may be designed to provide the light emitter 111 with a moderate power, so that resources are used efficiently: waste from an oversized power supply is avoided, as is the inaccurate depth image information that an undersized power supply would produce. In the present embodiment, the light emitter 111 has at least a first power and a second power, the second power being greater than the first power; for example, when a relatively weak current is supplied to the light emitter 111 it operates at the first power, and when a relatively strong current is supplied it operates at the second power. The laser 111a may be a vertical cavity surface emitting laser.
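The two drive levels described above can be sketched as a simple current-to-power mapping. The threshold value and the names are purely illustrative assumptions; the patent specifies only that a weaker current yields the first power and a stronger current the second.

```python
FIRST_POWER = "first power"
SECOND_POWER = "second power"

def emitter_power(drive_current_ma, threshold_ma=50.0):
    """Map the drive current supplied by the light source circuit to the
    emitter's power level: a relatively weak current yields the first
    power, a relatively strong current the second power. The 50 mA
    threshold is an illustrative assumption."""
    return SECOND_POWER if drive_current_ma > threshold_ma else FIRST_POWER
```

In a real driver the same effect would come from switching a resistor or capacitor in the light source circuit, as the paragraph above notes.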
As shown in fig. 4, in the present embodiment, the light receiving end 120 includes a receiver 121, a second actuator 122, and a second lens group 123. The second lens group 123 is disposed on the second actuator 122, and a reflected light beam formed by reflection from the target object 200 passes through the second lens group 123 into the receiver 121. The second lens group 123 includes two lenses, a third lens 123a and a fourth lens 123b, with the third lens 123a positioned between the receiver 121 and the fourth lens 123b. The distance between the receiver 121 and the third lens 123a is a fifth distance d5, the distance between the third lens 123a and the fourth lens 123b is a sixth distance d6, and the distance between the fourth lens 123b and the target object 200 is a seventh distance d7. In the present embodiment, since both the light emitting end 110 and the light receiving end 120 are disposed within the electronic device 100, the fourth distance d4 and the seventh distance d7 are substantially the same. The third lens 123a and the fourth lens 123b are, for example, biconvex lenses. In some embodiments, the second lens group 123 may include more lenses, for example three, four or more; the third lens 123a and the fourth lens 123b are made of, for example, glass or a resin material. In the present embodiment, the receiver 121 is, for example, a light intensity sensor such as a PD (photodiode) chip or an APD (avalanche photodiode) chip; the light intensity sensor receives the outgoing light beam emitted by the emission unit and reflected by the target, and converts the optical signal of the received reflected light beam into a sensing signal.
As shown in fig. 1, in the present embodiment, the analysis end 130 is connected to the light receiving end 120; after the light receiving end 120 forms the sensing signal, the analysis end 130 receives and processes it to form initial depth data. The analysis end 130 is, for example, a light sensing circuit. The processing end 140 is connected to the analysis end 130 and processes the initial depth data to form an initial three-dimensional image, which is displayed on the display end 150; the processing end 140 is, for example, a data image module. The display end 150 may divide the initial three-dimensional image into a plurality of sub-images. In the present embodiment, the display end 150 may be a display screen, and may be combined with a touch sensor to form a touch panel; the user can touch the display screen to perform a shooting operation. The display end 150 is further connected to a control end 160, and when the user touches the display end 150, the control end 160 can control the light emitting end 110 and the light receiving end 120 to perform the shooting operation.
Fig. 5 shows the paths of the outgoing light beam and the reflected light beam. In this embodiment, when the user touches the display terminal 150, the control terminal 160 controls the light emitting terminal 110 and the light receiving terminal 120 to perform the shooting operation: the control end 160 controls the light emitter 111 to emit an outgoing light beam, which passes through the first lens group 113 and the diffractive optical element 114 and reaches the target object 200; the target object 200 reflects it to form a reflected light beam, which passes through the second lens group 123 and is received by the receiver 121 to form a sensing signal.
As shown in fig. 1, 5 and 6, in the present embodiment, when the user uses the electronic device 100, the user touches the display terminal 150, the control terminal 160 controls the light emitting terminal 110, the light receiving terminal 120 performs a shooting operation, and simultaneously, an initial three-dimensional image 300 is formed through the analysis terminal 130 and the processing terminal 140, and the initial three-dimensional image 300 is displayed on the display terminal 150. It should be noted that, for clarity of illustration of the initial three-dimensional image 300, the initial three-dimensional image 300 is extracted, and the initial three-dimensional image 300 is actually located on the display terminal 150 of the electronic device 100. As can be seen in fig. 6, the initial three-dimensional image 300 is divided into a plurality of sub-images, and the division of the initial three-dimensional image 300 into nine sub-images is shown in fig. 6. In forming the initial three-dimensional image 300, the outgoing light beam emitted from the light emitting end 110 is focused on the entirety of the target object 200.
As shown in fig. 6, in the present embodiment, the initial three-dimensional image 300 is divided into a plurality of sub-images on the display terminal 150, for example, the initial three-dimensional image 300 is divided into nine sub-images, i.e., a first sub-image 301, a second sub-image 302, a third sub-image 303, a fourth sub-image 304, a fifth sub-image 305, a sixth sub-image 306, a seventh sub-image 307, an eighth sub-image 308, and a ninth sub-image 309. Of course, the display terminal 150 may also divide the initial three-dimensional image 300 into more sub-images, for example, into 16 sub-images.
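The division of the display into nine sub-images can be expressed as simple index arithmetic. The function name and the pixel dimensions below are illustrative assumptions; only the three-by-three (and optionally four-by-four) tiling comes from the text.

```python
def split_into_subimages(height, width, grid=3):
    """Partition a height x width image into grid x grid tiles, returned
    as ((row_start, row_end), (col_start, col_end)) pixel ranges, in the
    order first sub-image .. last sub-image."""
    tiles = []
    for i in range(grid):
        for j in range(grid):
            tiles.append(((i * height // grid, (i + 1) * height // grid),
                          (j * width // grid, (j + 1) * width // grid)))
    return tiles
```

With `grid=3` this yields the nine sub-images of fig. 6; with `grid=4` it yields the 16-sub-image variant mentioned above.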
As shown in figs. 1, 2 and 6, in the present embodiment, when the user selects the first sub-image 301, the first sub-image 301 is modeled again so that its spatial resolution is greater than that of the other sub-images. When the user selects the first sub-image 301, the control terminal 160 controls the light emitting terminal 110 to emit the outgoing light beam and focuses it on the first area 201 of the target object 200 corresponding to the first sub-image 301. Since the fourth distance d4 between the light emitting end 110 and the target object 200 is fixed, the first distance d1, the second distance d2 and the third distance d3 can be adjusted by the first actuator 112 so that the outgoing light beam is focused on the first area 201, with the spot of the outgoing beam impinging on the first area 201 at a minimum. When the outgoing beam is focused on the first area 201, the light reflected by the first area 201 is received by the light receiving end 120, and the processing end 140, via the analysis end 130, forms a three-dimensional sub-image 310 on the display end 150. Because the outgoing beam is focused on the first area 201, the spatial resolution of the three-dimensional sub-image 310 is greater than that of the initial three-dimensional image 300, i.e. greater than that of the other sub-images. In this embodiment, the three-dimensional sub-image 310 is thus obtained by adjusting the light emitting end 110 so that the outgoing light beam is focused on the first area 201.
As shown in figs. 4 and 6, in the present embodiment, since the light emitting end 110 and the light receiving end 120 are located in the electronic device 100, the fourth distance d4 and the seventh distance d7 are substantially the same. When the initial three-dimensional image 300 has been formed and the user selects the first sub-image 301, the electronic device 100 may also focus the reflected light beam on the first area 201 through the light receiving end 120, so that the spot of the reflected light beam received by the light receiving end 120 is minimized, and form the three-dimensional sub-image 310 on the display end 150 through the analysis end 130 and the processing end 140; the resolution of the three-dimensional sub-image 310 is then highest. In the present embodiment, the control end 160 adjusts the fifth distance d5 and the sixth distance d6 by means of the second actuator 122 so that the reflected light beam is focused on the first area 201.
As shown in figs. 4-6, in this embodiment, when the emitted light beam impinges on the object in the form of a point cloud, each light spot in the point cloud represents location information determined by the position of the corresponding pixel on the receiver 121. The smaller the spot of the reflected beam hitting the receiver 121, the fewer the pixels covered by the spot, and the more accurately the pixel position, and hence the spot position, is determined.
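The spot-size argument can be made concrete with a back-of-the-envelope count of the receiver pixels a spot covers. The circular-spot model, the function name, and the micron values are illustrative assumptions, not figures from the patent.

```python
import math

def pixels_covered(spot_diameter_um, pixel_pitch_um):
    """Approximate number of receiver pixels a circular spot covers:
    spot area divided by pixel area, rounded up. A smaller, well-focused
    spot covers fewer pixels, so its position is resolved more precisely."""
    spot_area = math.pi * (spot_diameter_um / 2.0) ** 2
    return max(1, math.ceil(spot_area / pixel_pitch_um ** 2))
```

For an assumed 10 um pixel pitch, a 10 um focused spot covers a single pixel, while a 40 um defocused spot spreads over more than a dozen, blurring the recovered spot position.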
As shown in fig. 6, in the present embodiment, when the user selects the first sub-image 301, the control terminal 160 may also control the light emitting terminal 110 and the light receiving terminal 120 simultaneously, so that the spatial resolution of the formed three-dimensional sub-image 310 is greater than the spatial resolution of the initial three-dimensional image 300.
As shown in figs. 7-8, in the present embodiment, when the user uses the electronic device 100, the initial three-dimensional image 300 is displayed on the display terminal 150 and divided into a plurality of sub-images, such as a first sub-image 301, a second sub-image 302, a third sub-image 303, a fourth sub-image 304, a fifth sub-image 305, a sixth sub-image 306, a seventh sub-image 307, an eighth sub-image 308, and a ninth sub-image 309. Since the lasers 111a in the light emitter 111 are independent of each other, when the user takes a picture with the electronic device 100, only part of the lasers 111a in the light emitter 111 operate to form the initial three-dimensional image 300. For example, as can be seen from fig. 8, the lasers 111a in the third to sixth rows of the light emitter 111 operate to form the initial three-dimensional image 300.
As shown in figs. 7 and 9, in the present embodiment, when the user selects the first sub-image 301 on the display terminal 150, the control terminal 160 may light or activate other lasers 111a, whose outgoing beams are focused on the area of the target object 200 corresponding to the first sub-image 301, i.e. on the first area 201. As can be seen from fig. 9, the control terminal 160 controls the lasers 111a in the third to sixth rows to emit outgoing beams to form the initial three-dimensional image 300; when the user selects the first sub-image 301, the control terminal 160 additionally lights the lasers 111a in the first and second rows, whose outgoing beams are focused on the first area 201. Because only the lasers 111a in the third to sixth rows operate when the initial three-dimensional image 300 is formed, the initial three-dimensional image 300 is sparse. When the user selects the first sub-image 301, the lasers 111a in the first and second rows are activated and their outgoing beams impinge on the first area 201, so the light spot density of the first area 201, and hence of the three-dimensional sub-image 310 formed on the display terminal 150, is increased; the spatial resolution of the three-dimensional sub-image 310 is therefore greater than that of the initial three-dimensional image 300, i.e. greater than that of the other sub-images.
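The row-wise activation of figs. 8-9 can be modeled as index sets over a 6 x 6 array (the 36-laser array size follows fig. 3; the function name and 1-based indexing are illustrative assumptions):

```python
def lit_lasers(rows, cols=6):
    """(row, col) indices of the lasers lit in the given 1-based rows
    of a laser array with `cols` columns."""
    return [(r, c) for r in rows for c in range(1, cols + 1)]

# Rows 3-6 form the sparse initial image; rows 1-2 are lit in addition
# when the user selects a sub-image, densifying the spots on that region.
initial_rows = lit_lasers(range(3, 7))
extra_rows = lit_lasers(range(1, 3))
```

Lighting the extra rows raises the spot count on the selected region from 24 to the full 36, which is the densification the paragraph above describes.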
As shown in figs. 1 and 10-11, in the present embodiment the laser 111a in the light emitter 111 has the first power and the second power, the first power being smaller than the second power. In fig. 11 (a), when the initial three-dimensional image 300 is formed, the control terminal 160 can control the power of all the lasers 111a to be the first power, for example by driving the light emitter 111 with a smaller current, so that the intensity of the light spots on the target object 200, and thus of the light spots in the initial three-dimensional image 300, is weaker. In fig. 11 (b), when the user selects the first sub-image 301, the control terminal 160 sets the power of part of the lasers 111a, namely those in the fifth and sixth rows, to the second power, for example by driving them with a larger current. The intensity of the outgoing beams emitted by the lasers 111a in the fifth and sixth rows is thereby increased, and these beams impinge on the area of the target object 200 corresponding to the first sub-image 301, i.e. on the first area 201. The intensity of the light spots in the first area 201 increases, so the spatial resolution of the three-dimensional sub-image 310 formed is greater than that of the initial three-dimensional image 300, i.e. greater than that of the other sub-images.
As shown in fig. 1, 10 and 12, in fig. 12 (a), when the initial three-dimensional image 300 is formed, the control terminal 160 may set a portion of the lasers 111a, namely those in the first to fourth rows, to the first power, for example by driving them with a smaller current, so that the outgoing beams of the first to fourth rows are weak, the light spots impinging on the target object 200 are weak, and the light spots of the initial three-dimensional image 300 are correspondingly weak. In fig. 12 (b), when the user selects the first sub-image 301, the control terminal 160 sets another portion of the lasers 111a, namely those in the fifth and sixth rows, to the second power, for example by driving them with a larger current. The intensity of the outgoing beams emitted by the lasers 111a in the fifth and sixth rows is thereby increased, and these beams impinge on the area of the target object 200 corresponding to the first sub-image 301, i.e. on the first area 201. The light spot intensity of the first area 201 is increased, so that the spatial resolution of the resulting three-dimensional sub-image 310 is greater than that of the initial three-dimensional image 300, i.e. greater than that of the other sub-images.
As shown in fig. 1, fig. 10 and fig. 13, in fig. 13 (a), when the initial three-dimensional image 300 is formed, the control terminal 160 may set a portion of the lasers 111a, namely those in the first to fourth rows, to the first power, for example by driving them with a smaller current, so that the outgoing beams of the first to fourth rows are weak, the light spots impinging on the target object 200 are weak, and the light spots of the initial three-dimensional image 300 are correspondingly weak. In fig. 13 (b), when the user selects the first sub-image 301, the control terminal 160 sets another portion of the lasers 111a, namely those in the fourth to sixth rows, to the second power, for example by driving them with a larger current. The intensity of the outgoing beams emitted by the lasers 111a in the fourth to sixth rows is thereby increased, and these beams impinge on the area of the target object 200 corresponding to the first sub-image 301, i.e. on the first area 201. The light spot intensity of the first area 201 is increased, so that the spatial resolution of the resulting three-dimensional sub-image 310 is greater than that of the initial three-dimensional image 300, i.e. greater than that of the other sub-images. As can be seen from fig. 13, the fourth row belongs to both portions: when the user selects the first sub-image 301, the control terminal 160 adjusts the power of the lasers 111a in the fourth row from the first power to the second power.
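The two-power drive scheme of figs. 11-13 can be sketched as follows. The numeric power values are assumptions (the patent only requires that the first power be smaller than the second), and the row assignment follows fig. 13, where the fourth row belongs to both portions:

```python
FIRST_POWER = 1.0    # assumed value producing a weak light spot
SECOND_POWER = 3.0   # assumed value producing a strong light spot

# Initial image (fig. 13 (a)): rows 1-4 are lit at the first power.
power = {row: FIRST_POWER for row in (1, 2, 3, 4)}

# User selects the first sub-image (fig. 13 (b)): rows 4-6 are driven at
# the second power; row 4 overlaps both portions and is simply re-driven.
for row in (4, 5, 6):
    power[row] = SECOND_POWER

assert power[4] == SECOND_POWER                         # overlapping row boosted
assert all(power[r] == FIRST_POWER for r in (1, 2, 3))  # rest stays weak
assert FIRST_POWER < SECOND_POWER
```

The fig. 11 and fig. 12 variants differ only in which rows appear in the initial dictionary and in the boosted tuple; the update loop itself is unchanged.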
As shown in fig. 14, in the present embodiment, when the initial three-dimensional image 300 is formed, the display terminal 150 divides it into, for example, a first sub-image 301, a second sub-image 302, a third sub-image 303, a fourth sub-image 304, a fifth sub-image 305, a sixth sub-image 306, a seventh sub-image 307, an eighth sub-image 308, and a ninth sub-image 309. When the user selects the first sub-image 301 and the fourth sub-image 304 on the display terminal 150, a first three-dimensional sub-image 311 and a second three-dimensional sub-image 312 are formed. The first three-dimensional sub-image 311 is formed by increasing the number of active lasers 111a: the additional outgoing beams impinge on the first area 201 of the target object corresponding to the first sub-image 301, increasing the spot density of the first area 201. The second three-dimensional sub-image 312 is formed by increasing the power of the lasers 111a: after the power is increased, the outgoing beams impinge on the fourth area 204 of the target object corresponding to the fourth sub-image 304, increasing the spot intensity of the fourth area 204. The spatial resolution of the first three-dimensional sub-image 311 and the second three-dimensional sub-image 312 is therefore greater than that of the initial three-dimensional image 300, i.e. greater than that of the other sub-images. Of course, in some embodiments, the user may select further sub-images, such as the fifth sub-image 305, the sixth sub-image 306, and the seventh sub-image 307, to form third, fourth, and fifth three-dimensional sub-images.
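The two enhancement paths of fig. 14 — more active lasers for the first sub-image, higher drive power for the fourth — can be combined in one sketch. The baseline values and gains below are assumptions for illustration only:

```python
# Sparse initial image: every area starts with the same spot density and
# the same weak spot intensity; the numbers are illustrative.
areas = {f"area_{i}": {"density": 4, "intensity": 1.0} for i in range(1, 10)}

EXTRA_SPOTS = 8      # assumed spots added by lighting extra laser rows
POWER_GAIN = 3.0     # assumed second-power / first-power ratio

# First sub-image -> first area 201: raise spot density (more lasers lit).
areas["area_1"]["density"] += EXTRA_SPOTS
# Fourth sub-image -> fourth area 204: raise spot intensity (higher power).
areas["area_4"]["intensity"] *= POWER_GAIN

# Both selected areas now resolve better than the unselected ones.
assert areas["area_1"]["density"] > areas["area_2"]["density"]
assert areas["area_4"]["intensity"] > areas["area_5"]["intensity"]
```

Either mechanism alone raises the resolution of its target area; selecting several sub-images simply applies one mechanism (or both) per selected area.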
As shown in fig. 14, in this embodiment, after the initial three-dimensional image 300 is formed, the user may also select further sub-images, and the outgoing beam, the reflected beam, or both may be focused on the area of the target object corresponding to each selected sub-image.
As shown in fig. 14, in the present embodiment, when the user selects any sub-image, the electronic device 100 performs three-dimensional modeling on the sub-image again, so that the spatial resolution of the sub-image selected by the user is greater than the spatial resolutions of the other sub-images.
As shown in fig. 1, in this embodiment, the three-dimensional imaging electronic device 100 may be used in video image capturing apparatus, such as a video camera or a camera. In later video image processing, special-effect props can then be inserted at any position in the video image through simple post-processing. This enhances the fidelity of the special effects on the one hand and, on the other, frees shooting from the constraints of the shooting location, greatly reducing production cost.
As shown in fig. 1, in this embodiment, the three-dimensional imaging electronic device 100 may be built into a household appliance, such as an air conditioner, a refrigerator, or a television, to change the way the user interacts with the appliance, for example by enabling gesture control of the appliance.
As shown in fig. 1, in the present embodiment, the three-dimensional imaging electronic device 100 may be assembled in a robot device, providing it with three-dimensional vision so that it can perform spatial positioning, path planning, obstacle avoidance, gesture manipulation, and similar functions and thus better serve humans; such robots include entertainment robots, medical robots, home robots, field robots, etc.
As shown in fig. 1, in this embodiment, the three-dimensional imaging electronic device 100 may be incorporated into security monitoring equipment, for example a surveillance device, to improve its analytical accuracy and add intelligent applications such as behavior analysis.
As shown in fig. 1, in the present embodiment, the three-dimensional imaging electronic device 100 may be applied to unmanned devices, such as driverless automobiles, unmanned aerial vehicles, and unmanned ships, providing the three-dimensional visual basis that supports unmanned operation.
As shown in fig. 1, in the present embodiment, the three-dimensional imaging electronic device 100 can be assembled in a medical device, such as an endoscope or an enteroscope, so that the medical device can observe a human organ in three dimensions and obtain more comprehensive information about it.
In summary, the present invention provides a three-dimensional imaging electronic device. In use, an initial three-dimensional image is first formed and divided into a plurality of sub-images; when a user selects or touches any sub-image, that sub-image is three-dimensionally modeled again, so that its spatial resolution is greater than that of the other sub-images.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the inventive concept, it also covers other technical solutions formed by any combination of the above technical features or their equivalents, for example, solutions formed by replacing the above features with (but not limited to) technical features of similar function disclosed in the present application.
Technical features other than those described in the specification are known to those skilled in the art and, in order to highlight the innovative features of the present invention, are not described herein in detail.

Claims (9)

1. A three-dimensional imaging electronic device, comprising:
a light emitting end comprising a light emitter and a first actuator, wherein a first lens group and a diffractive optical element are arranged on the first actuator; an emergent light beam emitted by the light emitter irradiates a target object through the first lens group and the diffractive optical element, and the emergent light beam is reflected by the target object to form a reflected light beam;
a light receiving end comprising a receiver and a second actuator, wherein a second lens group is arranged on the second actuator; the reflected light beam enters the receiver through the second lens group, and the receiver forms a sensing signal according to the reflected light beam;
an analysis end connected with the light receiving end, wherein the analysis end receives the sensing signal and generates initial depth data;
a processing end connected with the analysis end, wherein the processing end forms an initial three-dimensional image of the target object according to the initial depth data;
a display end connected with the processing end for displaying the initial three-dimensional image, wherein the display end divides the initial three-dimensional image into a plurality of sub-images;
wherein, when a user selects one of the sub-images, the spatial resolution of the selected sub-image is greater than that of the other sub-images;
the device further comprises a control end, wherein the control end controls the light emitting end and the light receiving end;
when the user selects any sub-image, the control end controls the emergent light beam to focus on the area of the target object corresponding to that sub-image and/or controls the reflected light beam to focus on the area of the target object corresponding to that sub-image.
2. The electronic device of claim 1, wherein the light emitter comprises a plurality of vertical cavity surface emitting lasers that are independent of each other.
3. The electronic device of claim 2, wherein a portion of the vertical cavity surface emitting lasers emit the emergent light beams when the initial three-dimensional image is formed; and when the user selects any one of the sub-images, the control end controls another portion of the vertical cavity surface emitting lasers to emit the emergent light beams, which irradiate the area of the target object corresponding to the sub-image.
4. The electronic device of claim 2, wherein each vertical cavity surface emitting laser has a first power and a second power, the first power being less than the second power.
5. The electronic device of claim 4, wherein, when the initial three-dimensional image is formed, all of the vertical cavity surface emitting lasers emit the emergent light beams at the first power; and when the user selects any one of the sub-images, the control end adjusts the power of a portion of the vertical cavity surface emitting lasers to the second power and controls the emergent light beams to irradiate the area of the target object corresponding to the sub-image.
6. The electronic device of claim 4, wherein, when the initial three-dimensional image is formed, a portion of the vertical cavity surface emitting lasers emit the emergent light beams at the first power; and when the user selects any one of the sub-images, the control end adjusts the power of another portion of the vertical cavity surface emitting lasers to the second power and controls the emergent light beams to irradiate the area of the target object corresponding to the sub-image.
7. The electronic device of claim 4, wherein, when the initial three-dimensional image is formed, a first portion of the vertical cavity surface emitting lasers emit the emergent light beams at the first power; when the user selects any one of the sub-images, the control end adjusts the power of a second portion of the vertical cavity surface emitting lasers to the second power and controls the emergent light beams to irradiate the area of the target object corresponding to the sub-image; and the first portion and the second portion of the vertical cavity surface emitting lasers have an overlapping region.
8. The electronic device of claim 1, wherein the control end adjusts a distance between the light emitter and the first lens group, a distance between lenses in the first lens group, and a distance between the first lens group and the diffractive optical element, so that the emergent light beam is focused on the area of the target object corresponding to the sub-image.
9. The electronic device of claim 1, wherein the control end adjusts a distance between the receiver and the second lens group and a distance between lenses in the second lens group, so that the reflected light beam is focused on the area of the target object corresponding to the sub-image.
CN202010606925.6A 2020-06-30 2020-06-30 Electronic equipment of three-dimensional formation of image Active CN111505836B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010606925.6A CN111505836B (en) 2020-06-30 2020-06-30 Electronic equipment of three-dimensional formation of image


Publications (2)

Publication Number Publication Date
CN111505836A (en) 2020-08-07
CN111505836B (en) 2020-09-22

Family

ID=71873813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010606925.6A Active CN111505836B (en) 2020-06-30 2020-06-30 Electronic equipment of three-dimensional formation of image

Country Status (1)

Country Link
CN (1) CN111505836B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114460805A (en) * 2020-10-21 2022-05-10 中国科学院国家空间科学中心 Shielding scattering imaging system based on high-pass filtering

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10514256B1 (en) * 2013-05-06 2019-12-24 Amazon Technologies, Inc. Single source multi camera vision system
CN209894976U (en) * 2019-03-15 2020-01-03 深圳奥比中光科技有限公司 Time flight depth camera and electronic equipment
CN110716189A (en) * 2019-09-27 2020-01-21 深圳奥锐达科技有限公司 Transmitter and distance measurement system




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant