CN111458692A - Depth information processing method and system and electronic equipment


Info

Publication number
CN111458692A
Authority
CN
China
Prior art keywords
depth information
field
view
target
tof
Prior art date
Legal status
Granted
Application number
CN202010425515.1A
Other languages
Chinese (zh)
Other versions
CN111458692B (en)
Inventor
孟玉凰 (Meng Yuhuang)
黄河 (Huang He)
楼歆晔 (Lou Xinye)
林涛 (Lin Tao)
Current Assignee
Shanghai Kunyou Technology Co ltd
Original Assignee
Shanghai North Ocean Photonics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai North Ocean Photonics Technology Co Ltd
Publication of CN111458692A
Application granted
Publication of CN111458692B
Status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4804 Auxiliary means for detecting or identifying lidar signals or the like, e.g. laser illuminators
    • G01S7/481 Constructional features, e.g. arrangements of optical elements
    • G01S7/4814 Constructional features, e.g. arrangements of optical elements of transmitters alone

Abstract

A depth information processing method, a depth information processing system, and an electronic device are provided. The depth information processing method includes the steps of: acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form a complete target field of view; and stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.

Description

Depth information processing method and system and electronic equipment
Technical Field
The present invention relates to the field of TOF technologies, and in particular, to a depth information processing method and system, and an electronic device.
Background
Among current mainstream three-dimensional sensing technologies, TOF (time of flight) has attracted wide attention and found application in industries such as smartphones, owing to advantages including small size, low error, direct output of depth data, and strong interference resistance. In terms of technical implementation, TOF comes in two types. One is direct-ranging TOF (dTOF), which determines distance by emitting light, receiving it, and measuring the photon time of flight. The other is indirect-ranging TOF (iTOF), already well established in the market, which determines distance by measuring the phase difference between the transmitted and received waveforms and converting it into a time of flight. In direct ranging, the light is modulated at high frequency before being transmitted; the pulse repetition frequency is very high and the pulse width can reach the ns-to-ps order, so very high single-pulse energy is obtained within a very short time. This increases the signal-to-noise ratio while keeping power consumption low, enables a relatively long detection distance, reduces the influence of ambient light on ranging accuracy, and relaxes the sensitivity and signal-to-noise requirements on the detector. In addition, the high-frequency, narrow-pulse-width characteristics of direct-ranging TOF keep its average energy small, so eye safety can be guaranteed.
However, the detection distance of existing direct-ranging TOF is proportional to its power consumption: the longer the detection distance, the higher the required power consumption. To achieve longer-distance detection for application scenarios such as VR/AR, existing direct-ranging TOF therefore has to be equipped with a higher-power light source, and it often performs both short- and long-distance detection at this higher power consumption, which wastes resources and hinders the application and popularization of TOF technology.
Disclosure of Invention
An advantage of the present invention is to provide a depth information processing method, a system thereof, and an electronic device, which can process a plurality of pieces of partition depth information acquired through a partitioned TOF into the complete depth information of a target field of view, thereby enabling long-distance detection by the partitioned TOF at lower power consumption.
Another advantage of the present invention is to provide a depth information processing method, a system thereof, and an electronic device, wherein, in an embodiment of the present invention, the depth information processing method can stitch the depth information of different field-of-view partitions, acquired at different times, into the depth information of the entire target field of view, so that the partitioned TOF only needs to detect a smaller field-of-view partition at any one time, which helps reduce the power consumption required for long-distance detection.
Another advantage of the present invention is to provide a depth information processing method, a system thereof, and an electronic device, wherein, in an embodiment of the present invention, the depth information processing method can further fuse the depth information acquired via the partitioned TOF with a two-dimensional color image to obtain a three-dimensional color image, facilitating wide application in AR/VR.
Another advantage of the present invention is to provide a depth information processing method, a system thereof, and an electronic device, wherein, in an embodiment of the present invention, the depth information processing method can improve data processing efficiency and shorten processing time, so as to meet the real-time requirements of electronic devices such as AR/VR products.
Another advantage of the present invention is to provide a depth information processing method, a system thereof, and an electronic device, wherein the depth information processing method requires neither a complex structure nor a huge amount of computation, and places low demands on software and hardware. The present invention thus provides an effective solution that not only supplies a depth information processing method, a system thereof, and an electronic device, but also increases their practicability and reliability.
To achieve at least one of the above advantages or other advantages and objects, the present invention provides a depth information processing method including the steps of:
acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form a complete target field of view; and
stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
In an embodiment of the invention, the partitioned TOF detects different field-of-view partitions in a certain time sequence, so that different partition depth information is obtained at different times.
In an embodiment of the present invention, the light source unit of the partitioned TOF is divided into a plurality of light source partitions in a specific arrangement, wherein the light source partitions correspond one-to-one to the field-of-view partitions, and the light source partitions are lit partition by partition in a certain time sequence to illuminate the corresponding field-of-view partitions.
In an embodiment of the present invention, the depth information processing method further includes the steps of:
acquiring color information of the target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module; and
fusing the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
In an embodiment of the invention, the conventional camera module is an RGB camera.
In an embodiment of the present invention, in the step of acquiring color information of the target field of view:
the color information is obtained by photographing the target field of view with the conventional camera module during the gaps between the partitioned TOF's detections of the field-of-view partitions.
In an embodiment of the present invention, in the step of fusing the complete depth information with the color information to obtain a three-dimensional color image of the target field of view:
the complete depth information is fused to the color information based on the extrinsic parameters between the conventional camera module and the partitioned TOF to obtain the three-dimensional color image.
In an embodiment of the present invention, before the step of stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain the complete depth information corresponding to the target field of view, the method further includes the step of:
performing distortion correction on the corresponding pieces of partition depth information to obtain corrected partition depth information.
According to another aspect of the present invention, there is further provided a depth information processing method, including the steps of:
acquiring color information of a target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module;
acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form the target field of view; and
fusing the pieces of partition depth information with the color information in sequence to obtain a three-dimensional color image of the target field of view.
In an embodiment of the present invention, in the step of fusing the pieces of partition depth information with the color information to obtain a three-dimensional color image of the target field of view:
the corresponding pieces of partition depth information are fused to the color information in sequence, following the detection order of the partitioned TOF over the field-of-view partitions, to obtain the three-dimensional color image.
In an embodiment of the present invention, in the same step:
immediately after each piece of partition depth information is obtained by the partitioned TOF, it is fused with the color information, so that the three-dimensional color image is obtained once the last-obtained piece of partition depth information has been fused to the color information.
According to another aspect of the present invention, there is further provided a depth information processing system, comprising:
a depth information acquisition module for acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form a complete target field of view; and
a stitching processing module for stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
In an embodiment of the present invention, the depth information processing system further includes:
a color information acquisition module for acquiring color information of the target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module; and
a simultaneous fusion processing module for fusing the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
In an embodiment of the invention, the depth information processing system further includes a distortion correction module, configured to perform distortion correction on the corresponding pieces of partition depth information to obtain corrected partition depth information.
According to another aspect of the present invention, there is further provided a depth information processing system, comprising:
a color information acquisition module for acquiring color information of a target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module;
a depth information acquisition module for acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form the target field of view; and
a time-shared fusion processing module for fusing the pieces of partition depth information with the color information in sequence to obtain a three-dimensional color image of the target field of view.
According to another aspect of the present invention, the present invention further provides an electronic device comprising:
at least one processor configured to execute instructions; and
a memory communicably connected to the at least one processor, wherein the memory stores at least one instruction executable by the at least one processor to cause the at least one processor to perform some or all of the steps of a depth information processing method, wherein the depth information processing method includes the steps of:
acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form a complete target field of view; and
stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
According to another aspect of the present invention, the present invention further provides an electronic device comprising:
an electronic device body;
at least one partitioned TOF, wherein the partitioned TOF is arranged on the electronic device body and is used to detect a target field of view partition by partition; and
at least one depth information processing system, wherein the depth information processing system is arranged on the electronic device body or on the partitioned TOF, and the depth information processing system comprises:
a depth information acquisition module for acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by the partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form the complete target field of view; and
a stitching processing module for stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
In an embodiment of the invention, the electronic device further includes a conventional camera module, wherein the conventional camera module is arranged on the electronic device body and is used to photograph the target field of view. The depth information processing system further comprises a color information acquisition module and a simultaneous fusion processing module communicably connected to each other, wherein the color information acquisition module is used to acquire color information of the target field of view, the color information being obtained by photographing the target field of view with the conventional camera module, and the simultaneous fusion processing module is used to fuse the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
According to another aspect of the present invention, the present invention further provides an electronic device comprising:
an electronic device body;
the electronic equipment comprises an electronic equipment body, at least one conventional camera module, a camera module and a camera module, wherein the conventional camera module is arranged on the electronic equipment body and is used for shooting a target field of view through the conventional camera module;
at least one section TOF, wherein the section TOF is configured on the electronic device body and is used for detecting the target field of view through the section TOF; and
at least one depth information processing system, wherein the depth information processing system is configured on the electronic device body or the section TOF, and the depth information processing system comprises:
the color information acquisition module is used for acquiring color information of a target view field, wherein the color information is obtained by shooting the target view field through a conventional camera module;
a depth information acquisition module, configured to acquire a plurality of pieces of partition depth information, where the plurality of pieces of partition depth information are obtained by detecting a plurality of field partitions by a partition TOF respectively according to a certain time sequence, and the plurality of field partitions together form the target field; and
and the time-sharing fusion processing module is used for sequentially fusing the plurality of subarea depth information with the color information to obtain a three-dimensional color image of the target view field.
Further objects and advantages of the invention will be fully apparent from the ensuing description and drawings.
These and other objects, features and advantages of the present invention will become more fully apparent from the following detailed description, the accompanying drawings and the claims.
Drawings
Fig. 1 is a flowchart illustrating a depth information processing method according to a first embodiment of the invention.
Fig. 2 shows an example of the sectional illumination of the sectional TOF according to the first embodiment of the invention described above.
Fig. 3 shows an example of light source partitioning of the partitioned TOF according to the above-described first embodiment of the present invention.
Fig. 4 is a flowchart illustrating a depth information processing method according to a second embodiment of the invention.
FIG. 5 shows a block diagram schematic of a depth information processing system according to an embodiment of the invention.
Fig. 6 shows a block diagram schematic of a depth information processing system according to another embodiment of the invention.
FIG. 7 shows a block diagram schematic of an electronic device according to an embodiment of the invention.
Fig. 8 shows a schematic structural diagram of another electronic device according to an embodiment of the invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
In the present invention, the terms "a" and "an" in the claims and the description should be understood as meaning "one or more"; that is, an element may be singular in one embodiment and plural in another. The terms "a" and "an" are not to be construed as limiting the quantity unless the disclosure explicitly recites the number of such elements as one.
In the description of the present invention, it is to be understood that terms such as "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless explicitly stated or limited otherwise, the term "connected" is to be interpreted broadly: the connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediary. Those skilled in the art can understand the specific meanings of these terms in the present invention according to the specific situation.
In the description herein, references to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Such terms in this specification do not necessarily all refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine features of different embodiments or examples described in this specification provided they are not contradictory.
Illustrative method
Referring to fig. 1 to 3 of the drawings, a depth information processing method according to a first embodiment of the present invention is illustrated. Specifically, as shown in fig. 1, the depth information processing method includes the steps of:
S110: acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form a complete target field of view; and
S120: stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
Notably, since the entire target field of view is formed by a plurality of the field-of-view partitions, the area of each field-of-view partition is smaller than that of the entire target field of view. Meanwhile, the partitioned TOF detects different field-of-view partitions in a certain time sequence, i.e., it detects different field-of-view partitions at different times. Compared with the conventional TOF detection mode, which detects the whole target field of view at once, the partitioned TOF can therefore cover the whole target field of view at much lower power consumption. This gives the partitioned TOF obvious application potential in consumer electronics: it can be used in a smartphone to acquire real three-dimensional information over a long external range, enabling multiple AR-level applications and creating new selling points, and it can be used in VR/AR to meet the ever-growing demand for motion capture and recognition. Beyond consumer electronics, the partitioned TOF can also support functions such as gesture sensing and proximity detection for innovative user interfaces, with broad application prospects in fields such as computers, home appliances, industrial automation, service robots, unmanned aerial vehicles, and the Internet of Things.
In other words, when the target field of view is divided into N field-of-view partitions of equal area, the partitioned TOF adopted by the invention consumes only 1/N of the conventional power consumption to complete detection at the same distance, compared with a conventional TOF that detects the whole target field of view at once; and at the same power consumption, the detection distance of the partitioned TOF can reach N times the conventional distance. That is, the partitioned TOF can achieve longer-distance detection at lower power consumption, which helps extend the detection range of TOF.
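Expressed as formulas (a restatement of the scaling relations claimed above for N equal-area partitions, following the patent's own accounting rather than an independent radiometric derivation):

```latex
\[
P_{\text{partitioned}} = \frac{P_{\text{conventional}}}{N}
\quad \text{(same detection distance)},
\qquad
d_{\text{partitioned}} = N \, d_{\text{conventional}}
\quad \text{(same power consumption)}.
\]
```

For example, with a 2 × 2 division (N = 4), detecting the target field of view at the same distance would take roughly a quarter of the conventional instantaneous power.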
More specifically, the light source unit of the partitioned TOF of the present invention is divided into a plurality of light source partitions in a specific arrangement, wherein the light source partitions correspond one-to-one to the field-of-view partitions, so that the light source partitions are lit partition by partition in a certain time sequence and each illuminates its corresponding field-of-view partition. Meanwhile, the receiving unit of the partitioned TOF receives the optical signals reflected back from each field-of-view partition, and determines the depth information of each field-of-view partition (i.e., the partition depth information) by measuring the time of flight of the optical signals. Finally, the depth information of the whole target field of view (i.e., the complete depth information) is obtained by stitching the pieces of partition depth information, so that the partitioned TOF can detect the whole target field of view over a long distance at low power consumption.
Preferably, adjacent light source partitions do not overlap each other, and adjacent field-of-view partitions do not overlap each other, so as to avoid repeatedly detected regions, which further reduces power consumption. The depth information processing method of the invention can therefore stitch the pieces of partition depth information into one complete piece of information simply according to the specific arrangement of the field-of-view partitions, without cropping any depth information. In particular, adjacent field-of-view partitions adjoin edge to edge: this avoids overlap between adjacent field-of-view partitions while also avoiding undetected regions caused by gaps between them, so the whole target field of view is detected comprehensively, without blind spots.
More preferably, the light source unit of the partitioned TOF is uniformly divided into n × m light source partitions so that all light source partitions occupy the same area; correspondingly, all field-of-view partitions also occupy the same area, so that the depth information processing method can stitch the pieces of partition depth information acquired at different times into the complete depth information of the target field of view. In other words, the light source partitions are arranged in n rows and m columns and are rectangular. For example, in one example of the present invention, the light source unit may be uniformly divided into 2 × 2 rectangular light source partitions; of course, in other examples of the invention, the light source unit may also be uniformly divided into rectangular light source partitions such as 4 × 4, 2 × 6, or 1 × 12.
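For such a uniform n × m grid of edge-to-edge, non-overlapping partitions, the stitching step reduces to placing each partition's depth map at its grid position. A minimal sketch in Python/NumPy, assuming each piece of partition depth information arrives as an equally sized 2-D array tagged with its (row, column) index; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def stitch_partitions(partitions, n_rows, n_cols):
    """Assemble per-partition depth maps into one complete depth map.

    partitions: dict mapping (row, col) grid indices to equally shaped
    2-D depth arrays, matching the uniform n x m, edge-to-edge,
    non-overlapping division described above.
    """
    tile_h, tile_w = next(iter(partitions.values())).shape
    full = np.zeros((n_rows * tile_h, n_cols * tile_w), dtype=np.float32)
    for (r, c), tile in partitions.items():
        # Each tile lands at its grid position; no cropping or blending
        # is needed because adjacent partitions do not overlap.
        full[r * tile_h:(r + 1) * tile_h,
             c * tile_w:(c + 1) * tile_w] = tile
    return full

# Example: a 2 x 2 division of four 240 x 320 partition depth maps
tiles = {(r, c): np.ones((240, 320), dtype=np.float32) * (r * 2 + c)
         for r in range(2) for c in range(2)}
full_depth = stitch_partitions(tiles, 2, 2)   # shape (480, 640)
```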
Exemplarily, as shown in fig. 2 and 3, taking 2 × 2 light source partitions as an example, the light source unit 10 of the partitioned TOF includes four light source partitions 11, each of which may include a plurality of point light sources 111 distributed in an array. When the four light source partitions 11 are lit in sequence to emit optical signals, the optical signals emitted at different times pass through different portions of the light-homogenizing unit 20 of the partitioned TOF to be homogenized, then propagate to the corresponding field-of-view partitions 31 in the target field of view 30 and are reflected back to the receiving unit (not shown) of the partitioned TOF. Finally, the optical signals reflected from the different field-of-view partitions 31 are received by the receiving unit at different times, and the distance of the corresponding field-of-view partition 31 is determined by measuring the time of flight of the optical signals, i.e., the corresponding partition depth information is obtained.
In particular, the light source unit 10 of the partitioned TOF may be, but is not limited to being, implemented as a vertical-cavity surface-emitting laser (VCSEL), and the light-homogenizing unit 20 of the partitioned TOF may be, but is not limited to being, implemented as a light-homogenizing device such as a random regular microlens array or a diffractive optical element.
It is noted that in another example of the present invention, the light source unit of the partitioned TOF may also be non-uniformly divided into n × m light source partitions, so that different light source partitions do not necessarily occupy the same area; accordingly, different field-of-view partitions do not necessarily occupy the same area either, which helps divide the field-of-view partitions according to a specific detection scene.
In addition, in practical applications such as AR or VR, besides the depth information of the target field of view, color information of the target field of view is also required to meet the needs of subsequent three-dimensional scene modeling or information recognition. Therefore, as shown in fig. 1, the depth information processing method of the present invention may further include the steps of:
S130: acquiring color information of the target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module; and
S140: fusing the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
Preferably, the conventional camera module may be implemented as an RGB camera, so that the target field of view is photographed in full by the RGB camera to obtain an RGB image of the target field of view (i.e., the color information of the target field of view). In this way, the depth information processing method of the present invention can fuse the complete depth information to the color information based on the extrinsic parameters between the conventional camera module and the partitioned TOF (i.e., their relative pose, etc.) to obtain a three-dimensional color image of the target field of view.
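As a concrete illustration of this extrinsics-based fusion, the sketch below back-projects each depth pixel into 3-D, transforms it into the RGB camera frame, and samples the color image. It is a simplified pinhole-model sketch under assumed calibration inputs (K_tof, K_rgb, R, t are hypothetical placeholders for calibrated intrinsics and extrinsics, not values from the patent); a production module would also handle lens distortion and occlusion:

```python
import numpy as np

def fuse_depth_to_color(depth, color, K_tof, K_rgb, R, t):
    """Fuse a complete depth map with an RGB image via extrinsics.

    depth: (H, W) depth map from the partitioned TOF.
    color: (H', W', 3) RGB image from the conventional camera module.
    K_tof, K_rgb: 3x3 pinhole intrinsic matrices (assumed calibrated).
    R, t: rotation (3x3) and translation (3,) from TOF to RGB frame.
    Returns an (M, 6) array of (x, y, z, r, g, b) points.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # Back-project valid depth pixels into 3-D in the TOF camera frame.
    x = (us[valid] - K_tof[0, 2]) * z / K_tof[0, 0]
    y = (vs[valid] - K_tof[1, 2]) * z / K_tof[1, 1]
    pts_tof = np.stack([x, y, z], axis=1)
    # Transform into the RGB camera frame and project onto the image.
    pts_rgb = pts_tof @ R.T + t
    u = pts_rgb[:, 0] / pts_rgb[:, 2] * K_rgb[0, 0] + K_rgb[0, 2]
    v = pts_rgb[:, 1] / pts_rgb[:, 2] * K_rgb[1, 1] + K_rgb[1, 2]
    ui = np.clip(np.round(u).astype(int), 0, color.shape[1] - 1)
    vi = np.clip(np.round(v).astype(int), 0, color.shape[0] - 1)
    return np.hstack([pts_rgb, color[vi, ui]])
```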
It should be noted that if the conventional camera module photographs the target field of view to obtain the corresponding color information while a light source partition of the partitioned TOF's light source unit is emitting optical signals, the conventional camera module may suffer interference from the optical signals emitted by the partitioned TOF, causing color deviation in the obtained color information and thereby affecting the requirements and effects of subsequent applications such as three-dimensional scene reconstruction or AR/VR.
Therefore, in this embodiment of the present invention, the color information is preferably obtained by photographing the target field of view with the conventional camera module during the gaps between the partitioned TOF's detections of the field-of-view partitions, so as to keep the conventional camera module free from interference by the partitioned TOF and thus ensure the accuracy of the color information. Of course, in other examples of the present invention, the color information may also be obtained by photographing the target field of view with the conventional camera module before or after the partitioned TOF detects the target field of view, which likewise avoids such interference.
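The interleaving described here can be pictured as a simple timeline in which the RGB exposure is scheduled into a gap between two partition illuminations. A toy scheduling sketch (the slot names and all durations are illustrative assumptions, not values from the patent):

```python
def capture_schedule(n_partitions, t_tof=2.0, t_rgb=8.0):
    """Interleave one RGB exposure into the gaps between the
    partitioned TOF's partition detections, so the RGB camera is never
    exposed while a light source partition is emitting. Times in ms."""
    timeline, t = [], 0.0
    for i in range(n_partitions):
        timeline.append(("tof_partition", i, t, t + t_tof))
        t += t_tof
        if i == 0:
            # Slot the RGB exposure into the first inter-partition gap.
            timeline.append(("rgb_exposure", None, t, t + t_rgb))
            t += t_rgb
    return timeline

for event in capture_schedule(4):
    print(event)
```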
Illustratively, the depth information processing method of the present invention further includes, before the step S120, the step of:
performing distortion correction on the corresponding pieces of partition depth information to obtain corrected partition depth information, and then stitching the corrected partition depth information to obtain higher-quality complete depth information.
It can be understood that the corrected partition depth information obtained through distortion correction corresponds exactly to its field-of-view partition, so that accurate stitching can be performed subsequently. The distortion correction may be carried out by obtaining the distortion of each partition through system calibration and then applying the reverse correction. In other words, in the above example of the present invention, the depth information processing method corrects the distortion purely in software, which helps significantly improve the uniformity and window efficiency of the target region.
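A minimal sketch of such a software-only reverse correction, assuming a per-partition radial distortion model whose coefficients (k1, k2) and pinhole parameters come from the system calibration mentioned above; the two-term radial model and all names are illustrative assumptions:

```python
import numpy as np

def undistort_partition(depth, fx, fy, cx, cy, k1, k2):
    """Reverse-correct one partition's depth map.

    For each pixel of the corrected output, apply the calibrated
    forward distortion model to find its source location in the
    distorted input, then sample it (nearest neighbour keeps the
    sketch dependency-free).
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    xn = (us - cx) / fx                       # normalized coordinates
    yn = (vs - cy) / fy
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2      # radial distortion factor
    u_src = np.clip(np.round(xn * scale * fx + cx).astype(int), 0, w - 1)
    v_src = np.clip(np.round(yn * scale * fy + cy).astype(int), 0, h - 1)
    return depth[v_src, u_src]
```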
It is worth mentioning that although the depth information processing method according to the first embodiment of the present invention can fuse the partition depth information obtained via the partitioned TOF with the color information obtained via the conventional camera module to obtain a three-dimensional color image of the target field of view, it first waits for the partitioned TOF to obtain all the partition depth information, stitches it into the complete depth information, and only then fuses the depth information with the color information. This takes a long time and is unfavorable for meeting the current real-time requirements of products such as AR/VR. Therefore, in order to shorten the time taken to process depth information, as shown in fig. 4, the depth information processing method according to the second embodiment of the present invention may include the steps of:
S210: acquiring color information of a target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module;
S220: acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form the target field of view; and
S230: fusing the pieces of partition depth information with the color information in sequence to obtain a three-dimensional color image of the target field of view.
It should be noted that, since the pieces of partition depth information are obtained by the partitioned TOF detecting the field-of-view partitions in a certain time sequence, different pieces of partition depth information become available at different times. Therefore, once at least one piece of partition depth information has been obtained, and before the last piece is obtained, the already-obtained partition depth information can be fused with the color information, reducing the overall time consumed by the depth information processing method and improving its processing efficiency.
In other words, in the step S230, the corresponding pieces of partition depth information are fused to the color information in sequence, following the detection order of the partitioned TOF over the field-of-view partitions, to obtain the three-dimensional color image. That is, the different pieces of partition depth information are fused to the color information in a time-shared manner, which helps reduce the overall time required by the depth information processing method.
Preferably, in the step S230, immediately after each piece of partition depth information is obtained by the partitioned TOF, it is fused with the color information, so that the three-dimensional color image is obtained once the last-obtained piece of partition depth information has been fused to the color information.
Exemplarily, after a certain field-of-view partition is detected by the partitioned TOF to obtain the corresponding partition depth information, the depth information processing method directly fuses that partition depth information with the color information to obtain a partial three-dimensional color image (i.e., color information carrying partial depth information); at the same time, the next field-of-view partition is detected by the partitioned TOF to obtain the next piece of partition depth information. The next piece of partition depth information is then fused with the partial three-dimensional color image to obtain another partial three-dimensional color image, and so on, until the last field-of-view partition is detected by the partitioned TOF to obtain the last piece of partition depth information, which is fused with the partial three-dimensional color image to yield the complete three-dimensional color image. It can be understood that precisely because this embodiment of the present invention acquires and fuses partition depth information concurrently, the time required by the depth information processing method is greatly shortened and the data processing efficiency is significantly improved.
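Put as Python, the second embodiment's pipeline fuses each tile as soon as it arrives instead of waiting for a stitched depth map. The sketch assumes the depth tiles are already registered to the color image grid (e.g., via the extrinsic projection sketched earlier) and that partition_stream yields ((row, col), tile) pairs in detection order; these names are illustrative, not from the patent:

```python
import numpy as np

def fuse_incrementally(color, partition_stream, tile_shape):
    """Build the three-dimensional color image tile by tile.

    color: (H, W, 3) RGB image captured once, as in step S210.
    partition_stream: iterable yielding ((row, col), depth_tile) in the
    partitioned TOF's detection order (steps S220/S230 interleaved).
    """
    th, tw = tile_shape
    rgbd = np.dstack([color.astype(np.float32),
                      np.zeros(color.shape[:2], dtype=np.float32)])
    for (r, c), tile in partition_stream:
        # Fuse this tile's depth while the TOF is already detecting the
        # next field-of-view partition, hiding most of the fusion time.
        rgbd[r * th:(r + 1) * th, c * tw:(c + 1) * tw, 3] = tile
    return rgbd  # complete once the last tile has been fused
```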
Illustrative System
Referring to fig. 5 of the accompanying drawings, a depth information processing system according to an embodiment of the present invention is illustrated, for processing the partition depth information obtained by partition-by-partition detection of the entire target field of view via the partitioned TOF. Specifically, as shown in fig. 5, the depth information processing system 400 includes a depth information acquisition module 410 and a stitching processing module 420 communicably connected to each other. The depth information acquisition module 410 is configured to acquire a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by the partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form a complete target field of view. The stitching processing module 420 is configured to stitch the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
It should be noted that, in the above embodiment of the present invention, as shown in fig. 5, the depth information processing system 400 may further include a color information acquisition module 430 and a simultaneous fusion processing module 440 communicably connected to each other, wherein the color information acquisition module 430 is used to acquire color information of the target field of view, the color information being obtained by photographing the target field of view with a conventional camera module, and the simultaneous fusion processing module 440 is used to fuse the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
In an example of the present invention, as shown in fig. 5, the depth information processing system 400 may further include a distortion correction module 450, configured to perform distortion correction on the corresponding pieces of partition depth information to obtain corrected partition depth information.
According to another aspect of the present invention, as shown in fig. 6, another embodiment of the present invention further provides a depth information processing system 500, wherein the depth information processing system 500 includes a color information acquisition module 510, a depth information acquisition module 520, and a time-shared fusion processing module 530 communicably connected to each other. The color information acquisition module 510 is configured to acquire color information of a target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module; the depth information acquisition module 520 is configured to acquire a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by the partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form the target field of view; and the time-shared fusion processing module 530 is configured to fuse the pieces of partition depth information with the color information in sequence to obtain a three-dimensional color image of the target field of view.
Illustrative electronic device
Next, an electronic apparatus according to an embodiment of the present invention is described with reference to fig. 7. As shown in fig. 7, the electronic device 90 includes one or more processors 91 and memory 92.
The processor 91 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 90 to perform desired functions. In other words, the processor 91 comprises one or more physical devices configured to execute instructions. For example, the processor 91 may be configured to execute instructions that are part of: one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, implement a technical effect, or otherwise arrive at a desired result.
The processor 91 may include one or more processors configured to execute software instructions. Additionally or alternatively, the processor 91 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the processor 91 may be single core or multicore, and the instructions executed thereon may be configured for serial, parallel, and/or distributed processing. The various components of the processor 91 may optionally be distributed over two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the processor 91 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
The memory 92 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 91 to implement some or all of the steps of the above-described exemplary methods of the present invention, and/or other desired functions.
In other words, the memory 92 comprises one or more physical devices configured to hold machine-readable instructions executable by the processor 91 to implement the methods and processes described herein. In implementing these methods and processes, the state of the memory 92 may be transformed (e.g., to hold different data). The memory 92 may include removable and/or built-in devices. The memory 92 may include optical memory (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The memory 92 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
However, aspects of the instructions described herein may alternatively be propagated by a communication medium (e.g., electromagnetic signals, optical signals, etc.) that is not held by a physical device for a limited period of time.
In one example, as shown in FIG. 7, the electronic device 90 may also include an input device 93 and an output device 94, which may be interconnected via a bus system and/or other form of connection mechanism (not shown). The input device 93 may be, for example, a camera module or the like for capturing image data or video data. As another example, the input device 93 may include or interface with one or more user input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input device 93 may include or interface with a selected Natural User Input (NUI) component. Such component parts may be integrated or peripheral and the transduction and/or processing of input actions may be processed on-board or off-board. Example NUI components may include a microphone for speech and/or voice recognition; infrared, color, stereo display and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer and/or gyroscope for motion detection and/or intent recognition; and an electric field sensing component for assessing brain activity and/or body movement; and/or any other suitable sensor.
The output device 94 may output various information including the classification result and the like to the outside. The output devices 94 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, the electronic device 90 may further comprise the communication means, wherein the communication means may be configured to communicatively couple the electronic device 90 with one or more other computer devices. The communication means may comprise wired and/or wireless communication devices compatible with one or more different communication protocols. As a non-limiting example, the communication subsystem may be configured for communication via a wireless telephone network or a wired or wireless local or wide area network. In some embodiments, the communications device may allow the electronic device 90 to send and/or receive messages to and/or from other devices via a network such as the internet.
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Also, the order of the above-described processes may be changed.
Of course, for simplicity, only some of the components of the electronic device 90 relevant to the present invention are shown in fig. 7, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 90 may include any other suitable components, depending on the particular application.
According to another aspect of the present invention, an embodiment of the present invention further provides another electronic device. Illustratively, as shown in fig. 8, the electronic device includes an electronic device body 800, at least one partitioned TOF 700, and at least one depth information processing system 400 as described above, wherein the partitioned TOF 700 is arranged on the electronic device body 800 and is used to detect a target field of view partition by partition, and wherein the depth information processing system 400 is arranged on the electronic device body 800 and includes, communicably connected to each other: a depth information acquisition module for acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by the partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form the complete target field of view; and a stitching processing module for stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain the complete depth information corresponding to the target field of view. It is understood that in other examples of the present invention, the depth information processing system 400 can also be arranged directly on the partitioned TOF 700, with the electronic device body 800 implemented as a companion device of the partitioned TOF 700; that is, the electronic device can be implemented directly as a TOF product with a partitioned detection function.
In this example of the present invention, as shown in fig. 8, the electronic device further includes a conventional camera module 600, wherein the conventional camera module 600 is arranged on the electronic device body 800 and is used to photograph the target field of view. The depth information processing system 400 further comprises a color information acquisition module and a simultaneous fusion processing module communicably connected to each other, wherein the color information acquisition module is used to acquire color information of the target field of view, the color information being obtained by photographing the target field of view with the conventional camera module, and the simultaneous fusion processing module is used to fuse the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
It should be noted that in another example of the present invention, as shown in fig. 8, the electronic device may instead include the depth information processing system 500, wherein the depth information processing system 500 is arranged on the electronic device body 800, and the depth information processing system 500 includes: a color information acquisition module for acquiring color information of a target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module; a depth information acquisition module for acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by the partitioned TOF detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions together form the target field of view; and a time-shared fusion processing module for fusing the pieces of partition depth information with the color information in sequence to obtain a three-dimensional color image of the target field of view.
Notably, the electronic device body 800 can be any device or system capable of carrying the partitioned TOF 700 and the depth information processing system 400, such as glasses, a head-mounted display device, an augmented reality device, a virtual reality device, a smartphone, or a mixed reality device. It will be understood by those skilled in the art that although the electronic device body 800 is implemented as AR glasses in fig. 8, this does not limit the content and scope of the present invention.
It should also be noted that in the apparatus, devices and methods of the present invention, the components or steps may be broken down and/or re-combined. These decompositions and/or recombinations are to be regarded as equivalents of the present invention.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and do not limit the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and the embodiments may be varied or modified in any way that does not depart from those principles.

Claims (19)

1. A depth information processing method, characterized by comprising the steps of:
acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions respectively in a predetermined time sequence, and the field-of-view partitions together form a complete target field of view; and
stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions, so as to obtain complete depth information corresponding to the target field of view.
2. The depth information processing method according to claim 1, wherein the partitioned TOF detects different field-of-view partitions at different times in the time sequence, thereby obtaining different pieces of partition depth information at different times.
3. The depth information processing method according to claim 2, wherein a light source unit of the partitioned TOF is divided into a plurality of light source zones in a specific arrangement, wherein the light source zones correspond one-to-one to the field-of-view partitions, and the light source zones are lit zone by zone in the time sequence so as to illuminate the corresponding field-of-view partitions.
4. The depth information processing method according to any one of claims 1 to 3, further comprising the steps of:
acquiring color information of the target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module; and
fusing the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
5. The depth information processing method according to claim 4, wherein the conventional camera module is an RGB camera.
6. The depth information processing method according to claim 4, wherein, in the step of acquiring the color information of the target field of view:
the color information is obtained by photographing the target field of view with the conventional camera module during the gaps in which the partitioned TOF detects the field-of-view partitions.
7. The depth information processing method according to claim 4, wherein, in the step of fusing the complete depth information with the color information to obtain the three-dimensional color image of the target field of view:
the complete depth information is fused to the color information based on extrinsic parameters between the conventional camera module and the partitioned TOF, so as to obtain the three-dimensional color image.
8. The depth information processing method according to any one of claims 1 to 3, further comprising, before the step of stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions to obtain the complete depth information corresponding to the target field of view, the step of:
performing distortion correction on each piece of partition depth information respectively, so as to obtain corrected partition depth information.
9. A depth information processing method, characterized by comprising the steps of:
acquiring color information of a target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module;
acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions respectively in a predetermined time sequence, and the field-of-view partitions together form the target field of view; and
fusing the pieces of partition depth information respectively with the color information to obtain a three-dimensional color image of the target field of view.
10. The depth information processing method according to claim 9, wherein, in the step of fusing the pieces of partition depth information respectively with the color information to obtain the three-dimensional color image of the target field of view:
the corresponding pieces of partition depth information are sequentially fused to the color information according to the time sequence in which the partitioned TOF detects the field-of-view partitions, so as to obtain the three-dimensional color image.
11. The depth information processing method according to claim 10, wherein, in the step of fusing the pieces of partition depth information respectively with the color information to obtain the three-dimensional color image of the target field of view:
immediately after each piece of partition depth information is obtained by the partitioned TOF, the corresponding piece of partition depth information is fused with the color information, such that the three-dimensional color image is obtained once the last obtained piece of partition depth information has been fused to the color information.
12. A depth information processing system, characterized by comprising, communicably connected to each other:
a depth information acquisition module, configured to acquire a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions respectively in a predetermined time sequence, and the field-of-view partitions together form a complete target field of view; and
a stitching module, configured to stitch the pieces of partition depth information according to the partition arrangement of the field-of-view partitions, so as to obtain complete depth information corresponding to the target field of view.
13. The depth information processing system according to claim 12, further comprising, communicably connected to each other:
a color information acquisition module, configured to acquire color information of the target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module; and
a simultaneous fusion processing module, configured to fuse the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
14. The depth information processing system according to claim 12 or 13, further comprising a distortion correction module, configured to perform distortion correction on each piece of partition depth information respectively, so as to obtain corrected partition depth information.
15. A depth information processing system, characterized by comprising, communicably connected to each other:
a color information acquisition module, configured to acquire color information of a target field of view, wherein the color information is obtained by photographing the target field of view with a conventional camera module;
a depth information acquisition module, configured to acquire a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions respectively in a predetermined time sequence, and the field-of-view partitions together form the target field of view; and
a time-division fusion processing module, configured to sequentially fuse the pieces of partition depth information with the color information to obtain a three-dimensional color image of the target field of view.
16. An electronic device, characterized by comprising:
at least one processor, configured to execute instructions; and
a memory communicably connected to the at least one processor, wherein the memory stores at least one instruction executable by the at least one processor to cause the at least one processor to perform some or all of the steps of a depth information processing method, the depth information processing method comprising the steps of:
acquiring a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by a partitioned TOF detecting a plurality of field-of-view partitions respectively in a predetermined time sequence, and the field-of-view partitions together form a complete target field of view; and
stitching the pieces of partition depth information according to the partition arrangement of the field-of-view partitions, so as to obtain complete depth information corresponding to the target field of view.
17. An electronic device, characterized by comprising:
an electronic device body;
at least one partitioned TOF, wherein the partitioned TOF is disposed on the electronic device body and is used to perform partitioned detection of a target field of view; and
at least one depth information processing system, wherein the depth information processing system is disposed on the electronic device body or on the partitioned TOF, and the depth information processing system comprises:
a depth information acquisition module, configured to acquire a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by the partitioned TOF detecting a plurality of field-of-view partitions respectively in a predetermined time sequence, and the field-of-view partitions together form the complete target field of view; and
a stitching module, configured to stitch the pieces of partition depth information according to the partition arrangement of the field-of-view partitions, so as to obtain complete depth information corresponding to the target field of view.
18. The electronic device according to claim 17, further comprising a conventional camera module, wherein the conventional camera module is disposed on the electronic device body and is used to photograph the target field of view; wherein the depth information processing system further comprises a color information acquisition module and a simultaneous fusion processing module communicably connected to each other, wherein the color information acquisition module is configured to acquire color information of the target field of view, the color information being obtained by photographing the target field of view with the conventional camera module; and the simultaneous fusion processing module is configured to fuse the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
19. An electronic device, characterized by comprising:
an electronic device body;
at least one conventional camera module, wherein the conventional camera module is disposed on the electronic device body and is used to photograph a target field of view;
at least one partitioned TOF, wherein the partitioned TOF is disposed on the electronic device body and is used to perform partitioned detection of the target field of view; and
at least one depth information processing system, wherein the depth information processing system is disposed on the electronic device body or on the partitioned TOF, and the depth information processing system comprises:
a color information acquisition module, configured to acquire color information of the target field of view, wherein the color information is obtained by photographing the target field of view with the conventional camera module;
a depth information acquisition module, configured to acquire a plurality of pieces of partition depth information, wherein the pieces of partition depth information are obtained by the partitioned TOF detecting a plurality of field-of-view partitions respectively in a predetermined time sequence, and the field-of-view partitions together form the target field of view; and
a time-division fusion processing module, configured to sequentially fuse the pieces of partition depth information with the color information to obtain a three-dimensional color image of the target field of view.
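For readers implementing the distortion correction recited in claims 8 and 14, the following minimal sketch shows one way a single partition depth map could be undistorted before stitching. The two-coefficient radial model, the nearest-neighbour resampling, and all parameter names are assumptions of this sketch; the actual optics of a partitioned TOF may call for a different model.

```python
import numpy as np

def undistort_partition_depth(depth, K, dist):
    """Correct radial lens distortion in one partition depth map.

    depth -- HxW depth map of a single field-of-view partition (meters)
    K     -- 3x3 intrinsic matrix of the partitioned TOF sensor
    dist  -- (k1, k2) radial distortion coefficients (assumed model)
    """
    k1, k2 = dist
    h, w = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    # Normalized coordinates of every pixel of the corrected output map.
    x, y = (u - cx) / fx, (v - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    # Apply the forward distortion model to find the source pixel in the
    # distorted input; nearest-neighbour sampling avoids smearing depth edges.
    src_u = np.clip(np.round(x * scale * fx + cx).astype(int), 0, w - 1)
    src_v = np.clip(np.round(y * scale * fy + cy).astype(int), 0, h - 1)
    return depth[src_v, src_u]
```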
CN202010425515.1A 2020-02-01 2020-05-19 Depth information processing method and system and electronic equipment Active CN111458692B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010077893 2020-02-01
CN2020100778935 2020-02-01

Publications (2)

Publication Number Publication Date
CN111458692A true CN111458692A (en) 2020-07-28
CN111458692B CN111458692B (en) 2023-08-25

Family

ID=71209629

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202010425531.0A Pending CN111458693A (en) 2020-02-01 2020-05-19 Direct ranging TOF (time of flight) partitioned detection method and system and electronic equipment thereof
CN202010425515.1A Active CN111458692B (en) 2020-02-01 2020-05-19 Depth information processing method and system and electronic equipment
CN202010426643.8A Pending CN111366906A (en) 2020-02-01 2020-05-19 Projection apparatus and segmented TOF apparatus, manufacturing method thereof, and electronic apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010425531.0A Pending CN111458693A (en) 2020-02-01 2020-05-19 Direct ranging TOF (time of flight) partitioned detection method and system and electronic equipment thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010426643.8A Pending CN111366906A (en) 2020-02-01 2020-05-19 Projection apparatus and segmented TOF apparatus, manufacturing method thereof, and electronic apparatus

Country Status (1)

Country Link
CN (3) CN111458693A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021238678A1 (en) * 2020-05-27 2021-12-02 杭州驭光光电科技有限公司 Diffractive optical element, partitioned uniform light projection system, electronic device and design method
CN112965073A (en) * 2021-02-05 2021-06-15 上海鲲游科技有限公司 Partition projection device and light source unit and application thereof
CN112946604A (en) * 2021-02-05 2021-06-11 上海鲲游科技有限公司 dTOF-based detection device and electronic device and application thereof
US20220412729A1 (en) * 2021-06-25 2022-12-29 Himax Technologies Limited Dot pattern projector for use in three-dimensional distance measurement system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2932562B1 (en) * 2008-06-12 2010-08-27 Univ Pasteur LIGHT PROJECTION DEVICE STRUCTURED BY MEANS OF VCSEL AND PHASE DIFFRACTIVE OPTICAL COMPONENTS.
US9635231B2 (en) * 2014-12-22 2017-04-25 Google Inc. Time-of-flight camera system and method to improve measurement quality of weak field-of-view signal regions
US9674415B2 (en) * 2014-12-22 2017-06-06 Google Inc. Time-of-flight camera system with scanning illuminator
US9946089B2 (en) * 2015-10-21 2018-04-17 Princeton Optronics, Inc. Generation of coded structured light patterns using VCSEL arrays
CN107424188B (en) * 2017-05-19 2020-06-30 深圳奥比中光科技有限公司 Structured light projection module based on VCSEL array light source
US11126060B2 (en) * 2017-10-02 2021-09-21 Liqxtal Technology Inc. Tunable light projector
CN208110250U (en) * 2018-04-16 2018-11-16 深圳奥比中光科技有限公司 Pattern projector and depth camera
CN109086694B (en) * 2018-07-17 2024-01-19 北京量子光影科技有限公司 Face recognition system and method
CN109343070A (en) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 Time flight depth camera
CN109917352A (en) * 2019-04-19 2019-06-21 上海禾赛光电科技有限公司 The design method of laser radar and its emission system, the emission system of laser radar
CN110275381B (en) * 2019-06-26 2021-09-21 业成科技(成都)有限公司 Structural light emission module and depth sensing equipment using same
CN110609293B (en) * 2019-09-19 2022-05-27 深圳奥锐达科技有限公司 Distance detection system and method based on flight time

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130321790A1 (en) * 2012-06-02 2013-12-05 Richard Kirby Three dimensional surface mapping system using optical flow
CN104299261A (en) * 2014-09-10 2015-01-21 深圳大学 Three-dimensional imaging method and system for human body
WO2016056317A1 (en) * 2014-10-08 2016-04-14 ソニー株式会社 Information processor and information-processing method
CN106574964A (en) * 2014-12-22 2017-04-19 谷歌公司 Integrated camera system having two dimensional image capture and three dimensional time-of-flight capture with a partitioned field of view
CN105005089A (en) * 2015-06-08 2015-10-28 上海交通大学 Airport foreign object debris detection system and method based on computer vision
CN106612387A (en) * 2015-10-15 2017-05-03 杭州海康威视数字技术股份有限公司 Combined depth map acquisition method and depth camera
CN106447677A (en) * 2016-10-12 2017-02-22 广州视源电子科技股份有限公司 Image processing method and apparatus thereof
CN109427046A (en) * 2017-08-30 2019-03-05 深圳中科飞测科技有限公司 Distortion correction method, device and the computer readable storage medium of three-dimensional measurement
CN110300292A (en) * 2018-03-22 2019-10-01 深圳光峰科技股份有限公司 Projection distortion bearing calibration, device, system and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112764234A (en) * 2020-08-11 2021-05-07 上海鲲游光电科技有限公司 Optical field modulator and modulation method thereof
WO2022033025A1 (en) * 2020-08-11 2022-02-17 上海鲲游光电科技有限公司 Light field modulator and modulation method thereof
WO2022052486A1 (en) * 2020-09-14 2022-03-17 上海鲲游光电科技有限公司 Optical processing assembly, tof transmitting device, and tof depth information detector
WO2022111501A1 (en) * 2020-11-27 2022-06-02 宁波飞芯电子科技有限公司 Distance information acquisition system
CN114615397A (en) * 2020-12-09 2022-06-10 华为技术有限公司 TOF device and electronic apparatus

Also Published As

Publication number Publication date
CN111458693A (en) 2020-07-28
CN111366906A (en) 2020-07-03
CN111458692B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN111458692A (en) Depth information processing method and system and electronic equipment
EP3673460B1 (en) Depth map with structured and flood light
US11328446B2 (en) Combining light-field data with active depth data for depth map generation
EP3422699B1 (en) Camera module and control method
JP5965404B2 (en) Customizing user-specific attributes
US9945936B2 (en) Reduction in camera to camera interference in depth measurements using spread spectrum
US20190068853A1 (en) Structured light and flood fill light illuminator
US20140214415A1 (en) Using visual cues to disambiguate speech inputs
KR20160108388A (en) Eye gaze detection with multiple light sources and sensors
US20140218291A1 (en) Aligning virtual camera with real camera
CN112005548A (en) Method of generating depth information and electronic device supporting the same
CN111123912A (en) Calibration method and device for travelling crane positioning coordinates
US20170018114A1 (en) Video imaging to assess specularity
TW202301272A (en) Distributed depth data processing
US20190325600A1 (en) Determining a pose of a handheld object
WO2020057365A1 (en) Method, system, and computer-readable medium for generating spoofed structured light illuminated face
WO2020019682A1 (en) Laser projection module, depth acquisition apparatus and electronic device
EP4120200A1 (en) Method and apparatus for light estimation
WO2019200577A1 (en) Image acquisition device and image acquisition method
US10877597B2 (en) Unintended touch rejection
EP3951426A1 (en) Electronic device and method for compensating for depth error according to modulation frequency
CN112954153A (en) Camera device, electronic equipment, depth of field detection method and depth of field detection device
KR20220015752A (en) An electronic device including a range sensor and a method for performing auto focus
JP2017125764A (en) Object detection apparatus and image display device including the same
US20240031550A1 (en) System and method of image rendering quality prediction and path planning for large-scale scenes, and computer device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230727

Address after: 200120 Shanghai Pudong New Area China (Shanghai) Free Trade Pilot Zone Lingang New Area, No. 2699 Jiangshan Road, Building 4, West Area

Applicant after: Shanghai kunyou Technology Co.,Ltd.

Address before: 201203 Room 201, 518 Bibo Road, Pudong New Area Free Trade Zone, Shanghai

Applicant before: SHANGHAI NORTH OCEAN PHOTONICS Co.,Ltd.

GR01 Patent grant