CN111458692B - Depth information processing method and system and electronic equipment - Google Patents

Depth information processing method and system and electronic equipment

Info

Publication number
CN111458692B
CN111458692B (granted publication of application CN202010425515.1A)
Authority
CN
China
Prior art keywords
depth information
subareas
view
subarea
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010425515.1A
Other languages
Chinese (zh)
Other versions
CN111458692A (en)
Inventor
孟玉凰
黄河
楼歆晔
林涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Kunyou Technology Co ltd
Original Assignee
Shanghai Kunyou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Kunyou Technology Co ltd filed Critical Shanghai Kunyou Technology Co ltd
Publication of CN111458692A
Application granted
Publication of CN111458692B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to group G01S17/00
    • G01S7/4804 Auxiliary means for detecting or identifying lidar signals or the like, e.g. laser illuminators
    • G01S7/481 Constructional features, e.g. arrangements of optical elements
    • G01S7/4814 Constructional features, e.g. arrangements of optical elements, of transmitters alone

Abstract

A depth information processing method, system, and electronic device. The depth information processing method includes: acquiring a plurality of pieces of partition depth information, where the partition depth information is obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a set time sequence, the field-of-view partitions together forming a complete target field of view; and stitching the plurality of pieces of partition depth information according to the arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.

Description

Depth information processing method and system and electronic equipment
Technical Field
The invention relates to the technical field of TOF, in particular to a depth information processing method, a depth information processing system and electronic equipment.
Background
Among the mainstream three-dimensional sensing technologies, TOF (time of flight) has attracted wide attention and been adopted in industries such as smartphones thanks to its small size, low error, direct output of depth data, and strong interference resistance. In terms of technical implementation, there are two types of TOF: direct-ranging TOF (dTOF), which determines distance by emitting light, receiving it, and measuring the photons' time of flight; and the more market-mature indirect-ranging TOF (iTOF), which determines distance by measuring the phase difference between the transmitted and received waveforms and converting it into a time of flight. In direct ranging, the light is emitted after high-frequency modulation: the pulse repetition frequency is very high and the pulse width can reach nanosecond to picosecond order, so high single-pulse energy is obtained in an extremely short time. This increases the signal-to-noise ratio while keeping power consumption low, enables long detection distances, reduces the influence of ambient light on ranging accuracy, and lowers the sensitivity and signal-to-noise requirements on the detector. In addition, the high-frequency, narrow-pulse-width characteristics of direct-ranging TOF keep the average energy small, ensuring eye safety.
However, the detection distance of existing direct-ranging TOF is proportional to its power consumption: the farther the detection distance, the higher the power consumption. To achieve the long-range detection required by application scenarios such as VR/AR, existing direct-ranging TOF therefore has to be configured with a higher-power light source, so that both short-range and long-range detection are performed at high power consumption, wasting resources and hindering the application and popularization of TOF technology.
Disclosure of Invention
An advantage of the present invention is to provide a depth information processing method, system, and electronic device that can process a plurality of pieces of partition depth information acquired by a partitioned TOF to obtain the complete depth information of a target field of view, thereby realizing long-range detection by the partitioned TOF at lower power consumption.
Another advantage of the present invention is to provide a depth information processing method, system, and electronic device, wherein in an embodiment of the present invention the depth information processing method can combine the depth information of different field-of-view partitions acquired at different times into depth information of the entire target field of view, so that the partitioned TOF need only detect a smaller field-of-view partition at any one moment, which helps reduce the power consumption required for long-range detection.
Another advantage of the present invention is to provide a depth information processing method, system, and electronic device, wherein in an embodiment of the present invention the depth information processing method can further fuse the depth information acquired by the partitioned TOF into a two-dimensional color image to obtain a three-dimensional color image, enabling wide application in AR/VR.
Another advantage of the present invention is to provide a depth information processing method, system, and electronic device, wherein in an embodiment of the present invention the depth information processing method can improve data processing efficiency and shorten processing time, so as to meet the real-time requirements of current electronic devices such as AR/VR products.
Another advantage of the present invention is to provide a depth information processing method, system, and electronic device in which no complex structure or heavy computation is required to achieve the above advantages, and the demands on software and hardware are low. The present invention thus provides an effective solution that not only supplies a depth information processing method, system, and electronic device, but also increases their practicality and reliability.
To achieve at least one of the above or other advantages and objects, the present invention provides a depth information processing method including the steps of:
acquiring a plurality of pieces of partition depth information, where the partition depth information is obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a set time sequence, the field-of-view partitions together forming a complete target field of view; and
stitching the plurality of pieces of partition depth information according to the arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
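As a minimal sketch of the stitching step, assuming each field-of-view partition yields an equally sized depth tile delivered in row-major order (the function and variable names here are illustrative, not from the patent):

```python
import numpy as np

def stitch_partition_depths(tiles, rows, cols):
    """Stitch per-partition depth tiles (row-major order) into one full
    depth map, assuming equal-sized, non-overlapping, edge-to-edge tiles."""
    if len(tiles) != rows * cols:
        raise ValueError("expected rows*cols tiles")
    # Join each row of tiles horizontally, then stack the rows vertically
    grid = [np.hstack(tiles[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(grid)

# Example: a 2x2 partition layout of 2x3 depth tiles with constant depths
tiles = [np.full((2, 3), fill) for fill in (1.0, 2.0, 3.0, 4.0)]
full = stitch_partition_depths(tiles, rows=2, cols=2)
print(full.shape)  # (4, 6)
```

Because adjacent partitions adjoin edge to edge (as the description below prefers), no cropping or blending is needed: concatenation alone reconstructs the full map.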
In an embodiment of the invention, the partitioned TOF detects different field-of-view partitions in a time sequence, so that the depth information of different partitions is obtained at different times.
In an embodiment of the present invention, the light source unit of the partitioned TOF is divided into a plurality of light source partitions in a specific arrangement, where the light source partitions correspond one-to-one with the field-of-view partitions and are lit in sequence to illuminate their corresponding field-of-view partitions.
In an embodiment of the present invention, the depth information processing method further includes the steps of:
acquiring color information of the target field of view, where the color information is obtained by photographing the target field of view with a conventional camera module; and
fusing the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
In an embodiment of the present invention, the conventional camera module is an RGB camera.
In an embodiment of the present invention, in the step of acquiring color information of the target field of view, wherein the color information is obtained by photographing the target field of view through a conventional camera module:
the color information is obtained by photographing the target field of view with the conventional camera module during the gaps between the partitioned TOF's detections of the field-of-view partitions.
In an embodiment of the invention, in the step of processing the complete depth information and the color information in fusion to obtain a three-dimensional color image of the target field of view:
the complete depth information is fused onto the color information based on the extrinsic parameters between the conventional camera module and the partitioned TOF to obtain the three-dimensional color image.
In an embodiment of the present invention, before the step of processing the plurality of partition depth information in a spliced manner according to the partition arrangement of the plurality of field of view partitions to obtain the complete depth information corresponding to the target field of view, the method further includes the steps of:
performing distortion correction on each piece of partition depth information to obtain corrected partition depth information.
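The correction step could look like the following sketch, which uses a simple single-coefficient radial distortion model; the model and all parameter names and values are illustrative assumptions, not the patent's calibration method:

```python
import numpy as np

def undistort_points(pts, k1, cx, cy, f):
    """Correct radial lens distortion on pixel coordinates using a
    single-coefficient radial model: x_u = x_d * (1 + k1 * r^2).
    Model and parameters are illustrative, not the patent's."""
    pts = np.asarray(pts, dtype=float)
    x = (pts[:, 0] - cx) / f          # normalize to camera coordinates
    y = (pts[:, 1] - cy) / f
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2             # radial correction factor
    xu = x * scale * f + cx           # back to pixel coordinates
    yu = y * scale * f + cy
    return np.stack([xu, yu], axis=1)

# The principal point itself is unaffected by radial distortion
center = undistort_points([[320.0, 240.0]], k1=-0.1, cx=320.0, cy=240.0, f=500.0)
print(center)  # [[320. 240.]]
```

In practice each partition would be corrected with the calibration of the optical path that produced it, before the corrected tiles are stitched.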
According to another aspect of the present invention, there is further provided a depth information processing method including the steps of:
acquiring color information of a target field of view, where the color information is obtained by photographing the target field of view with a conventional camera module;
acquiring a plurality of pieces of partition depth information, where the partition depth information is obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a set time sequence, the field-of-view partitions together forming the target field of view; and
fusing each piece of partition depth information with the color information in turn to obtain a three-dimensional color image of the target field of view.
In an embodiment of the present invention, in the step of fusing the plurality of partition depth information with the color information to obtain the three-dimensional color image of the target field of view, respectively:
the corresponding partition depth information is fused onto the color information sequentially, in the order in which the partitioned TOF detects the field-of-view partitions, to obtain the three-dimensional color image.
In an embodiment of the present invention, in the step of fusing the plurality of partition depth information with the color information to obtain the three-dimensional color image of the target field of view, respectively:
each piece of partition depth information is fused with the color information immediately after the partitioned TOF obtains it, so that the three-dimensional color image is complete as soon as the last piece of partition depth information has been fused onto the color information.
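This time-shared fusion can be sketched as follows, assuming the depth tiles and the color frame are already pixel-aligned (a simplifying assumption; the class and method names are hypothetical):

```python
import numpy as np

class IncrementalFuser:
    """Fuse per-partition depth tiles into a full-frame depth channel as
    they arrive, so the RGB-D result is complete as soon as the last
    partition has been detected. Assumes depth and color are pixel-aligned."""
    def __init__(self, color):
        self.color = color
        # NaN marks pixels whose partition has not been detected yet
        self.depth = np.full(color.shape[:2], np.nan)

    def fuse_tile(self, tile, top, left):
        h, w = tile.shape
        self.depth[top:top + h, left:left + w] = tile

    def complete(self):
        return not np.isnan(self.depth).any()

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
fuser = IncrementalFuser(rgb)
# Four 2x2 tiles arriving in the TOF's detection order
for i, (top, left) in enumerate([(0, 0), (0, 2), (2, 0), (2, 2)]):
    fuser.fuse_tile(np.full((2, 2), float(i + 1)), top, left)
print(fuser.complete())  # True
```

The design point is latency: fusion work overlaps with the TOF's detection of the next partition instead of waiting for the whole frame.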
According to another aspect of the present invention, there is further provided a depth information processing system including:
a depth information acquisition module for acquiring a plurality of pieces of partition depth information, where the partition depth information is obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a set time sequence, the field-of-view partitions together forming a complete target field of view; and
a stitching module for stitching the plurality of pieces of partition depth information according to the arrangement of the field-of-view partitions to obtain the complete depth information corresponding to the target field of view.
In one embodiment of the present invention, the depth information processing system further includes:
a color information acquisition module for acquiring color information of the target field of view, where the color information is obtained by photographing the target field of view with a conventional camera module; and
a simultaneous fusion processing module for fusing the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
In an embodiment of the present invention, the depth information processing system further includes a distortion correction module, configured to correct the distortion of the corresponding partition depth information, so as to obtain corrected partition depth information.
According to another aspect of the present invention, there is further provided a depth information processing system including:
a color information acquisition module for acquiring color information of a target field of view, where the color information is obtained by photographing the target field of view with a conventional camera module;
a depth information acquisition module for acquiring a plurality of pieces of partition depth information, where the partition depth information is obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a set time sequence, the field-of-view partitions together forming the target field of view; and
a time-sharing fusion processing module for fusing each piece of partition depth information with the color information in turn to obtain a three-dimensional color image of the target field of view.
According to another aspect of the present invention, there is further provided an electronic apparatus including:
at least one processor for executing instructions; and
a memory communicatively coupled to the at least one processor, wherein the memory has at least one instruction, wherein the instruction is executed by the at least one processor to cause the at least one processor to perform some or all of the steps in a depth information processing method, wherein the depth information processing method comprises the steps of:
acquiring a plurality of pieces of partition depth information, where the partition depth information is obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a set time sequence, the field-of-view partitions together forming a complete target field of view; and
stitching the plurality of pieces of partition depth information according to the arrangement of the field-of-view partitions to obtain complete depth information corresponding to the target field of view.
According to another aspect of the present invention, there is further provided an electronic apparatus including:
An electronic device body;
at least one segmented TOF, wherein the segmented TOF is configured to the electronic device body for segmented detection of a target field of view by the segmented TOF; and
at least one depth information processing system, wherein the depth information processing system is configured to the electronic device body or the partitioned TOF, and the depth information processing system comprises communicatively connected to each other:
a depth information obtaining module, configured to obtain a plurality of partition depth information, where the plurality of partition depth information is obtained by detecting a plurality of field of view partitions by the partition TOF according to a certain timing sequence, and the plurality of field of view partitions together form a complete target field of view; and
a stitching module for stitching the plurality of pieces of partition depth information according to the arrangement of the field-of-view partitions to obtain the complete depth information corresponding to the target field of view.
In an embodiment of the invention, the electronic device further includes a conventional camera module, wherein the conventional camera module is configured on the electronic device body, and is configured to capture the target field of view through the conventional camera module; the depth information processing system further comprises a color information acquisition module and a synchronous fusion processing module which are mutually and communicatively connected, wherein the color information acquisition module is used for acquiring color information of the target field of view, and the color information is acquired by shooting the target field of view through the conventional camera module; the simultaneous fusion processing module is used for processing the complete depth information and the color information in a fusion mode to obtain a three-dimensional color image of the target view field.
According to another aspect of the present invention, there is further provided an electronic apparatus including:
an electronic device body;
at least one conventional camera module, wherein the conventional camera module is configured on the electronic equipment body and is used for shooting a target field of view through the conventional camera module;
at least one segmented TOF, wherein the segmented TOF is configured to the electronic device body for detecting the target field of view by the segmented TOF; and
at least one depth information processing system, wherein the depth information processing system is configured to the electronic device body or the partitioned TOF, and the depth information processing system comprises communicatively connected to each other:
a color information acquisition module for acquiring color information of a target field of view, where the color information is obtained by photographing the target field of view with a conventional camera module;
a depth information acquisition module for acquiring a plurality of pieces of partition depth information, where the partition depth information is obtained by a partitioned TOF detecting a plurality of field-of-view partitions in a set time sequence, the field-of-view partitions together forming the target field of view; and
a time-sharing fusion processing module for fusing each piece of partition depth information with the color information in turn to obtain a three-dimensional color image of the target field of view.
Further objects and advantages of the present invention will become fully apparent from the following description and the accompanying drawings.
These and other objects, features and advantages of the present invention will become more fully apparent from the following detailed description, the accompanying drawings and the appended claims.
Drawings
Fig. 1 is a flow chart of a depth information processing method according to a first embodiment of the present invention.
Fig. 2 shows an example of partition illumination by the partitioned TOF in the above first embodiment of the invention.
Fig. 3 shows an example of light source partitioning of the partitioned TOF according to the above first embodiment of the invention.
Fig. 4 is a flowchart of a depth information processing method according to a second embodiment of the present invention.
Fig. 5 shows a block diagram schematic of a depth information processing system according to an embodiment of the invention.
Fig. 6 shows a block diagram schematic of a depth information processing system according to another embodiment of the present invention.
Fig. 7 shows a block diagram schematic of an electronic device according to an embodiment of the invention.
Fig. 8 shows a schematic structural diagram of another electronic device according to an embodiment of the invention.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention. The preferred embodiments in the following description are by way of example only and other obvious variations will occur to those skilled in the art. The basic principles of the invention defined in the following description may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
In the present invention, the terms "a" and "an" in the claims and specification should be understood as "one or more": in one embodiment the number of an element may be one, while in another embodiment it may be plural. The terms "a" and "an" are not to be construed as limiting an element to a single instance, and "the" does not limit the quantity of the element, unless the disclosure specifically indicates that only one is present.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Unless explicitly stated or limited otherwise, the terms "mounted," "connected," and "coupled" should be interpreted broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium. The specific meaning of these terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Schematic method
Referring to fig. 1 to 3 of the drawings of the specification, a depth information processing method according to an embodiment of the present invention is illustrated. Specifically, as shown in fig. 1, the depth information processing method includes the steps of:
s110: acquiring a plurality of subarea depth information, wherein the subarea depth information is obtained by respectively detecting a plurality of view field subareas according to a certain time sequence by a subarea TOF, and the view field subareas jointly form a complete target view field; and
S120: and processing the plurality of partition depth information in a splicing way according to the partition arrangement of the plurality of view field partitions so as to obtain complete depth information corresponding to the target view field.
Notably, since the entire target field of view is formed by a plurality of the field-of-view partitions, the area of each field-of-view partition is smaller than that of the entire target field of view. Meanwhile, the partitioned TOF detects different field-of-view partitions in a time sequence, i.e., at different times. Therefore, compared with a conventional TOF that detects the entire target field of view at once, the partitioned TOF can cover the entire target field of view at lower power consumption. This gives it clear application potential in consumer electronics: in a smartphone it can acquire real three-dimensional information over a long external range, enable AR-level applications, and create new selling points; in VR/AR it can meet the growing demands of motion capture and recognition. Beyond consumer electronics, the partitioned TOF can also support functions such as gesture sensing and proximity detection for innovative user interfaces, with broad application prospects in fields such as computing, home appliances, industrial automation, service robots, unmanned aerial vehicles, and the Internet of Things.
In other words, when the target field of view is divided into N field-of-view partitions of equal area, the partitioned TOF used in the present invention requires only 1/N of the power consumption of a conventional TOF, which detects the entire target field of view at once, to complete detection over the same distance; conversely, at the same power consumption, the detection distance of the partitioned TOF can reach N times the original distance. That is, the partitioned TOF can realize long-range detection at lower power consumption, which helps extend the detection range of TOF.
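Taking the document's stated proportionality between detection distance and power consumption at face value, the trade-off reduces to simple arithmetic (the helper names are illustrative):

```python
def partitioned_power(full_power, n_partitions):
    """Under the stated distance-power proportionality, illuminating one of
    N equal partitions at a time needs only 1/N of the full-field power
    for the same detection distance."""
    return full_power / n_partitions

def partitioned_range(full_range, n_partitions):
    """...or, at unchanged power, reaches N times the original distance."""
    return full_range * n_partitions

# A 2x2 partition layout (N = 4): quarter the power, or quadruple the range
print(partitioned_power(4.0, 4))  # 1.0
print(partitioned_range(5.0, 4))  # 20.0
```

Real emitters are not perfectly linear in this way, so these figures should be read as the idealized bound the document argues from, not a measured result.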
More specifically, the light source unit of the partitioned TOF of the present invention is divided into a plurality of light source partitions in a specific arrangement, where the light source partitions correspond one-to-one with the field-of-view partitions, so that the light source partitions are lit partition by partition in a time sequence and each illuminates its corresponding field-of-view partition. Meanwhile, the receiving unit of the partitioned TOF receives the optical signals reflected from the respective field-of-view partitions and determines the depth information of each partition (i.e., the partition depth information) by measuring the time of flight of the optical signals. Finally, the depth information of the entire target field of view (i.e., the complete depth information) is obtained by stitching the plurality of pieces of partition depth information, so that the partitioned TOF can detect the entire target field of view at long range while keeping power consumption low.
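The time-of-flight measurement itself follows the standard direct-ranging relation: the distance is half the measured round-trip flight time times the speed of light (a textbook formula, not specific to this patent):

```python
def tof_distance(round_trip_time_s, c=299_792_458.0):
    """Direct-ranging TOF: the light travels out and back, so the target
    distance is c * t / 2 for a measured round-trip time t."""
    return c * round_trip_time_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m
print(round(tof_distance(10e-9), 3))  # 1.499
```

This also shows why nanosecond-to-picosecond pulse widths matter: at metre-scale distances, centimetre precision requires timing resolution well below a nanosecond.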
Preferably, adjacent light source partitions do not overlap each other, and adjacent field-of-view partitions do not overlap each other, so that no region is detected twice, which helps further reduce power consumption. In this way, the depth information processing method of the present invention can stitch the plurality of pieces of partition depth information into one complete result purely from the arrangement of the field-of-view partitions, without cropping any depth information. In particular, adjacent field-of-view partitions adjoin edge to edge: this avoids overlap between them while also avoiding gaps that would leave undetectable areas, so the entire target field of view is detected comprehensively with no blind spots.
More preferably, the light source unit of the partitioned TOF is uniformly divided into n×m light source partitions, so that all light source partitions occupy the same area; correspondingly, all field-of-view partitions occupy the same area, allowing the depth information processing method to stitch the partition depth information acquired at different times into the complete depth information of the target field of view. In other words, the light source partitions are arranged in n rows and m columns and are rectangular. For example, in one example of the present invention the light source unit may be uniformly divided into 2×2 rectangular light source partitions; of course, in other examples the light source unit may also be uniformly divided into 4×4, 2×6, or 1×12 rectangular light source partitions, and so on.
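A uniform n×m division can be computed as in this hypothetical helper, which returns the partition rectangles in row-major order:

```python
def partition_grid(width, height, rows, cols):
    """Divide a light-source (or field-of-view) area uniformly into
    rows x cols rectangular partitions, returned row-major as
    (left, top, w, h) tuples. Illustrative helper for the uniform case."""
    w, h = width // cols, height // rows
    return [(c * w, r * h, w, h) for r in range(rows) for c in range(cols)]

# 2x2 division of a 640x480 area, as in the document's example
parts = partition_grid(640, 480, rows=2, cols=2)
print(parts[0])  # (0, 0, 320, 240)
```

The same rectangles serve both sides of the system: which emitter zone to light, and where the resulting depth tile lands in the stitched map.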
Illustratively, as shown in figs. 2 and 3, taking 2×2 light source partitions as an example, the light source unit 10 of the partitioned TOF includes four light source partitions 11, where each light source partition 11 may include a plurality of point light sources 111 distributed in an array. When the four light source partitions 11 are sequentially lit to emit optical signals, the signals emitted at different times pass through different portions of the light homogenizing unit 20 of the partitioned TOF to be homogenized, then propagate to the corresponding field-of-view partition 31 in the target field of view 30 and are reflected back to the receiving unit (not shown) of the partitioned TOF. Finally, the optical signals reflected from different field-of-view partitions 31 are received at different times by the receiving unit, and the distance of the corresponding field-of-view partition 31 is determined by measuring the time of flight of the optical signals, i.e., the corresponding partition depth information is obtained.
In particular, the light source unit 10 of the segmented TOF may be implemented as, but is not limited to, a vertical-cavity surface-emitting laser (VCSEL). The light homogenizing unit 20 of the segmented TOF may in turn be implemented as, but is not limited to, a light homogenizing device such as a random or regular microlens array or a diffractive optical element.
It is noted that, in another example of the present invention, the light source units of the segmented TOF may also be unevenly divided into n×m light source segments, so that the areas occupied by different light source segments are not necessarily the same; accordingly, the areas occupied by different field of view partitions are not necessarily the same, which is helpful for dividing different field of view partitions according to a specific detection scene.
In addition, in practical applications, for example, in the AR or VR application field, besides obtaining depth information of the target field of view, color information of the target field of view is required, so as to meet requirements of subsequent three-dimensional scene modeling or information identification. Accordingly, as shown in fig. 1, the depth information processing method of the present invention may further include the steps of:
s130: acquiring color information of the target field of view, wherein the color information is obtained by shooting the target field of view through a conventional camera module; and
s140: and processing the complete depth information and the color information in a fusion way to obtain a three-dimensional color image of the target field of view.
Preferably, the conventional camera module may be implemented as an RGB camera to obtain an RGB image of the target field of view (i.e., color information of the target field of view) by fully photographing the target field of view through the RGB camera. In this way, the depth information processing method of the present invention can fuse the complete depth information to the color information based on the external parameters (i.e. the relative pose between the conventional camera module and the partitioned TOF, etc.) to obtain a three-dimensional color image of the target field of view.
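The extrinsic-based fusion described above can be illustrated by projecting each point of the depth map into the RGB image through the relative pose (R, t) between the TOF and the camera, together with the camera intrinsics K. This is a simplified pinhole-model sketch, not the patent's actual implementation; all names and the toy calibration values are assumptions:

```python
import numpy as np

def fuse_depth_to_color(points_xyz, rgb_image, R, t, K):
    """Attach a color to each 3-D point from the depth information by
    projecting it into the RGB image using the extrinsics (R, t) and
    the camera intrinsics K (pinhole model, nearest-pixel lookup)."""
    cam = points_xyz @ R.T + t          # TOF frame -> camera frame
    uv = cam @ K.T                      # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]         # perspective division
    h, w = rgb_image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    colors = rgb_image[v, u]
    return np.hstack([points_xyz, colors])  # (x, y, z, r, g, b) rows

# Toy example: identity extrinsics; a point 1 m ahead projects to the
# principal point (5, 5) of an 11x11 image (all values assumed).
R, t = np.eye(3), np.zeros(3)
K = np.array([[100.0, 0.0, 5.0], [0.0, 100.0, 5.0], [0.0, 0.0, 1.0]])
img = np.zeros((11, 11, 3)); img[5, 5] = 9
fused = fuse_depth_to_color(np.array([[0.0, 0.0, 1.0]]), img, R, t, K)
```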
It is noted that if the conventional camera module photographs the target field of view while a light source partition of the light source unit of the segmented TOF is emitting a light signal, the conventional camera module will be interfered with by the light signal emitted by the segmented TOF, so that the obtained color information will exhibit color deviations, which in turn degrades the subsequent three-dimensional scene restoration or the desired AR/VR effect.
Therefore, in this embodiment of the present invention, the color information is preferably obtained by capturing the target field of view with the conventional camera module during the gaps between the segmented TOF's detections of the field-of-view partitions, so as to avoid interference with the conventional camera module by the segmented TOF and thereby ensure the accuracy of the color information. Of course, in other examples of the present invention, the color information may also be obtained by photographing the target field of view via the conventional camera module before or after the segmented TOF detects the target field of view, which likewise avoids interference with the conventional camera module by the segmented TOF.
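The preferred timing — shooting in the gaps between partition detections — amounts to checking that the camera exposure window avoids every partition emission window. A toy sketch with hypothetical timing values (the patent does not prescribe a scheduling algorithm):

```python
def exposure_in_gaps(exposure, emissions):
    """Return True when the RGB exposure window (start, end) does not
    overlap any partition-TOF emission window, i.e. the camera shoots
    in a detection gap. Intervals are half-open (start, end) pairs."""
    s, e = exposure
    return all(e <= a or s >= b for a, b in emissions)

# Four partitions emit for 1 ms each with 1 ms gaps (times in ms).
emissions = [(0, 1), (2, 3), (4, 5), (6, 7)]
ok = exposure_in_gaps((1.2, 1.8), emissions)       # falls in a gap
bad = exposure_in_gaps((2.5, 3.5), emissions)      # overlaps partition 2
```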
Illustratively, the depth information processing method of the present invention further includes, before the step S120, the steps of:
Distortion correction is respectively performed on the corresponding partition depth information to obtain corrected partition depth information, and the corrected partition depth information is then spliced to obtain complete depth information of higher quality.
It can be understood that the corrected partition depth information obtained by distortion correction can fully fit the corresponding field-of-view partition, so that accurate splicing processing can be performed subsequently. In addition, the distortion correction adopted by the depth information processing method of the present invention may obtain the distortion degrees of the different partitions through system calibration and then reverse-correct them to realize the distortion correction process. In other words, in the above example of the present invention, the depth information processing method realizes the corresponding distortion correction purely in software, which helps to significantly improve the uniformity and window efficiency of the target area.
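The calibrate-then-reverse-correct idea can be sketched with a simple one-term radial distortion model; the model, the coefficient value, and the iteration scheme are illustrative assumptions, since the patent does not specify which distortion model the calibration yields:

```python
import numpy as np

def undistort_points(pts, k1, center):
    """Reverse-correct radially distorted pixel coordinates using a
    calibrated distortion degree k1 for one partition.
    Fixed-point inversion of r_d = r_u * (1 + k1 * r_u**2)."""
    p = pts - center
    und = p.copy()
    for _ in range(5):                       # a few iterations converge
        r2 = np.sum(und ** 2, axis=1, keepdims=True)
        und = p / (1.0 + k1 * r2)
    return und + center

# A point observed at radius 1.02 with k1 = 0.02 undistorts to ~1.0.
center = np.array([0.0, 0.0])
u = undistort_points(np.array([[1.02, 0.0]]), 0.02, center)
```

In practice each partition would carry its own calibrated coefficients, and the corrected partition depth maps would then be spliced as described above.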
It should be noted that although the depth information processing method according to the above-described first embodiment of the present invention is capable of fusing the partition depth information obtained via the segmented TOF with the color information obtained via the conventional camera module to obtain a three-dimensional color image of the target field of view, the method first waits for the segmented TOF to obtain all the partition depth information and splice it into the complete depth information before fusing the depth information with the color information. The time consumed is therefore longer, which is unfavorable for meeting the real-time requirements of current products such as AR/VR. Accordingly, in order to shorten the time taken to process depth information, as shown in fig. 4, the depth information processing method according to the second embodiment of the present invention may include the steps of:
S210: acquiring color information of a target field of view, wherein the color information is obtained by shooting the target field of view through a conventional camera module;
s220: acquiring a plurality of subarea depth information, wherein the subarea depth information is obtained by respectively detecting a plurality of view field subareas according to a certain time sequence by a subarea TOF, and the view field subareas jointly form the target view field; and
s230: and sequentially carrying out fusion processing on the plurality of partition depth information and the color information to obtain the three-dimensional color image of the target view field.
It should be noted that, since the partition depth information is obtained by the segmented TOF detecting the field-of-view partitions according to a certain time sequence, different partition depth information is acquired at different times. Therefore, after at least one piece of partition depth information has been acquired and before the last piece is acquired, the already-acquired partition depth information can be fused with the color information, which reduces the overall time consumed by the depth information processing method and improves its processing efficiency.
In other words, in the step S230, the corresponding segment depth information is fused to the color information in sequence according to the detection timing of the segment TOF for the field of view segment, so as to obtain the three-dimensional color image. That is, different pieces of the partition depth information are time-divisionally fused to the color information to obtain the three-dimensional color image, contributing to a reduction in overall time required for the depth information processing method.
Preferably, in the step S230, immediately after each piece of partition depth information is obtained by the segmented TOF, that partition depth information is fused with the color information, so that the three-dimensional color image is obtained once the last piece of partition depth information has been fused to the color information.
Illustratively, after the segmented TOF detects a certain field-of-view partition to obtain the corresponding partition depth information, the depth information processing method directly fuses that partition depth information with the color information to obtain a local three-dimensional color image (i.e., color information with partial depth information); at the same time, the segmented TOF detects the next field-of-view partition to obtain the next piece of partition depth information. The next piece of partition depth information is then fused with the local three-dimensional color image to obtain an updated local three-dimensional color image; and so on, until the segmented TOF detects the last field-of-view partition to obtain the last piece of partition depth information, which is fused with the local three-dimensional color image to obtain the complete three-dimensional color image. It can be understood that, since the depth information processing method according to this embodiment of the present invention performs the acquisition and fusion of partition depth information simultaneously, the time required by the method is greatly shortened, and the data processing efficiency is significantly improved.
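The time-shared acquire-and-fuse pipeline described above can be sketched as a loop that fuses each partition's depth information into the color image as soon as it arrives; the driver and fusion callbacks are hypothetical stand-ins for the segmented TOF and the fusion routine:

```python
def time_shared_fusion(acquire_partition, fuse, color_image, num_partitions):
    """Fuse each partition's depth information into the color image as
    soon as it is acquired, instead of waiting for all partitions; the
    final iteration yields the complete three-dimensional color image."""
    result = color_image
    for i in range(num_partitions):
        depth_i = acquire_partition(i)   # segmented TOF detects partition i
        result = fuse(result, depth_i)   # fuse while the next is detected
    return result

# Toy example: "fusing" simply appends the partition index.
out = time_shared_fusion(lambda i: i, lambda acc, d: acc + [d], [], 4)
```

In a real pipeline `acquire_partition` would block on the TOF receiving unit while the previous fusion runs, which is what overlaps acquisition with processing.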
Schematic System
Referring to fig. 5 of the drawings of the specification, a depth information processing system for processing partition depth information obtained by partition detection of an entire target field of view via partition TOF according to an embodiment of the present invention is illustrated. Specifically, as shown in fig. 5, the depth information processing system 400 includes a depth information acquiring module 410 and a splicing processing module 420 that are communicatively connected to each other. The depth information obtaining module 410 is configured to obtain a plurality of partition depth information, where the plurality of partition depth information is obtained by detecting a plurality of field of view partitions respectively according to a time sequence by partition TOF, and the plurality of field of view partitions together form a complete target field of view. The splicing processing module 420 is configured to splice the plurality of partition depth information according to the partition arrangement of the plurality of view field partitions, so as to obtain complete depth information corresponding to the target view field.
It should be noted that, in the above embodiment of the present invention, as shown in fig. 5, the depth information processing system 400 may further include a color information acquisition module 430 and a synchronous fusion processing module 440 that are communicatively connected to each other, wherein the color information acquisition module 430 is configured to acquire color information of the target field of view, where the color information is obtained by capturing the target field of view by a conventional camera module; and wherein the synchronous fusion processing module 440 is configured to fuse the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
In addition, in an example of the present invention, as shown in fig. 5, the depth information processing system 400 may further include a distortion correction module 450, configured to correct the distortion of the corresponding partition depth information, so as to obtain corrected partition depth information.
According to another aspect of the present invention, as shown in fig. 6, another embodiment of the present invention also provides a depth information processing system 500, wherein the depth information processing system 500 includes a color information acquisition module 510, a depth information acquisition module 520, and a time-sharing fusion processing module 530, which are communicatively connected to each other, wherein the color information acquisition module 510 is configured to acquire color information of a target field of view, wherein the color information is obtained by capturing the target field of view through a conventional camera module; wherein the depth information acquisition module 520 is configured to acquire a plurality of partition depth information, wherein the plurality of partition depth information is obtained by detecting a plurality of field of view partitions respectively according to a timing sequence by partition TOF, and the plurality of field of view partitions collectively form the target field of view; the time-sharing fusion processing module 530 is configured to sequentially fuse the plurality of partition depth information with the color information, so as to obtain a three-dimensional color image of the target field of view.
Schematic electronic device
Next, an electronic device according to an embodiment of the present invention is described with reference to fig. 7. As shown in fig. 7, the electronic device 90 includes one or more processors 91 and memory 92.
The processor 91 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 90 to perform desired functions. In other words, the processor 91 comprises one or more physical devices configured to execute instructions. For example, the processor 91 may be configured to execute instructions that are part of: one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, implement a technical effect, or otherwise achieve a desired result.
The processor 91 may include one or more processors configured to execute software instructions. Additionally or alternatively, the processor 91 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the processor 91 may be single-core or multi-core, and the instructions executed thereon may be configured for serial, parallel, and/or distributed processing. The various components of the processor 91 may optionally be distributed across two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the processor 91 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
The memory 92 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 91 to perform some or all of the steps in the above-described exemplary methods of the present invention, and/or other desired functions.
In other words, the memory 92 includes one or more physical devices configured to hold machine readable instructions executable by the processor 91 to implement the methods and processes described herein. In implementing these methods and processes, the state of the memory 92 may be transformed (e.g., different data is saved). The memory 92 may include removable and/or built-in devices. The memory 92 may include optical memory (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. The memory 92 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location-addressable, file-addressable, and/or content-addressable devices.
It is to be appreciated that the memory 92 includes one or more physical devices. However, aspects of the instructions described herein may alternatively be propagated through a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a limited period of time. Aspects of the processor 91 and the memory 92 may be integrated together into one or more hardware logic components. These hardware logic components may include, for example, field Programmable Gate Arrays (FPGAs), program and application specific integrated circuits (PASICs/ASICs), program and application specific standard products (PSSPs/ASSPs), system on a chip (SOCs), and Complex Programmable Logic Devices (CPLDs).
In one example, as shown in FIG. 7, the electronic device 90 may further include an input device 93 and an output device 94, which are interconnected by a bus system and/or other form of connection mechanism (not shown). For example, the input device 93 may be, for example, a camera module or the like for capturing image data or video data. As another example, the input device 93 may include or interface with one or more user input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input device 93 may include or interface with selected Natural User Input (NUI) components. Such component parts may be integrated or peripheral and the transduction and/or processing of the input actions may be processed on-board or off-board. Example NUI components may include microphones for speech and/or speech recognition; infrared, color, stereoscopic display, and/or depth cameras for machine vision and/or gesture recognition; head trackers, eye trackers, accelerometers and/or gyroscopes for motion detection and/or intent recognition; and an electric field sensing component for assessing brain activity and/or body movement; and/or any other suitable sensor.
The output device 94 may output various information including the classification result and the like to the outside. The output device 94 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, the electronic device 90 may further comprise the communication means, wherein the communication means may be configured to communicatively couple the electronic device 90 with one or more other computer devices. The communication means may comprise wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network or a wired or wireless local area network or wide area network. In some embodiments, the communications apparatus may allow the electronic device 90 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Also, the order of the above-described processes may be changed.
Of course, only some of the components of the electronic device 90 that are relevant to the present invention are shown in fig. 7 for simplicity, components such as buses, input/output interfaces, etc. are omitted. In addition, the electronic device 90 may include any other suitable components depending on the particular application.
According to another aspect of the present invention, an embodiment of the present invention further provides another electronic device. Illustratively, as shown in fig. 8, the electronic device includes an electronic device body 800, at least one segmented TOF 700, and at least one depth information processing system 400 as described above, wherein the segmented TOF 700 is configured on the electronic device body 800 for partitioned detection of a target field of view; and wherein the depth information processing system 400 is configured on the electronic device body 800 and includes, communicably connected to each other: a depth information acquisition module, configured to acquire a plurality of pieces of partition depth information, where the plurality of pieces of partition depth information are obtained by the segmented TOF detecting a plurality of field-of-view partitions respectively according to a certain time sequence, and the plurality of field-of-view partitions together form a complete target field of view; and a splicing processing module, configured to splice the plurality of pieces of partition depth information according to the partition arrangement of the plurality of field-of-view partitions to obtain complete depth information corresponding to the target field of view. It is appreciated that in other examples of the invention, the depth information processing system 400 can also be configured directly on the segmented TOF 700, and the electronic device body 800 can be implemented as a companion device of the segmented TOF 700; that is, the electronic device can be implemented directly as a TOF product with partitioned detection capability.
In this example of the present invention, as shown in fig. 8, the electronic device further includes a conventional camera module 600, wherein the conventional camera module 600 is configured on the electronic device body 800 for photographing the target field of view; and wherein the depth information processing system 400 further includes a color information acquisition module and a synchronous fusion processing module communicably connected to each other, wherein the color information acquisition module is configured to acquire color information of the target field of view, where the color information is obtained by capturing the target field of view through the conventional camera module; and the synchronous fusion processing module is configured to fuse the complete depth information with the color information to obtain a three-dimensional color image of the target field of view.
It should be noted that, in other examples of the present invention, as shown in fig. 8, the electronic device may also include the above-mentioned depth information processing system 500, where the depth information processing system 500 is configured in the electronic device body 800, and the depth information processing system 500 includes the components communicatively connected to each other: the color information acquisition module is used for acquiring color information of a target view field, wherein the color information is acquired by shooting the target view field through a conventional camera module; a depth information acquisition module for acquiring a plurality of partition depth information, wherein the plurality of partition depth information is obtained by detecting a plurality of field of view partitions respectively by partition TOF according to a certain time sequence, and the plurality of field of view partitions jointly form the target field of view; and a time-sharing fusion processing module, which is used for sequentially carrying out fusion processing on the plurality of subarea depth information and the color information so as to obtain the three-dimensional color image of the target view field.
Notably, the electronic device body 800 may be any device or system capable of being configured with the segmented TOF 700 and the depth information processing system 400, such as glasses, a head-mounted display device, an augmented reality device, a virtual reality device, a smart phone, or a mixed reality device. It will be appreciated by those skilled in the art that although the electronic device body 800 is illustrated in fig. 8 as being implemented as AR glasses, this is not intended to limit the scope and content of the present invention.
It is also noted that in the apparatus, devices and methods of the present invention, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent aspects of the present invention.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are by way of example only and are not limiting. The objects of the present invention have been fully and effectively achieved. The functional and structural principles of the present invention have been shown and described in the examples and embodiments of the invention may be modified or practiced without departing from the principles described.

Claims (5)

1. A depth information processing method for an AR/VR device to obtain fused depth information and color information, the method comprising the steps of:
acquiring a plurality of subarea depth information, wherein the subarea depth information is obtained by respectively detecting a plurality of view field subareas according to a certain time sequence through a direct ranging subarea TOF installed on the AR/VR equipment, so that different subarea depth information is obtained at different times, and the plurality of view field subareas jointly form a complete target view field, wherein a light source unit of the direct ranging subarea TOF is divided into a plurality of specially arranged light source subareas, the light source subareas are in one-to-one correspondence with the view field subareas, the light source subareas are lit subarea by subarea according to a certain time sequence so as to illuminate the corresponding view field subareas, adjacent light source subareas do not overlap each other, the adjacent view field subareas adjoin each other edge to edge so as to avoid the occurrence of repeated detection areas, and the light signals respectively reflected from the view field subareas are received through the same receiving unit;
The distortion degrees of different subareas are obtained through system calibration, and the corresponding subarea depth information is subjected to distortion correction in a reverse correction manner to obtain corrected subarea depth information, wherein the corrected subarea depth information obtained after distortion correction can fit the corresponding view field subarea;
according to the partition arrangement of the view field partitions, the partition depth information is processed in a splicing manner to obtain complete depth information corresponding to the target view field, wherein the depth information processing method can splice the partition depth information into complete information only according to the specific arrangement of the view field partitions without cutting any depth information;
acquiring color information of the target field of view, wherein the color information is obtained by shooting the target field of view through a conventional camera module, and the color information is obtained by shooting the target field of view through the conventional camera module when detecting gaps of the field of view in the partitioned TOF; and
and in the step of fusing the plurality of the subarea depth information with the color information to obtain the three-dimensional color image of the target view field, after each subarea depth information is obtained through the direct ranging subarea TOF, the corresponding subarea depth information is fused with the color information to obtain the three-dimensional color image after the finally obtained subarea depth information is fused with the color information.
2. The depth information processing method of claim 1, wherein the conventional camera module is an RGB camera.
3. The depth information processing method according to claim 1, wherein, in the step of processing the complete depth information and the color information in fusion to obtain a three-dimensional color image of the target field of view:
based on the external parameters between the conventional camera module and the direct ranging partition TOF, the complete depth information is fused to the color information to obtain the three-dimensional color image.
4. Depth information processing system, characterized in that it is applied to AR/VR devices and comprises, communicatively connected to each other:
the color information acquisition module is used for acquiring color information of a target view field, wherein the color information is acquired by shooting the target view field through a conventional camera module;
a depth information acquisition module for acquiring a plurality of subarea depth information, wherein the subarea depth information is obtained by respectively detecting a plurality of view field subareas according to a certain time sequence through a direct ranging subarea TOF, and the plurality of view field subareas jointly form a complete target view field, wherein a light source unit of the direct ranging subarea TOF is divided into a plurality of specially arranged light source subareas, the light source subareas are in one-to-one correspondence with the view field subareas, the light source subareas are lit subarea by subarea according to a certain time sequence so as to illuminate the corresponding view field subareas, adjacent light source subareas do not overlap each other, the adjacent view field subareas adjoin each other edge to edge so as to avoid the occurrence of repeated detection areas, and the light signals respectively reflected from the view field subareas are received through the same receiving unit;
The distortion correction module is used for obtaining distortion degrees of different subareas through system calibration, and respectively carrying out distortion correction on the corresponding subarea depth information in a reverse correction mode to obtain corrected subarea depth information, wherein the corrected subarea depth information obtained after distortion correction can be in fit with the view field subarea; and
and a time-sharing fusion processing module, configured to sequentially fuse the plurality of partition depth information with the color information to obtain a three-dimensional color image of the target field of view, where the color information is obtained by capturing the target field of view through the conventional camera module when the partition TOF detects gaps between the partitions of the field of view, and immediately after each partition depth information is obtained by the direct ranging partition TOF, fuse the corresponding partition depth information with the color information, so as to obtain the three-dimensional color image after one finally obtained partition depth information is fused to the color information.
5. An electronic device, characterized in that it is an AR/VR device and comprises:
an electronic device body;
at least one conventional camera module, wherein the conventional camera module is arranged on the electronic device body for capturing a target field of view;
at least one direct ranging partition TOF, wherein the direct ranging partition TOF is arranged on the electronic device body for detecting the target field of view; and
at least one depth information processing system, wherein the depth information processing system is configured on the electronic device body or on the direct ranging partition TOF, and comprises, communicatively connected to one another:
a color information acquisition module for acquiring color information of the target field of view, wherein the color information is obtained by the conventional camera module capturing the target field of view;
a depth information acquisition module for acquiring a plurality of partition depth information, wherein the partition depth information is obtained by the direct ranging partition TOF respectively detecting a plurality of field-of-view partitions in a certain time sequence, and the field-of-view partitions jointly form the complete target field of view, wherein a light source unit of the direct ranging partition TOF is divided into a plurality of specially arranged light source partitions in one-to-one correspondence with the field-of-view partitions, the light source partitions are lit one by one in a certain time sequence so as to illuminate the corresponding field-of-view partitions, adjacent light source partitions do not overlap each other, and adjacent field-of-view partitions adjoin each other edge to edge so as to avoid repeated detection areas, the light signals respectively reflected back from the field-of-view partitions being received by the same receiving unit;
a distortion correction module for obtaining the distortion degrees of the different partitions through system calibration and respectively performing distortion correction on the corresponding partition depth information by reverse correction so as to obtain corrected partition depth information, wherein the corrected partition depth information obtained after distortion correction fits the corresponding field-of-view partition; and
a time-sharing fusion processing module for sequentially fusing the plurality of partition depth information with the color information to obtain a three-dimensional color image of the target field of view, wherein the color information is obtained by the conventional camera module capturing the target field of view during the gaps in which the partition TOF switches between field-of-view partitions, and each partition depth information is fused with the color information immediately after it is obtained by the direct ranging partition TOF, so that the three-dimensional color image is obtained once the last partition depth information has been fused with the color information.
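The per-partition "reverse correction" in the distortion correction module can be illustrated numerically: system calibration yields a distortion degree for each field-of-view partition, and each raw partition depth map is corrected by applying the inverse of that calibrated distortion. The scalar distortion model and all numbers below are invented for illustration only; the patent does not specify the distortion model.

```python
# Minimal sketch of per-partition reverse distortion correction.
# Calibration: suppose measuring a flat target at a known depth of 1.0 in
# each partition yields these (hypothetical) distorted readings.
calibration_readings = {0: 1.05, 1: 0.98, 2: 1.02}

# Distortion degree per partition = measured depth / true depth.
distortion = {k: v / 1.0 for k, v in calibration_readings.items()}


def reverse_correct(partition_id, raw_depth):
    """Undo the calibrated distortion by applying its inverse per pixel."""
    factor = distortion[partition_id]
    return [d / factor for d in raw_depth]


# Partition 0 over-reads by 5%, so dividing by 1.05 recovers true depths.
corrected = reverse_correct(0, [1.05, 2.10])
```

A real system would calibrate a spatially varying model (e.g. a radial lens model) per partition rather than a single scalar, but the correction direction is the same: invert the calibrated mapping so the corrected depth map fits its field-of-view partition.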
CN202010425515.1A 2020-02-01 2020-05-19 Depth information processing method and system and electronic equipment Active CN111458692B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010077893 2020-02-01
CN2020100778935 2020-02-01

Publications (2)

Publication Number Publication Date
CN111458692A CN111458692A (en) 2020-07-28
CN111458692B true CN111458692B (en) 2023-08-25

Family

ID=71209629

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202010425531.0A Pending CN111458693A (en) 2020-02-01 2020-05-19 Direct ranging TOF (time of flight) partitioned detection method and system and electronic equipment thereof
CN202010425515.1A Active CN111458692B (en) 2020-02-01 2020-05-19 Depth information processing method and system and electronic equipment
CN202010426643.8A Pending CN111366906A (en) 2020-02-01 2020-05-19 Projection apparatus and segmented TOF apparatus, manufacturing method thereof, and electronic apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010425531.0A Pending CN111458693A (en) 2020-02-01 2020-05-19 Direct ranging TOF (time of flight) partitioned detection method and system and electronic equipment thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010426643.8A Pending CN111366906A (en) 2020-02-01 2020-05-19 Projection apparatus and segmented TOF apparatus, manufacturing method thereof, and electronic apparatus

Country Status (1)

Country Link
CN (3) CN111458693A (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021238678A1 (en) * 2020-05-27 2021-12-02 杭州驭光光电科技有限公司 Diffractive optical element, partitioned uniform light projection system, electronic device and design method
CN112748583B (en) * 2020-08-11 2022-05-13 上海鲲游光电科技有限公司 Optical field modulator and modulation method thereof
CN111929703B (en) * 2020-09-14 2021-05-21 上海鲲游光电科技有限公司 Optical processing assembly, ToF emitting device and ToF depth information detector
WO2022111501A1 (en) * 2020-11-27 2022-06-02 宁波飞芯电子科技有限公司 Distance information acquisition system
CN114615397B (en) * 2020-12-09 2023-06-30 华为技术有限公司 TOF device and electronic equipment
CN112965073A (en) * 2021-02-05 2021-06-15 上海鲲游科技有限公司 Partition projection device and light source unit and application thereof
CN112946604A (en) * 2021-02-05 2021-06-11 上海鲲游科技有限公司 dTOF-based detection device and electronic device and application thereof
US20220412729A1 (en) * 2021-06-25 2022-12-29 Himax Technologies Limited Dot pattern projector for use in three-dimensional distance measurement system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299261A (en) * 2014-09-10 2015-01-21 深圳大学 Three-dimensional imaging method and system for human body
CN105005089A (en) * 2015-06-08 2015-10-28 上海交通大学 Airport foreign object debris detection system and method based on computer vision
WO2016056317A1 (en) * 2014-10-08 Sony Corporation Information processor and information-processing method
CN106447677A (en) * 2016-10-12 2017-02-22 广州视源电子科技股份有限公司 Image processing method and apparatus thereof
CN106574964A (en) * 2014-12-22 2017-04-19 谷歌公司 Integrated camera system having two dimensional image capture and three dimensional time-of-flight capture with a partitioned field of view
CN106612387A (en) * 2015-10-15 2017-05-03 杭州海康威视数字技术股份有限公司 Combined depth map acquisition method and depth camera
CN109427046A (en) * 2017-08-30 2019-03-05 深圳中科飞测科技有限公司 Distortion correction method, device and the computer readable storage medium of three-dimensional measurement
CN110300292A (en) * 2018-03-22 2019-10-01 深圳光峰科技股份有限公司 Projection distortion bearing calibration, device, system and storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2932562B1 (en) * 2008-06-12 2010-08-27 Univ Pasteur LIGHT PROJECTION DEVICE STRUCTURED BY MEANS OF VCSEL AND PHASE DIFFRACTIVE OPTICAL COMPONENTS.
US8860930B2 (en) * 2012-06-02 2014-10-14 Richard Kirby Three dimensional surface mapping system using optical flow
US9635231B2 (en) * 2014-12-22 2017-04-25 Google Inc. Time-of-flight camera system and method to improve measurement quality of weak field-of-view signal regions
US9674415B2 (en) * 2014-12-22 2017-06-06 Google Inc. Time-of-flight camera system with scanning illuminator
US9946089B2 (en) * 2015-10-21 2018-04-17 Princeton Optronics, Inc. Generation of coded structured light patterns using VCSEL arrays
CN107424188B (en) * 2017-05-19 2020-06-30 深圳奥比中光科技有限公司 Structured light projection module based on VCSEL array light source
US11126060B2 (en) * 2017-10-02 2021-09-21 Liqxtal Technology Inc. Tunable light projector
CN208110250U (en) * 2018-04-16 2018-11-16 深圳奥比中光科技有限公司 Pattern projector and depth camera
CN109086694B (en) * 2018-07-17 2024-01-19 北京量子光影科技有限公司 Face recognition system and method
CN109343070A (en) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 Time flight depth camera
CN109917352A (en) * 2019-04-19 2019-06-21 上海禾赛光电科技有限公司 The design method of laser radar and its emission system, the emission system of laser radar
CN110275381B (en) * 2019-06-26 2021-09-21 业成科技(成都)有限公司 Structural light emission module and depth sensing equipment using same
CN110609293B (en) * 2019-09-19 2022-05-27 深圳奥锐达科技有限公司 Distance detection system and method based on flight time

Also Published As

Publication number Publication date
CN111458693A (en) 2020-07-28
CN111458692A (en) 2020-07-28
CN111366906A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111458692B (en) Depth information processing method and system and electronic equipment
US11328446B2 (en) Combining light-field data with active depth data for depth map generation
EP3673460B1 (en) Depth map with structured and flood light
US10755425B2 (en) Automatic tuning of image signal processors using reference images in image processing environments
US9846960B2 (en) Automated camera array calibration
US20190068853A1 (en) Structured light and flood fill light illuminator
EP3285484A1 (en) Image processing apparatus, image generation method, and program
CN108646917B (en) Intelligent device control method and device, electronic device and medium
US11501123B2 (en) Method and apparatus for asynchronous data fusion, storage medium and electronic device
US20190325600A1 (en) Determining a pose of a handheld object
US9842400B2 (en) Method and apparatus for determining disparity
CN105491359B (en) Projection device, optical projection system and projecting method
CN113009508B (en) Multipath interference correction method for TOF module, system and electronic equipment thereof
US10019837B2 (en) Visualization alignment for three-dimensional scanning
WO2020019682A1 (en) Laser projection module, depth acquisition apparatus and electronic device
EP3903284A1 (en) Low-power surface reconstruction
Beňo et al. 3d map reconstruction with sensor kinect: Searching for solution applicable to small mobile robots
CN112954153B (en) Camera device, electronic equipment, depth of field detection method and depth of field detection device
JP6740614B2 (en) Object detection device and image display device including the object detection device
CN112987022A (en) Distance measurement method and device, computer readable medium and electronic equipment
US20150348323A1 (en) Augmenting a digital image with distance data derived based on actuation of at least one laser
CN110716642A (en) Method and equipment for adjusting display interface
CN105320359B (en) Electric whiteboard system, light spot position identifying processing method and apparatus
US11917273B2 (en) Image generating device and method thereof
CN109729250B (en) Electronic equipment and mobile platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230727

Address after: 200120 Shanghai Pudong New Area China (Shanghai) Free Trade Pilot Zone Lingang New Area, No. 2699 Jiangshan Road, Building 4, West Area

Applicant after: Shanghai kunyou Technology Co.,Ltd.

Address before: 201203 Room 201, 518 Bibo Road, Pudong New Area Free Trade Zone, Shanghai

Applicant before: SHANGHAI NORTH OCEAN PHOTONICS Co.,Ltd.

GR01 Patent grant