CN115670392B - Three-dimensional scanning device for acquiring facial expression of scanned object - Google Patents


Info

Publication number
CN115670392B
CN115670392B (application CN202310005920.1A)
Authority
CN
China
Prior art keywords
phase
gray code
projector
dimensional
scanned object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310005920.1A
Other languages
Chinese (zh)
Other versions
CN115670392A (en)
Inventor
涂颜帅
雷娜
陈伟
吴伯阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhituo Vision Technology Co ltd
Dalian University of Technology
Original Assignee
Beijing Zhituo Vision Technology Co ltd
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhituo Vision Technology Co ltd, Dalian University of Technology filed Critical Beijing Zhituo Vision Technology Co ltd
Priority to CN202310005920.1A priority Critical patent/CN115670392B/en
Publication of CN115670392A publication Critical patent/CN115670392A/en
Application granted granted Critical
Publication of CN115670392B publication Critical patent/CN115670392B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application discloses a three-dimensional scanning device for acquiring facial expressions of a scanned object. The three-dimensional scanning device includes: a projector for projecting Gray code coding patterns and at least one phase-shifted image onto the facial expression of a scanned object; a binocular camera for synchronously capturing the corresponding Gray code coding patterns and phase-shifted image as the projector projects them onto the facial expression of the scanned object; and an on-chip processor communicatively coupled with the projector and the binocular camera and configured to: perform three-dimensional matching according to the Gray code coding patterns and one phase-shifted image; and determine three-dimensional coordinate information based on the three-dimensional matching result, so as to acquire the facial expression of the scanned object. With the scheme of the application, the scanning accuracy and operation speed can be improved, and the requirement of capturing changes in the scanned object's facial expression can be met.

Description

Three-dimensional scanning device for acquiring facial expression of scanned object
Technical Field
The present application relates generally to the field of image processing technology. More particularly, the present application relates to a three-dimensional scanning apparatus and a three-dimensional scanning method for acquiring facial expressions of a scanned object, and a head-mounted three-dimensional scanning apparatus.
Background
With the advance of deep learning technology, research on face-related tasks has become a hotspot in both academia and industry. Well-known face tasks generally include face detection, face identity recognition, facial expression recognition, and the like, and mostly take two-dimensional RGB face images as input; the emergence and development of three-dimensional scanning and imaging technology has opened a new avenue of exploration for face-related tasks.
At present, character expressions in film and television animation are mainly made frame by frame by producers. Making each animated expression is time-consuming, wastes considerable labor cost, and the result is difficult to keep consistent with real expression changes. By dynamically capturing facial expression information with a three-dimensional scanner and then automatically transferring the expression onto the face of an animated character, roughly 70% of the time and labor cost in the film and animation industry can be saved, and the scanned facial expression is real and natural. The technology can also be applied to needs such as reshooting certain shots or replacing faces during an actor's performance, providing a brand-new solution for the film, animation and game industries. However, existing three-dimensional scanners usually need to acquire at least three phase-shifted images, resulting in slower scanning and computation speeds and failing to keep up with changes in the scanned object's facial expression.
In view of the above, it is desirable to provide a three-dimensional scanning device for acquiring facial expressions of a scanned object, so as to improve scanning accuracy and operation speed and meet the requirement of changing facial expressions of the scanned object.
Disclosure of Invention
To address at least one or more of the above-mentioned technical problems, the present application proposes, in various aspects, a scheme for acquiring facial expressions of a scanned subject.
In a first aspect, the present application provides a three-dimensional scanning apparatus for acquiring facial expressions of a scanned object, comprising: a projector for projecting a gray code encoding pattern and at least one phase-shifted image to a facial expression of a scanned object; the binocular camera is used for synchronously shooting the corresponding Gray code coding pattern and the phase shift image when the projector projects the Gray code coding pattern and the phase shift image to the facial expression of the scanned object; an on-chip processor communicatively coupled to the projector and the binocular camera and configured to: performing three-dimensional matching according to the Gray code coding pattern and one phase-shift image; and determining three-dimensional coordinate information based on the three-dimensional matching result so as to realize acquisition of the facial expression of the scanned object.
In one embodiment, wherein in three-dimensional matching according to the gray code encoding pattern and one of the phase shifted images, the on-chip processor is further configured to: determining an absolute phase associated with three-dimensional matching based on the gray code encoding pattern and one of the phase shifted images; and performing three-dimensional matching based on the absolute phase.
In another embodiment, wherein in determining the absolute phase associated with three-dimensional matching based on the gray code encoding pattern and one of the phase shifted images, the on-chip processor is further configured to: calculating a gray code value according to the gray code coding pattern and determining a phase unwrapping order; calculating a truncation phase based on the gray code value and one of the phase-shifted images; and determining an absolute phase associated with the three-dimensional matching based on the phase unwrapping order and the truncated phase.
In yet another embodiment, wherein in calculating a truncation phase based on the gray-coded value and one of the phase-shifted images, the on-chip processor is further configured to: determining a corresponding phase shift model according to one phase shift image, wherein the phase shift model at least comprises constant parameters; calculating the constant parameter based on the gray coded value; and calculating the truncated phase based on the constant parameter and the phase shift model.
In yet another embodiment, wherein the projector comprises at least a turning mirror and a line laser, the on-chip processor is further configured to: controlling the rotating mirror to rotate according to a preset angle and generating a trigger signal; and controlling the line laser to be switched on or off to generate light and dark stripe projection in response to the trigger signal so as to control the projector to project a Gray code coding pattern and at least one phase-shift image to the facial expression of the scanned object.
In yet another embodiment, wherein the projector further comprises an angle sensor for feeding back a rotation angle of the rotating mirror to obtain a feedback result, the on-chip processor is further for: constructing a rotating model related to the rotating mirror according to the feedback result; and optimally controlling the projector based on the rotation model.
In yet another embodiment, wherein the rotation model is associated with the rotation angle, the driving current of the projector and a damping, in optimally controlling the projector based on the rotation model, the on-chip processor is further configured to: calculating a conversion coefficient, a damping coefficient and delay time of the driving current and the rotation angle; and controlling the driving current and the line laser to be turned on or off according to the conversion coefficient, the damping coefficient, the delay time and the rotation angle so as to optimally control the projector.
In another embodiment, the three-dimensional scanning apparatus further comprises a network module, and the on-chip processor is further configured to transmit data to a terminal via the network module.
In a second aspect, the present application further provides a three-dimensional scanning method for acquiring facial expressions of a scanned object, comprising: projecting a gray code encoding pattern and at least one phase-shifted image to a facial expression of a scanned object using a projector; using a binocular camera to synchronously shoot corresponding Gray code coding patterns and phase shift images when the projector projects the Gray code coding patterns and the phase shift images to the facial expressions of the scanned objects; performing three-dimensional matching according to the Gray code coding pattern and one phase-shift image by using an on-chip processor; and determining three-dimensional coordinate information based on the three-dimensional matching result to realize acquisition of the facial expression of the scanned object.
In a third aspect, the present application further provides a head-mounted three-dimensional scanning device, comprising: a helmet; and a three-dimensional scanning apparatus according to embodiments of the aforementioned first aspect.
With the three-dimensional scanning device for acquiring the facial expression of a scanned object described above, the projector is arranged to project Gray code coding patterns and at least one phase-shifted image onto the facial expression of the scanned object, and the binocular camera synchronously captures the projected Gray code coding patterns and one phase-shifted image, so that three-dimensional matching is performed based on the Gray code coding patterns and the one phase-shifted image to determine three-dimensional coordinate information and thereby acquire the facial expression of the scanned object. On this basis, the projector projects specific patterns and three-dimensional matching is performed in combination with a single phase-shifted image, so that both the scanning speed of the three-dimensional scanning device and the computation speed for determining three-dimensional coordinate information can be improved, meeting the demands of the changing facial expression of a scanned object.
Further, the three-dimensional matching is carried out using the absolute phase determined from the Gray code coding patterns and one phase-shifted image. Since the absolute phase increases monotonically, three-dimensional matching that uses the absolute phase as the matching quantity is unambiguous: the matching point of a pixel is unique and accurate, so that three-dimensional point cloud reconstruction can be achieved precisely and the accuracy of three-dimensional scanning improved. In addition, the embodiment of the present application loses no phase accuracy even though only one phase-shifted image is used. Still further, the projector of the embodiments of the present application includes a rotating mirror and a line laser to realize projection of the Gray code coding patterns. In some embodiments, in order to accurately control the rotating mirror, the embodiment of the present application is further provided with an angle sensor to feed back the position of the rotating mirror in real time. In addition, the embodiment of the present application also provides a head-mounted three-dimensional scanning device, so as to collect the facial expression changes of the scanned object in a motion state.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present application will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present application are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
fig. 1 is a block diagram illustrating an exemplary structure of a three-dimensional scanning apparatus for acquiring facial expressions of a scanned object according to an embodiment of the present application;
FIG. 2 is an exemplary diagram illustrating a three-dimensional scanning device for acquiring facial expressions of a scanned object in accordance with an embodiment of the present application;
FIG. 3 is an exemplary diagram illustrating a projector projecting Gray code encoding patterns according to embodiments of the present application;
fig. 4 is a block diagram illustrating an exemplary structure of the whole three-dimensional scanning apparatus for acquiring facial expressions of a scanned object according to an embodiment of the present application;
FIG. 5 is an exemplary flow diagram illustrating a three-dimensional scanning method for acquiring facial expressions of a scanned object according to an embodiment of the application; and
fig. 6 is a block diagram illustrating an exemplary structure of an apparatus for acquiring facial expressions of a scanned object according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It should be understood that the embodiments described herein are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments disclosed in this specification without inventive effort shall fall within the scope of protection of the present application.
Fig. 1 is a block diagram illustrating an exemplary structure of a three-dimensional scanning apparatus 100 for acquiring facial expressions of a scanned object according to an embodiment of the present application. As shown in fig. 1, the three-dimensional scanning apparatus 100 may include a projector 110, a binocular camera 120, and an on-chip processor 130.
In one embodiment, the projector 110 described above may be used to project Gray code coding patterns and at least one phase-shifted image onto the facial expression of a scanned object. In one implementation scenario, the projector 110 may be, for example, a micro-electro-mechanical systems ("MEMS") structured light projector that projects specific patterns to uniquely encode the scan space. Compared with a traditional structured light projector, a MEMS structured light projector has the advantages of small volume, high frame rate, and the like.
In some embodiments, the projector 110 may include at least a turning mirror and a line laser. In a practical application scenario, a rotating mirror may be disposed on the surface of the projector 110 and at an angle to the horizontal direction (for example, as shown in fig. 2), the line laser may be disposed toward the rotating mirror through a fixed bracket, and the projection of the bright and dark stripes is achieved through the synchronous rotation of the switch of the line laser and the rotating mirror, so as to achieve the projection of the gray code encoding pattern. Specifically, the rotating mirror may be controlled by the on-chip processor 130 to rotate according to a predetermined angle and generate a trigger signal, and in response to the trigger signal, the line laser may be controlled by the on-chip processor 130 to turn on or off to generate a bright and dark fringe projection, thereby causing the projector to project a gray code encoding pattern and at least one phase-shifted image onto a facial expression of the scanned object. The aforementioned projector will be described in detail later in conjunction with fig. 3.
In one embodiment, the binocular camera 120 may be used to synchronously capture the corresponding Gray code coding patterns and phase-shifted image as the projector projects them onto the facial expression of the scanned object. On this basis, three-dimensional coordinate information can be solved through binocular stereo matching; that is, depth information is calculated from the pixel disparity between the two cameras. In particular, the present application synchronously captures a phase-shifted image while the projector projects a Gray code coding pattern onto the facial expression of the scanned object.
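As an illustration of the disparity-to-depth relation used in binocular stereo matching, the following is a minimal sketch; the focal length, baseline, and disparity values are illustrative assumptions rather than parameters of this application:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    # For a rectified stereo pair, depth Z = f * B / d, in units of the baseline.
    d = np.asarray(disparity_px, dtype=np.float64)
    return np.where(d > 0, focal_px * baseline_mm / np.maximum(d, 1e-9), np.inf)

# Example: 1200 px focal length, 60 mm baseline, 8 px disparity -> 9000 mm.
print(disparity_to_depth(8.0, 1200.0, 60.0))
```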
In one embodiment, the on-chip processor 130 may be configured to perform three-dimensional matching according to the Gray code coding patterns and one phase-shifted image, so as to determine three-dimensional coordinate information based on the three-dimensional matching result and thereby acquire the facial expression of the scanned object. In one embodiment, the aforementioned on-chip processor 130 may be, for example, a field programmable gate array ("FPGA") based processor. In performing three-dimensional matching according to the Gray code coding patterns and one phase-shifted image, the on-chip processor 130 is further configured to determine an absolute phase related to the three-dimensional matching from the Gray code coding patterns and one phase-shifted image, and then perform three-dimensional matching based on the absolute phase. Specifically, Gray code values may first be calculated from the Gray code coding patterns and a phase unwrapping order determined; a truncated phase is then calculated based on the Gray code values and one phase-shifted image, so that the absolute phase associated with three-dimensional matching is determined from the phase unwrapping order and the truncated phase.
In one exemplary scenario, denote the Gray code coding patterns as $G_i$, where $i = 1, 2, \ldots, N$ and $N$ represents the number of projected Gray code patterns. Generally, the larger the value of $N$, the larger the number of projection periods, so that the spatial distance covered by each period is smaller. The $N$ Gray code coding patterns $G_1, \ldots, G_N$ can be used to perform phase unwrapping over at most $2^N$ orders. In one embodiment, the Gray code value may be represented by the following formula:

$$V(u,v) = \sum_{i=1}^{N} B_i(u,v)\,2^{\,N-i}, \qquad B_1 = G_1,\; B_i = B_{i-1} \oplus G_i \tag{1}$$

where $V(u,v)$ represents the Gray code value, $G_i(u,v)$ represents the code value acquired at pixel position $(u,v)$ from the $i$-th pattern, and $N$ represents the number of Gray code coding patterns projected. Preferably, $N$ may be 6; that is, 6 Gray code coding patterns $G_1, \ldots, G_6$ are projected to perform Gray coding, and these 6 patterns can be used to perform phase unwrapping over at most $2^6 = 64$ orders.
In one implementation scenario, the code value $G_i(u,v)$ acquired at pixel position $(u,v)$ may be determined from the brightness values of the bright and dark stripes. Specifically, the mean value $I_m(u,v)$ over the corresponding number of Gray code coding patterns is first calculated at this position, and the magnitude relationship between the brightness value of each Gray code coding pattern at $(u,v)$ and this mean then determines the acquired code value. For example, if the brightness value of the $i$-th Gray code coding pattern at position $(u,v)$ is greater than the mean $I_m(u,v)$, then $G_i(u,v) = 1$; if it is below the mean, then $G_i(u,v) = 0$. From the code values $G_i(u,v)$ acquired in this way, the Gray code value $V(u,v)$ is determined according to formula (1), after which the phase unwrapping order $k(u,v)$ corresponding to it can be determined.
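To make the above decoding concrete, here is a minimal sketch of the per-pixel Gray code decoding, assuming the brightness-versus-mean binarization described above and the standard Gray-to-binary conversion; the function and variable names are illustrative:

```python
import numpy as np

def decode_gray_code(patterns):
    """patterns: list of N captured Gray code images of shape (H, W).

    Returns the per-pixel decimal value V(u, v) of formula (1), which
    also gives the phase unwrapping order k(u, v) over 2**N periods.
    """
    stack = np.stack([p.astype(np.float64) for p in patterns])  # (N, H, W)
    mean = stack.mean(axis=0)                  # per-pixel mean I_m(u, v)
    g = (stack > mean).astype(np.uint32)       # code values G_i(u, v)

    # Gray -> binary: B_1 = G_1, B_i = B_{i-1} XOR G_i.
    b = np.zeros_like(g)
    b[0] = g[0]
    for i in range(1, g.shape[0]):
        b[i] = np.bitwise_xor(b[i - 1], g[i])

    n = b.shape[0]
    weights = (2 ** np.arange(n - 1, -1, -1)).astype(np.uint32)  # 2^(N-i)
    return np.tensordot(weights, b, axes=1)    # V(u, v) as an (H, W) array
```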
After obtaining the Gray code values, a truncated phase may be calculated based on the Gray code values and one phase-shifted image. In one embodiment, a corresponding phase shift model may be determined from one phase-shifted image, where the phase shift model includes at least constant parameters; the constant parameters are then calculated based on the Gray code values, so that the truncated phase can be calculated from the constant parameters and the phase shift model. It will be appreciated that in a conventional three-dimensional scanning device, the projector projects three sinusoidal intensity distribution patterns $I_1, I_2, I_3$, which are captured synchronously by a camera. The corresponding conventional phase shift formula can be expressed as:

$$I_n(u,v) = A(u,v) + B(u,v)\cos\!\left(\varphi(u,v) + \frac{2\pi (n-1)}{3}\right) \tag{2}$$

where $I_n$ represents a phase-shifted image, $n$ represents the index of the acquired images with $n = 1, 2, 3$, $A$ and $B$ represent constant parameters corresponding to the ambient light and the modulation degree respectively, and $\varphi$ denotes the truncated phase. In a practical application scenario, combining formula (2) with the three captured images $I_1, I_2, I_3$ yields the truncated phase $\varphi$:

$$\varphi(u,v) = \arctan\!\left(\sqrt{3}\,\frac{I_1 - I_3}{2 I_2 - I_1 - I_3}\right) \tag{3}$$

As can be appreciated, since the inverse tangent in formula (3) has a value range of $(-\pi/2, \pi/2)$, $\varphi$ undergoes periodic truncation as the pixel changes, which is why $\varphi$ is called the truncated phase. In the embodiment of the present application, because, for example, 6 Gray code coding patterns are acquired and their code values consist only of 0 or 1, it can be considered that additional illumination information has been equivalently acquired during the projection and acquisition of the Gray codes. Thus, the truncated phase may be calculated based on the Gray code values and a single phase-shifted image. On this basis, the number of scans and the computational complexity can be reduced, the scanning speed and operation efficiency improved, and the scanning accuracy ensured.
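For comparison, the conventional three-step demodulation of formula (3) can be sketched as follows; using atan2 instead of a bare arctan is a common implementation choice, assumed here, that extends the recovered range:

```python
import numpy as np

def truncated_phase_three_step(i1, i2, i3):
    # Formula (3): phi = arctan(sqrt(3) * (I1 - I3) / (2*I2 - I1 - I3)).
    num = np.sqrt(3.0) * (i1 - i3)
    den = 2.0 * i2 - i1 - i3
    return np.arctan2(num, den)  # truncated phase, periodic across pixels
```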
In one embodiment, with reference to formula (2) above, a phase shift model corresponding to a single image may be obtained. In one exemplary scenario, the phase shift model for a single image may be represented as:

$$I(u,v) = A(u,v) + B(u,v)\cos\varphi(u,v) \tag{4}$$

As before, $A$ and $B$ represent the constant parameters and $\varphi$ denotes the truncated phase. In one implementation scenario, the constant parameters $A$ and $B$ may be calculated based on the Gray code values. For example, the minimum and maximum brightness values over the projected Gray code acquisition patterns may first be determined from the code values given by formula (1) above. As an example, assuming that 6 Gray code acquisition patterns are projected, the corresponding minimum and maximum values can be represented respectively as:

$$I_{\min}(u,v) = \min_{1 \le i \le 6} G_i(u,v) \tag{5}$$

$$I_{\max}(u,v) = \max_{1 \le i \le 6} G_i(u,v) \tag{6}$$

where $I_{\min}$ represents the minimum brightness over the 6 Gray code acquisition patterns and $I_{\max}$ the maximum. Referring again to the conventional phase shift formula (2), $I_{\min}$ corresponds to the case $\cos\varphi = -1$, and $I_{\max}$ corresponds to the case $\cos\varphi = +1$. Thus, formulas (5) and (6) can be transformed into:

$$I_{\min}(u,v) = A(u,v) - B(u,v) \tag{7}$$

$$I_{\max}(u,v) = A(u,v) + B(u,v) \tag{8}$$

In this implementation scenario, the constant parameters $A$ and $B$ can be solved from formulas (7) and (8), namely $A = (I_{\max} + I_{\min})/2$ and $B = (I_{\max} - I_{\min})/2$, and the truncated phase $\varphi = \arccos\big((I - A)/B\big)$ is then obtained by combining the phase shift model of a single image (formula (4)). Further, the absolute phase associated with three-dimensional matching can be determined from the phase unwrapping order and the truncated phase obtained above. In one embodiment, denoting the absolute phase as $\Phi$, the absolute phase associated with three-dimensional matching can be determined by the following formula:

$$\Phi(u,v) = \varphi(u,v) + 2\pi\,k(u,v) \tag{9}$$

where $\varphi$ denotes the truncated phase and $k$ denotes the phase unwrapping order. After the absolute phase is obtained, the position of each pixel point in the whole image can be determined, so that three-dimensional coordinate information is determined and the facial expression of the scanned object is acquired.
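The single-image pipeline of formulas (4) to (9) can be sketched as below, assuming the captured Gray code stack, one phase-shifted image, and the unwrapping order from the decoding step; the clipping, the small epsilon guard, and recovering the phase via arccos (which folds the sign and omits a sign-disambiguation step) are implementation assumptions of this sketch:

```python
import numpy as np

def absolute_phase_single_image(image, gray_stack, order_k):
    """image: one phase-shifted image I(u, v); gray_stack: (N, H, W)
    captured Gray code images; order_k: unwrapping order k(u, v)."""
    i_min = gray_stack.min(axis=0)               # formula (5)
    i_max = gray_stack.max(axis=0)               # formula (6)
    a = 0.5 * (i_max + i_min)                    # A from formulas (7) + (8)
    b = np.maximum(0.5 * (i_max - i_min), 1e-9)  # B, guarded against zero
    cos_phi = np.clip((image - a) / b, -1.0, 1.0)
    phi = np.arccos(cos_phi)                     # truncated phase, formula (4)
    return phi + 2.0 * np.pi * order_k           # absolute phase, formula (9)
```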
As can be seen from the above description, in the embodiment of the present application, the Gray code coding patterns projected by the projector uniquely encode the scanning space, at least one phase-shifted image is projected, and the binocular camera synchronously captures the corresponding Gray code coding patterns and one phase-shifted image to determine the absolute phase, so that three-dimensional matching is realized based on the absolute phase and the facial expression of the scanned object is acquired. In this way, the scanning speed of the three-dimensional scanning device and the computation speed for determining three-dimensional coordinate information can be improved, and the demands of the changing facial expression of the scanned object can be met. Moreover, since the absolute phase increases monotonically, three-dimensional matching with the absolute phase as the matching quantity is unambiguous, so that three-dimensional point cloud reconstruction can be realized accurately and the precision of three-dimensional scanning improved.
In an implementation scenario, the projector, the binocular camera, and the on-chip processor in the three-dimensional scanning apparatus according to the embodiment of the present application may form an integrated apparatus, for example integrated on one PC board. Compared with the bulky equipment of a traditional three-dimensional scanner, the three-dimensional scanning device of the embodiment of the present application has the advantages of small volume, light weight and low cost, and is convenient for large-scale production.
Fig. 2 is an exemplary schematic diagram illustrating a three-dimensional scanning apparatus for acquiring facial expressions of a scanned object according to an embodiment of the application. It should be understood that the three-dimensional scanning apparatus depicted in fig. 2 is an embodiment of the three-dimensional scanning apparatus 100 depicted in fig. 1, and therefore the description of the three-dimensional scanning apparatus depicted in fig. 1 applies equally to fig. 2.
As shown in fig. 2, the projector 110, the binocular camera 120, and the on-chip processor (not shown) in the three-dimensional scanning apparatus according to the embodiment of the present application may be integrated on one PC board 210. Among them, the aforementioned projector 110 may be disposed at a middle position of the edge of the PC board 210 and disposed to protrude from the edge of the PC board 210, and the projector 110 may be disposed in a shape shown in the drawing. The aforementioned binocular cameras 120 may be disposed on the left and right sides on the PC board 210. As can be seen from the foregoing, the aforementioned projector 110 may include at least a turning mirror 220 and a line laser 230, and the line laser 230 may be disposed toward the aforementioned turning mirror 220 by a fixing bracket (not shown in the drawings). In some embodiments, the aforementioned projector 110 may be, for example, a MEMS structured light projector, which not only can solve the problem of matching between left and right cameras, but also has the advantages of small volume, high frame rate, and the like.
In one implementation scenario, the three-dimensional scanning apparatus of the embodiment of the present application is arranged facing the scanned object, so that the projection of bright and dark stripes is realized by switching the line laser 230 of the projector 110 on and off in synchronization with the rotation of the rotating mirror 220, Gray code coding patterns are projected onto the facial expression of the scanned object, and the corresponding Gray code coding patterns and a phase-shifted image are synchronously captured via the binocular camera 120 while the projector projects them. Then, the on-chip processor first determines the absolute phase related to three-dimensional matching from the Gray code coding patterns and one phase-shifted image, and then performs three-dimensional matching based on the absolute phase.
Specifically, the Gray code value $V$ may first be determined from the Gray code coding patterns using formula (1) above, and based on this Gray code value the corresponding phase unwrapping order $k$ can be determined. In addition, based on the determined Gray code value, the constant parameters of the phase shift model corresponding to a single image can be calculated by means of formulas (4) to (8), and the truncated phase $\varphi$ is then obtained in combination with formula (2) above. Further, based on the previously obtained truncated phase $\varphi$, the absolute phase associated with three-dimensional matching can be obtained according to formula (9), so as to determine the position of each pixel point in the whole image. That is, three-dimensional coordinate information is determined to enable acquisition of the facial expression of the scanned object.
Fig. 3 is an exemplary diagram illustrating a projector projecting a Gray code coding pattern according to an embodiment of the present application. As shown in fig. 3, the projector 110 may include a rotating mirror 220 and a line laser 230, wherein the line laser 230 may be disposed toward the aforementioned rotating mirror 220 through a fixed bracket (not shown in the figure), and the projection of bright and dark stripes is achieved by switching the line laser 230 on and off in synchronization with the rotation of the rotating mirror 220, so as to project a Gray code coding pattern onto the facial expression of the scanned subject. In one embodiment, the rotating mirror 220 may be controlled via the on-chip processor to rotate according to a predetermined angle and generate a trigger signal, with one trigger signal emitted for each angular step.
Then, in response to the trigger signal, the line laser 230 is controlled via the on-chip processor to turn on or off to produce a bright and dark fringe projection, thereby causing the projector to project a Gray code coding pattern onto the facial expression of the scanned subject. Specifically, the line laser can be controlled to be turned on or off according to the stripes to be projected: when the line laser is on, a bright stripe is produced at the corresponding angle, and when it is off, a dark stripe is produced, the bright and dark stripes together constituting a Gray code coding pattern. In an actual application scenario, the rotating mirror 220 completes one rotation period every 1024 pulses, thereby completing the projection of one Gray code coding pattern. Preferably, 6 Gray code coding patterns can be projected to realize Gray coding.
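A minimal sketch of this trigger-driven stripe generation is given below; the 1024 angular positions per sweep follow the text, while the control callbacks wait_trigger and set_laser are hypothetical names standing in for the on-chip processor's actual interfaces:

```python
def project_gray_pattern(pattern_bits, wait_trigger, set_laser):
    """pattern_bits: 1024 booleans, one per mirror angle of a sweep;
    True -> bright stripe (laser on), False -> dark stripe (laser off)."""
    assert len(pattern_bits) == 1024
    for bit in pattern_bits:
        wait_trigger()        # block until the mirror reaches the next preset angle
        set_laser(bool(bit))  # switch the line laser for this angular slot
```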
It can be understood that the rotation of the rotating mirror is not at a uniform angular velocity but is a sinusoidal oscillation, so the embodiment of the present application proposes adding an angle sensor to the projector, which can be used to feed back the rotation angle of the rotating mirror and obtain a feedback result related to that angle. From the fed-back rotation angle, the position of the rotating mirror can be obtained. Further, because the rotating mirror rotates quickly, the embodiment of the present application also proposes reserving a margin in the control of the rotating mirror so as to provide the lead control quantity needed for accurate control. In an embodiment, the above on-chip processor of the embodiment of the present application is further configured to: construct a rotation model related to the rotating mirror according to the feedback result, so as to optimally control the projector based on the rotation model and thereby realize accurate control of the rotating mirror.
In an implementation scenario, the rotation model is associated with the rotation angle, the driving current of the projector, and a damping. In optimally controlling the projector based on the rotation model, the on-chip processor is further configured to: calculate a conversion coefficient and a damping coefficient relating the driving current and the rotation angle, as well as a delay time, and control the driving current and the on/off switching of the line laser according to the conversion coefficient, the damping coefficient, the delay time, and the rotation angle, so as to optimally control the projector. In one embodiment, the rotation model can be represented by the following equation:

$$\ddot{\theta}(t) + c\,\dot{\theta}(t) = k\,I(t) \tag{10}$$

where $\theta$ denotes the rotation angle of the rotating mirror, $I$ denotes the driving current of the projector, $k$ denotes the conversion coefficient between driving current and rotation angle, and $c$ denotes the equivalent damping coefficient. That is, the rotation angle of the rotating mirror is driven by the driving current and attenuated by the damping. Since the fed-back rotation angle $\theta$ and the driving current $I$ are subject to a time difference, the feedback is needed to construct the rotation model associated with the rotating mirror so that the projector can be optimally controlled based on this model.

In one exemplary scenario, the conversion coefficient $k$ between driving current and rotation angle and the equivalent damping coefficient $c$ may be measured from the natural frequency and response characteristics of the rotating mirror. Subsequently, the delay time $\tau$ is measured, and based on the above formula (10), the conversion coefficient $k$, the equivalent damping coefficient $c$, the delay time $\tau$ and the rotation angle $\theta$ are used to determine the rotation angle and to control the driving current $I$ and the on/off switching of the line laser, for optimal control of the projector.
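As a rough illustration of formula (10), the sketch below integrates the model with forward-Euler steps; the step size, the parameter values, and the Euler scheme itself are illustrative assumptions, not the optimal control law of this application:

```python
def simulate_mirror(n_steps, drive_current, k, c, tau, dt=1e-6):
    """Integrate theta'' + c * theta' = k * I(t - tau) from rest.

    drive_current: function t -> I(t); tau: measured delay time."""
    theta, omega, trace = 0.0, 0.0, []
    for step in range(n_steps):
        t = step * dt
        i_t = drive_current(max(t - tau, 0.0))  # current as seen after the delay
        omega += (k * i_t - c * omega) * dt     # theta'' = k*I - c*theta'
        theta += omega * dt
        trace.append(theta)
    return trace
```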
In an embodiment, the three-dimensional scanning apparatus of the embodiment of the present application may further include a network module, and the on-chip processor may be further configured to transmit data to the terminal via the network module. In some embodiments, the network module may be, for example, an Ethernet module, which may transmit data (e.g., phase-shifted images, Gray code coding patterns, three-dimensional coordinate information, etc.) to the terminal over a gigabit network using the User Datagram Protocol ("UDP"). In an implementation scenario, after the on-chip processor collects the data, it assembles the data into UDP packets and sends them to the network module, which communicates the packets to the terminal.
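A minimal sketch of such a UDP push is shown below; the terminal address, port, and datagram size are illustrative assumptions, and a real sender on a gigabit link would typically add sequence numbers on top of UDP since the protocol itself does not guarantee delivery:

```python
import socket

def send_scan_data(payload: bytes, addr=("192.168.1.100", 9000), chunk=1400):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for off in range(0, len(payload), chunk):
            sock.sendto(payload[off:off + chunk], addr)  # one datagram per chunk
    finally:
        sock.close()
```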
Fig. 4 is a block diagram illustrating an exemplary structure of the whole three-dimensional scanning apparatus for acquiring facial expressions of a scanned object according to an embodiment of the present application. As shown in fig. 4, the three-dimensional scanning apparatus of the embodiment of the present application may include a projector, which may be the MEMS structured light projector 410 and may include a rotating mirror and a line laser. In an actual application scenario, the rotating mirror can rotate according to a predetermined angle and generate a trigger signal, based on which the line laser can be controlled to turn on or off so as to produce a bright or dark stripe accordingly, thereby projecting a Gray code coding pattern onto the facial expression of the scanned object. In some embodiments, the aforementioned projector may further include an angle sensor to feed back the rotation angle (or position) of the rotating mirror in real time so as to precisely control the rotating mirror. Further, the aforementioned three-dimensional scanning apparatus may include a binocular camera, such as the CMOS cameras 420 and 421, which may synchronously capture a Gray code coding pattern and a phase-shifted image when the projector projects them onto the facial expression of the scanned object, so as to implement binocular stereo matching and calculate depth information, thereby solving the three-dimensional coordinate information.
As further shown, the three-dimensional scanning device may further include an on-chip processor, which may be a processor based on the FPGA 430. From the foregoing, the FPGA 430 can be used to control the on/off switching of the line laser in the MEMS structured light projector 410 in synchronization with the rotation of the rotating mirror to realize the projection of bright and dark stripes, thereby realizing the projection of Gray code coding patterns. In addition, the FPGA 430 can also receive the phase-shifted image and Gray code coding patterns synchronously captured by the CMOS cameras 420 and 421 for three-dimensional matching, so as to determine three-dimensional coordinate information based on the three-dimensional matching result and acquire the facial expression of the scanned object. For more details on the three-dimensional matching, reference may be made to the description of fig. 1 to fig. 3, which is not repeated herein. In addition, the three-dimensional scanning device of the embodiment of the present application may further include a network module, such as the Ethernet 440. The FPGA 430 can communicate with the terminal 450 at high speed via the Ethernet 440, for example transmitting the phase-shifted image, the Gray code coding pattern, or the three-dimensional coordinate information to the terminal 450 for subsequent use.
Fig. 5 is an exemplary flow diagram illustrating a three-dimensional scanning method 500 for acquiring facial expressions of a scanned object according to an embodiment of the application. As shown in fig. 5, at step 510, a Gray code coding pattern and at least one phase-shifted image are projected onto the facial expression of a scanned object using a projector. In one embodiment, the projector may be, for example, a MEMS structured light projector, and may include a rotating mirror and a line laser, and optionally an angle sensor. In some embodiments, the projection of bright and dark stripes is achieved by switching the line laser on and off in synchronization with the rotation of the rotating mirror, so that the Gray code coding pattern is projected, and the rotation angle of the rotating mirror can be fed back in real time through the angle sensor so as to accurately control the rotating mirror.
Next, at step 520, a binocular camera is used to synchronously capture corresponding gray code coding patterns and phase shifted images as the projector projects the gray code coding patterns and phase shifted images to the facial expression of the scanned object. Based on the gray code encoding pattern and a phase shifted image obtained as described above, at step 530, a three-dimensional match is made using the on-chip processor based on the gray code encoding pattern and a phase shifted image. In one embodiment, the absolute phase associated with the three-dimensional matching may be first determined from the gray code encoding pattern and a phase-shifted image, and the three-dimensional matching may be performed based on the absolute phase. More specifically, gray code values may be first calculated from a gray code encoding pattern and a phase unwrapping order may be determined, and then a truncated phase may be calculated based on the gray code values and a phase shift image to determine an absolute phase associated with three-dimensional matching based on the phase unwrapping order and the truncated phase, and three-dimensional matching may be performed based on the absolute phase. For more details on the three-dimensional matching, reference may be made to the content described in fig. 1 to fig. 3, and details of the present application are not repeated herein. Finally, at step 540, three-dimensional coordinate information is determined based on the three-dimensional matching results to enable acquisition of facial expressions of the scanned object.
In one embodiment, the present application also provides a head-mounted three-dimensional scanning device, which may include a helmet and the three-dimensional scanning device of the present application. In one implementation scenario, the three-dimensional scanning device may be fixed on a helmet, and the three-dimensional scanning device is disposed toward the face of the scanned object to scan the facial expression changes of the scanned object in a motion state.
Fig. 6 is a block diagram illustrating an exemplary structure of an apparatus 600 for acquiring facial expressions of a scanned object according to an embodiment of the present application. It will be appreciated that the device implementing aspects of the subject application may be a single device (e.g., a computing device) or a multifunction device including various peripheral devices.
As shown in fig. 6, the apparatus of the present application may include a central processing unit ("CPU") 611, which may be a general-purpose CPU, a dedicated CPU, or another execution unit on which processing and programs run. Further, the device 600 can also include a mass memory 612 and a read-only memory ("ROM") 613, wherein the mass memory 612 can be configured to store various types of data, including data related to Gray code coding patterns, phase-shifted images, algorithm data, intermediate results, and the various programs needed to operate the device 600. The ROM 613 may be configured to store the power-on self-test for the device 600, initialization of the system's functional modules, drivers for the system's basic input/output, and the data and instructions required to boot the operating system.
Optionally, device 600 may also include other hardware platforms or components, such as a tensor processing unit ("TPU") 614, a graphics processing unit ("GPU") 615, a field programmable gate array ("FPGA") 616, and a machine learning unit ("MLU") 617 as shown. It is understood that although various hardware platforms or components are shown in device 600, this is for illustration purposes only and is not intended to be limiting, as those skilled in the art will appreciate that corresponding hardware may be added or removed in accordance with actual needs. For example, the device 600 may include only a CPU, associated memory devices, and an interface device to implement the three-dimensional scanning method for acquiring facial expressions of a scanned object of the present application.
In some embodiments, to facilitate the transfer and interaction of data with external networks, the device 600 of the present application further includes a communication interface 618 such that it may be connected to a local area network/wireless local area network ("LAN/WLAN") 605 via the communication interface 618, and in turn may be connected to a local server 606 via the LAN/WLAN or to the Internet ("Internet") 607. Alternatively or additionally, device 600 of the present application may also be directly connected to the internet or cellular network via communication interface 618 based on wireless communication technology, such as 3 rd generation ("3G"), 4 th generation ("4G"), or 5 th generation ("5G"). In some application scenarios, the device 600 of the present application may also access the server 608 and database 609 of the external network as needed to obtain various known algorithms, data, and modules, and may remotely store various data, such as various types of data or instructions for rendering, for example, gray code encoding patterns and phase shifted images.
The peripheral devices of the apparatus 600 may include a display device 602, an input device 603, and a data transfer interface 604. In one embodiment, the display device 602 may, for example, include one or more speakers and/or one or more visual displays configured to provide voice prompts and/or visual display of the scanned subject's facial expression according to the present application. The input device 603 may include, for example, a keyboard, a mouse, a microphone, a gesture-capture camera, or other input buttons and controls configured to receive audio data and/or user instructions. The data transfer interface 604 may include, for example, a serial interface, a parallel interface, a universal serial bus ("USB") interface, a small computer system interface ("SCSI"), serial ATA, FireWire, PCI Express, or a high-definition multimedia interface ("HDMI"), configured for data transfer and interaction with other devices or systems. In accordance with aspects of the subject application, the data transmission interface 604 may receive phase-shifted images acquired by the binocular camera and transmit data or results including the phase-shifted images or various other types of data to the device 600.
The aforementioned CPU 611, mass memory 612, ROM 613, TPU 614, GPU 615, FPGA 616, MLU 617 and communication interface 618 of the device 600 of the present application may be interconnected via a bus 619, through which they can also exchange data with peripheral devices. In one embodiment, through the bus 619, the CPU 611 may control the other hardware components in the device 600 and their peripherals.
An apparatus for acquiring facial expressions of a scanned subject that may be used to carry out the present application is described above in connection with fig. 6. It is to be understood that the device structures or architectures herein are merely exemplary, and that the implementations and entities of the present application are not limited thereto but may be varied without departing from the spirit of the application.
From the above description in conjunction with the accompanying drawings, those skilled in the art will also understand that the embodiments of the present application can also be implemented by software programs. The present application thus also provides a computer program product. The computer program product may be used to implement the three-dimensional scanning method for acquiring facial expressions of a scanned object described in conjunction with fig. 5.
It should be noted that although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, in order to achieve desirable results. Rather, the steps depicted in the flowcharts may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
It should be understood that when the terms first, second, third, fourth, etc. are used in the claims of this application, in the description and in the drawings, they are used only to distinguish one object from another, and not to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the application. As used in the specification and claims of this application, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this application refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
Although the embodiments of the present application are described above, the descriptions are only examples for facilitating understanding of the present application and are not intended to limit the scope and application scenarios of the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.

Claims (7)

1. A three-dimensional scanning device for acquiring facial expressions of a scanned subject, comprising:
a projector for projecting a gray code encoding pattern and at least one phase-shifted image to a facial expression of a scanned object;
a binocular camera for synchronously photographing the corresponding gray code coding pattern and phase-shifted image when the projector projects the gray code coding pattern and the phase-shifted image to the facial expression of the scanned object;
an on-chip processor communicatively coupled with the projector and the binocular camera and configured to:
performing three-dimensional matching according to the Gray code coding pattern and one phase shift image; and
determining three-dimensional coordinate information based on the three-dimensional matching result to realize acquisition of facial expressions of the scanned object,
wherein in three-dimensional matching according to the Gray code encoding pattern and one of the phase-shifted images, the on-chip processor is further configured to:
calculating a gray code value from the gray code coding pattern and determining a phase unwrapping order;
determining a corresponding phase shift model according to one phase shift image, wherein the phase shift model at least comprises constant parameters;
calculating the constant parameter based on the Gray code value;
calculating a truncated phase based on the constant parameter and the phase shift model;
determining an absolute phase associated with three-dimensional matching according to the phase unwrapping order and the truncated phase; and
performing three-dimensional matching based on the absolute phase.
2. The three-dimensional scanning device of claim 1, wherein the projector comprises at least a turning mirror and a line laser, the on-chip processor further to:
controlling the rotating mirror to rotate according to a preset angle and generating a trigger signal; and
controlling the line laser to be switched on or off to generate light and dark stripe projection in response to the trigger signal so as to control the projector to project a Gray code coding pattern and at least one phase-shift image to the facial expression of the scanned object.
3. The three-dimensional scanning device of claim 2, wherein the projector further comprises an angle sensor for feeding back a rotation angle of the rotating mirror to obtain a feedback result, the on-chip processor further for:
constructing a rotation model related to the rotation mirror according to the feedback result; and
optimally controlling the projector based on the rotation model.
4. The three-dimensional scanning device of claim 3, wherein the rotation model is associated with the rotation angle, a drive current of the projector, and a damping, and in optimally controlling the projector based on the rotation model, the on-chip processor is further configured to:
calculating a conversion coefficient, a damping coefficient and delay time of the driving current and the rotation angle; and
controlling the driving current and the line laser to be switched on or switched off according to the conversion coefficient, the damping coefficient, the delay time and the rotation angle so as to optimally control the projector.
5. The three-dimensional scanning device of claim 1, further comprising a network module, the on-chip processor being further configured to transmit data to a terminal via the network module.
6. A three-dimensional scanning method for acquiring facial expressions of a scanned object, comprising:
projecting a Gray code coding pattern and at least one phase-shifted image onto a facial expression of a scanned object using a projector;
synchronously capturing, using a binocular camera, the corresponding Gray code coding pattern and phase-shifted image when the projector projects the Gray code coding pattern and the phase-shifted image onto the facial expression of the scanned object;
calculating a Gray code value from the Gray code coding pattern and determining a phase unwrapping order;
determining a corresponding phase shift model according to one of the phase-shifted images, wherein the phase shift model comprises at least constant parameters;
calculating the constant parameters based on the Gray code value;
calculating a truncated phase based on the constant parameters and the phase shift model;
determining an absolute phase associated with three-dimensional matching according to the phase unwrapping order and the truncated phase;
performing three-dimensional matching based on the absolute phase; and
determining three-dimensional coordinate information based on the three-dimensional matching result, so as to realize acquisition of the facial expression of the scanned object.
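To make the last two steps of claim 6 concrete, the sketch below matches left and right pixels by equal absolute phase along each rectified image row and triangulates depth from the resulting disparity. The camera parameters (focal length f in pixels, baseline b in metres) are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of phase-based stereo matching and triangulation; f and b
# are placeholder values for a rectified binocular pair.
import numpy as np

def match_and_triangulate(phase_l, phase_r, f=1200.0, b=0.06):
    """phase_l, phase_r: (H, W) absolute phase maps from rectified cameras,
    assumed to increase monotonically along each row. Returns an (H, W)
    depth map, zero where no match is found."""
    h, w = phase_l.shape
    depth = np.zeros((h, w))
    cols = np.arange(w, dtype=float)
    for y in range(h):
        # Column in the right image whose phase equals each left-image phase.
        x_right = np.interp(phase_l[y], phase_r[y], cols)
        disparity = cols - x_right
        valid = disparity > 0.5            # points in front of the cameras
        depth[y, valid] = f * b / disparity[valid]
    return depth
```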
7. A head-mounted three-dimensional scanning device, comprising:
a helmet; and
the three-dimensional scanning device according to any one of claims 1 to 5.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310005920.1A | 2023-01-04 | 2023-01-04 | Three-dimensional scanning device for acquiring facial expression of scanned object

Publications (2)

Publication Number | Publication Date
CN115670392A (en) | 2023-02-03
CN115670392B (en) | 2023-04-07

Family

ID=85057066

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310005920.1A (Active) | Three-dimensional scanning device for acquiring facial expression of scanned object | 2023-01-04 | 2023-01-04

Country Status (1)

Country | Link
CN (1) | CN115670392B (en)

Citations (3)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
CN109186476A (en) * | 2018-10-26 | 2019-01-11 | Guangdong University of Technology | Color structured light three-dimensional measurement method, device, equipment and storage medium
CN113012277A (en) * | 2021-02-03 | 2021-06-22 | China University of Geosciences (Wuhan) | Surface structured light multi-camera reconstruction method based on DLP (digital light processing)
CN115451860A (en) * | 2022-08-09 | 2022-12-09 | Chengdu Aircraft Industrial (Group) Co., Ltd. | Phase-shift three-dimensional measurement method based on gray-level multiplexed Gray code

Family Cites Families (5)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
JP2534617B2 (en) * | 1993-07-23 | 1996-09-18 | ATR Communication Systems Research Laboratories | Real-time recognition and synthesis method of human image
CN104315996B (en) * | 2014-10-20 | 2018-04-13 | Sichuan University | Method for realizing Fourier transform profilometry with a binary coding strategy
CA2945256C (en) * | 2016-10-13 | 2023-09-05 | LMI Technologies Inc. | Fringe projection for in-line inspection
US10935376B2 (en) * | 2018-03-30 | 2021-03-02 | Koninklijke Philips N.V. | System and method for 3D scanning
CN114723828B (en) * | 2022-06-07 | 2022-11-01 | Hangzhou Lingxi Robot Intelligent Technology Co., Ltd. | Multi-line laser scanning method and system based on binocular vision

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant