CN108492357A - Laser-based 3D four-dimensional data acquisition method and device - Google Patents
- Publication number: CN108492357A
- Application number: CN201810152236.5A
- Authority
- CN
- China
- Prior art keywords
- target object
- module
- point cloud
- image data
- laser
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The present invention provides a laser-based 3D four-dimensional data acquisition method and device. The method includes: Step 1, obtaining the image data of the target object captured by the camera of the laser scanning module at the current acquisition position; Step 2, positioning the image data of the target object and judging whether the target object is in a predetermined position; Step 3, if the target object is in the predetermined position, sending a plurality of move instructions to the laser scanning module in sequence; Step 4, obtaining the image data of the target object captured by the camera at each acquisition position it is moved to, and generating point cloud data of the target object from the image data captured at each acquisition position; Step 5, extracting feature point cloud information of the target object from the point cloud data and performing feature point distance calibration; Step 6, synthesizing the point cloud data based on the calibration distance obtained from the feature point distance calibration, to obtain 3D four-dimensional model data of the target object.
Description
Technical field
The present invention relates to the field of image technology, and in particular to a laser-based 3D four-dimensional data acquisition method and device.
Background art
Biometric features are physiological or behavioural characteristics intrinsic to a living being, such as fingerprints, palm prints, the iris, or the face. Biometric features have a certain uniqueness and stability: the difference between a given biometric feature of any two individuals is comparatively large, and a biometric feature generally does not change much over time. This makes biometric features well suited to scenarios such as identity authentication and system authentication.
Current biometric data are 2D data in a spatial plane. Taking the biometric features of the head and face as an example, the related applications all operate on simple pictures; that is, the head and face can only be processed and applied from some specific angle. Taking the biometric features of the fingers as another example, with traditional fingerprint acquisition equipment the hand contacts the surface of the device and the fingerprint image is obtained on the collection surface after pressure is applied. Since different users apply different amounts of pressure, and the position at which each hand contacts the collection surface also differs, the quality of the fingerprint image suffers and the efficiency of fingerprint recognition declines.
There is therefore an urgent need for a 3D four-dimensional data acquisition scheme for biometric features that is fast, has small error, and measures accurately.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a laser-based 3D four-dimensional data acquisition method, and a corresponding device, that overcome or at least partly solve the above problems.
According to one aspect of the embodiments of the present invention, a laser-based 3D four-dimensional data acquisition method is provided, including:
Step 1, obtaining the image data of the target object captured by the camera of the laser scanning module at the current acquisition position, the laser beam emitted by the laser of the laser scanning module being projected onto the target object and reflected onto the camera;
Step 2, positioning the image data of the target object and judging whether the target object is in a predetermined position;
Step 3, if the target object is in the predetermined position, sending a plurality of move instructions to the laser scanning module in sequence, instructing the laser scanning module to change its acquisition position;
Step 4, obtaining the image data of the target object captured by the camera at each acquisition position it is moved to, and generating point cloud data of the target object from the image data captured at each acquisition position;
Step 5, extracting feature point cloud information of the target object from the point cloud data, and performing feature point distance calibration according to the extracted feature point cloud information;
Step 6, synthesizing the point cloud data based on the calibration distance obtained from the feature point distance calibration, to obtain 3D four-dimensional model data of the target object.
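The six steps above form a capture-position-move-reconstruct loop. The following is a minimal, hypothetical sketch of that loop: every class and function here (the scanner stub, the point-cloud and calibration placeholders) is an illustrative assumption, not a real device API or the patent's implementation.

```python
# Hypothetical sketch of the six-step acquisition loop. All hardware-facing
# calls are stand-in stubs; real point-cloud generation and calibration are
# far more involved.

class StubScanner:
    """Stand-in for the laser scanning module (laser + camera)."""
    def __init__(self, centered=True):
        self.centered = centered
        self.position = 0

    def capture(self):
        # Step 1: the camera images the laser reflected off the target.
        return {"position": self.position, "target_centered": self.centered}

    def move_to(self, position):
        # Step 3: a move instruction changes the acquisition position.
        self.position = position

def build_point_cloud(frames):
    # Step 4 placeholder: one fake point per captured frame.
    return [(float(f["position"]), 0.0, 0.0) for f in frames]

def calibrate_scale(cloud, known_distance):
    # Step 5 placeholder: ratio of a known feature distance to the
    # measured span between the first and last points.
    measured = abs(cloud[-1][0] - cloud[0][0])
    return known_distance / measured if measured else 1.0

def acquire_model(scanner, positions):
    first = scanner.capture()                           # Step 1
    if not first["target_centered"]:                    # Step 2
        return None                                     # caller repositions, retries
    frames = [first]
    for pos in positions:                               # Step 3
        scanner.move_to(pos)
        frames.append(scanner.capture())                # Step 4 (capture)
    cloud = build_point_cloud(frames)                   # Step 4 (point cloud)
    scale = calibrate_scale(cloud, known_distance=2.0)  # Step 5
    return [(x * scale, y * scale, z * scale)           # Step 6
            for x, y, z in cloud]

model = acquire_model(StubScanner(), positions=[1, 2, 3, 4])
```

The early `None` return mirrors the optional clause below: when the target is not in the predetermined position, control returns to step 1 after repositioning.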
Optionally, when the target object is judged not to be in the predetermined position, the method further includes: determining, from the result of positioning the image data of the target object, the direction in which the carrying equipment bearing the target object needs to move; sending a control instruction to the carrying equipment instructing it to move in that direction; and returning to step 1.
Optionally, the target object includes the face and/or head of a human body.
Optionally, judging in step 2 whether the target object is in the predetermined position includes: recognizing the image data of the target object and judging whether the contour of the face and/or head of the human body in the image data is complete; if it is complete, the target object is determined to be in the predetermined position.
Optionally, after step 1 the method further includes: sending the image data of the target object captured by the camera to a guidance display screen for display.
Optionally, the target object includes the hand of a human body.
Optionally, the hand of the human body includes the fingers and/or the palm.
Optionally, judging in step 2 whether the target object is in the predetermined position includes: recognizing the current acquisition position from the image data of the target object and judging whether the current acquisition position is a hand; if so, the target object is determined to be in the predetermined position.
Optionally, sending a plurality of move instructions to the laser scanning module in sequence to instruct it to change its acquisition position includes: sending a plurality of move instructions to the laser scanning module in sequence, instructing the laser scanning module to change its acquisition angle for each instruction and to scan the acquisition position.
Optionally, step 5 includes: preprocessing the point cloud data scanned at each acquisition position, the preprocessing including at least one of noise reduction, smoothing, and visualization processing; extracting the feature point cloud information of the target object from each preprocessed point cloud; and calibrating the distance between feature points according to the feature point cloud information, to obtain the key dimensions of the 3D four-dimensional model of the target object.
According to another aspect of the embodiments of the present invention, a laser-based 3D four-dimensional data acquisition device is provided, including:
an image data acquisition module, configured to obtain the image data of the target object captured by the camera of the laser scanning module at the current acquisition position, the laser beam emitted by the laser of the laser scanning module being projected onto the target object and reflected onto the camera;
a positioning module, configured to position the image data of the target object and judge whether the target object is in a predetermined position;
a movement control module, configured to send, when the target object is in the predetermined position, a plurality of move instructions to the laser scanning module in sequence, instructing it to change its acquisition position;
a point cloud generation module, configured to obtain the image data of the target object captured by the camera at each acquisition position it is moved to, and generate point cloud data of the target object from the image data captured at each acquisition position;
a distance calibration module, configured to extract feature point cloud information of the target object from the point cloud data and perform feature point distance calibration according to the extracted feature point cloud information; and
a 3D synthesis module, configured to synthesize the point cloud data based on the calibration distance obtained from the feature point distance calibration, to obtain 3D four-dimensional model data of the target object.
Optionally, the movement control module is further configured, when the target object is judged not to be in the predetermined position, to determine from the result of positioning the image data of the target object the direction in which the carrying equipment bearing the target object needs to move, send a control instruction to the carrying equipment instructing it to move in that direction, and then trigger the image data acquisition module.
Optionally, the target object includes the face and/or head of a human body, and the positioning module judges whether the target object is in the predetermined position by recognizing the image data of the target object and judging whether the contour of the face and/or head of the human body in the image data is complete; if it is complete, the target object is determined to be in the predetermined position.
Optionally, the device further includes a guidance display module, configured to send the image data of the target object captured by the camera to a guidance display screen for display.
Optionally, the target object includes the hand of a human body, the hand including the fingers and/or the palm, and the positioning module judges whether the target object is in the predetermined position by recognizing the current acquisition position from the image data of the target object and judging whether the current acquisition position is a hand; if so, the target object is determined to be in the predetermined position.
Optionally, the movement control module sends a plurality of move instructions to the laser scanning module in sequence, instructing the laser scanning module to change its acquisition angle for each instruction and to scan the acquisition position.
Optionally, the distance calibration module performs distance calibration by: preprocessing the point cloud data scanned at each acquisition position, the preprocessing including at least one of noise reduction, smoothing, and visualization processing; extracting the feature point cloud information of the target object from each preprocessed point cloud; and calibrating the distance between feature points according to the feature point cloud information, to obtain the key dimensions of the 3D four-dimensional model of the target object.
The embodiments of the present invention provide a laser-based 3D four-dimensional data acquisition method and device. In the method, the laser beam emitted by the laser is projected onto the target object and reflected onto the camera; the image data of the target object captured by the camera is obtained and positioned; when the target object is determined to be in the predetermined position, move instructions are repeatedly sent to the laser scanning module to change its acquisition position; the image data captured by the camera at the multiple acquisition positions is obtained and fused into a model, yielding 3D four-dimensional model data of the target object and completing its reconstruction. Since the laser scanning module may use one or more lasers and one or more cameras, and the laser illuminates the target object fully and without blind angles, the photographs taken by the camera are of higher quality, making the data easier to process and improving accuracy.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
From the following detailed description of specific embodiments of the present invention, taken in conjunction with the accompanying drawings, the above and other objects, advantages and features of the present invention will become clearer to those skilled in the art.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art from the detailed description of the preferred embodiments below. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flowchart of a laser-based 3D four-dimensional data acquisition method according to an embodiment of the invention;
Fig. 2 shows an architecture diagram of a laser-based head and face 3D four-dimensional data acquisition system according to an embodiment of the invention;
Fig. 3 shows a module structure diagram of the laser-based head and face 3D four-dimensional data acquisition system according to an embodiment of the invention;
Fig. 4 shows a workflow chart of the laser-based head and face 3D four-dimensional data acquisition system according to an embodiment of the invention;
Fig. 5 shows an architecture diagram of a laser-based hand 3D four-dimensional data acquisition system according to an embodiment of the invention;
Fig. 6 shows a module structure diagram of the laser-based hand 3D four-dimensional data acquisition system according to an embodiment of the invention;
Fig. 7 shows a structural diagram of a specific implementation of the slide rails and the slide rail control module according to an embodiment of the invention;
Fig. 8 shows a workflow chart of the laser-based hand 3D four-dimensional data acquisition system according to an embodiment of the invention; and
Fig. 9 shows a structural diagram of a laser-based 3D four-dimensional data acquisition device according to an embodiment of the invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope fully conveyed to those skilled in the art.
In order to solve the above technical problems, an embodiment of the present invention provides a laser-based 3D four-dimensional data acquisition method.
The 3D four-dimensional data in the present invention refers to data formed by combining three-dimensional spatial data with time-dimension data. Combining three-dimensional space with the time dimension means a data set formed from images or video taken at multiple equal or unequal time intervals, from different angles, in different orientations, or in different states.
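The definition above (3D spatial data tagged with a time dimension, acquisition angle, and state) can be illustrated with a small data structure. This is a sketch for clarity only; the field names and types are assumptions, not part of the patent.

```python
# Illustrative sketch of "3D four-dimensional data": a sequence of 3D point
# sets, each tagged with a capture time and acquisition angle, forming the
# time-dimension data set described above.
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class Frame4D:
    timestamp: float        # time dimension
    angle_deg: float        # acquisition angle of this capture
    points: List[Point3D]   # 3D spatial data for this capture

@dataclass
class Sequence4D:
    frames: List[Frame4D] = field(default_factory=list)

    def add(self, frame: Frame4D) -> None:
        self.frames.append(frame)

    def duration(self) -> float:
        """Span of the time dimension covered by the sequence."""
        if len(self.frames) < 2:
            return 0.0
        return self.frames[-1].timestamp - self.frames[0].timestamp

seq = Sequence4D()
seq.add(Frame4D(0.0, 0.0, [(0.0, 0.0, 0.0)]))
seq.add(Frame4D(0.5, 45.0, [(0.1, 0.0, 0.0)]))
```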
Fig. 1 shows a flowchart of a laser-based 3D four-dimensional data acquisition method according to an embodiment of the invention. As shown in Fig. 1, the method may include the following steps S102 to S112.
Step S102: obtain the image data of the target object captured by the camera of the laser scanning module at the current acquisition position, the laser beam emitted by the laser being projected onto the target object and reflected onto the camera.
In a specific application, a transmission module may optionally be arranged on the camera. The transmission module may be a wired external interface, for example USB, or a wireless external interface, for example Bluetooth. After the camera captures the image data, it transmits the data outward, so that the image data captured by the camera can be obtained.
Step S104: position the image data of the target object and judge whether the target object is in a predetermined position.
In an optional embodiment of the invention, for ease of processing, after the image data captured by the camera is obtained, the received image data may be decoded and converted into a predetermined picture format, such as JPG. In step S104, the converted image data is positioned.
Step S106: when the target object is judged to be in the predetermined position, send a plurality of move instructions to the laser scanning module in sequence, instructing it to change its acquisition position.
In a specific application, a rule by which the laser scanning module changes its acquisition position may be set according to the target object actually scanned, and the move instructions instruct the laser scanning module to move to the corresponding acquisition positions.
Step S108: obtain the image data of the target object captured by the camera at each acquisition position it is moved to, and generate point cloud data of the target object from the image data captured at each acquisition position.
In an optional embodiment of the invention, in step S108 the data captured by the camera at each acquisition position may first be converted to a predetermined picture format; the several images are then synthesized into one image, which is processed to obtain the point cloud data of the target object. For the fusion, spatial-domain methods such as the gradient difference method, partition method, logical filter method, weighted mean method, mathematical morphology method, image algebra method, or simulated annealing may be used, as well as frequency-domain methods such as the Laplacian pyramid method, the wavelet transform method, or pyramid image fusion.
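Of the fusion methods listed, the weighted mean method is the simplest to show concretely. The following is a minimal sketch of weighted-mean fusion on two same-size grayscale images represented as nested lists; the fixed weight is an illustrative assumption (a real system would typically derive weights per pixel or per region).

```python
# Minimal sketch of weighted-mean image fusion: each output pixel is
# w_a * A + (1 - w_a) * B. Images are nested lists of pixel intensities.

def weighted_mean_fusion(img_a, img_b, w_a=0.5):
    """Fuse two equally sized grayscale images pixel by pixel."""
    if len(img_a) != len(img_b) or len(img_a[0]) != len(img_b[0]):
        raise ValueError("images must have the same shape")
    w_b = 1.0 - w_a
    return [
        [w_a * a + w_b * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

fused = weighted_mean_fusion([[0, 100], [200, 50]],
                             [[100, 100], [0, 150]], w_a=0.25)
```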
The point cloud data may contain the spatial position information and colour information of the feature points. The format of the point cloud data may be as follows:
X1 Y1 Z1 R1 G1 B1 A1
X2 Y2 Z2 R2 G2 B2 A2
……
Xn Yn Zn Rn Gn Bn An
where Xn, Yn and Zn denote the X, Y and Z coordinates of a feature point in space; Rn, Gn and Bn denote the values of the R, G and B channels of the feature point's colour information; and An denotes the value of the Alpha channel of the feature point's colour information.
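A record format like the one above is straightforward to parse. The following sketch reads the whitespace-separated X Y Z R G B A records into Python structures; it assumes one record per line, which the patent's example suggests but does not mandate.

```python
# Small parser for the point-cloud record format shown above:
# X Y Z R G B A per line (coordinates as floats, channels as integers).

def parse_point_cloud(text):
    points = []
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) != 7:
            raise ValueError(f"expected 7 fields, got {len(fields)}: {line!r}")
        x, y, z = (float(v) for v in fields[:3])
        r, g, b, a = (int(v) for v in fields[3:])
        points.append({"pos": (x, y, z), "rgba": (r, g, b, a)})
    return points

cloud = parse_point_cloud("""
0.10 0.20 0.30 255 128 0 255
0.40 0.50 0.60 10 20 30 128
""")
```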
Step S110: extract the feature point cloud information of the target object from the point cloud data, and perform feature point distance calibration according to the extracted feature point cloud information.
Step S112: synthesize the point cloud data based on the calibration distance obtained from the feature point distance calibration, to obtain the 3D four-dimensional model data of the target object.
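One way to read steps S110 and S112 is as a scale calibration: the measured distance between two extracted feature points is compared with a known physical distance (the fixture described later carries a scale), and the whole cloud is rescaled accordingly. The sketch below illustrates that reading; the choice of feature points and the known distance are assumptions for illustration, not the patent's prescribed algorithm.

```python
# Hedged sketch of feature-point distance calibration: rescale the cloud so
# that the distance between two chosen feature points matches a known
# physical distance.
import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def calibrate_cloud(points, feat_a, feat_b, known_distance):
    """Scale every point so points[feat_a]..points[feat_b] spans known_distance."""
    measured = distance(points[feat_a], points[feat_b])
    if measured == 0:
        raise ValueError("feature points coincide; cannot calibrate")
    scale = known_distance / measured
    return [tuple(c * scale for c in p) for p in points]

raw = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
# Suppose the two feature points are known to be 64 mm apart in reality.
calibrated = calibrate_cloud(raw, feat_a=0, feat_b=1, known_distance=64.0)
```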
With the above laser-based 3D four-dimensional data acquisition method provided by the embodiments of the present invention, the laser beam emitted by the laser is projected onto the target object and reflected onto the camera; the image data of the target object captured by the camera is obtained and positioned; when the target object is determined to be in the predetermined position, move instructions are repeatedly sent to the laser scanning module to change its acquisition position, so that image data captured at multiple acquisition positions is obtained and fused into a model, yielding the 3D four-dimensional model data of the target object and completing its reconstruction. Since the laser scanning module may use one or more lasers and one or more cameras, the cameras may be zoom or fixed-focus cameras, and the laser illuminates the target object fully and without blind angles, the photographs taken by the camera are of higher quality, making the data easier to process and improving accuracy.
In an optional embodiment, when the target object is judged not to be in the predetermined position, the method may further include: determining, from the result of positioning the image data of the target object, the direction in which the carrying equipment bearing the target object needs to move; sending a control instruction to the carrying equipment instructing it to move in that direction; and returning to step S102. In this optional embodiment, when the target object is not in the predetermined position, the carrying equipment bearing it can be moved, the image of the target object at the new position is recaptured, and it is again judged whether the target object is in the predetermined position, until this is confirmed. In an embodiment of the invention, the predetermined position may be determined according to whether the target object can be acquired completely.
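This reposition-and-retry behaviour is a simple feedback loop. The sketch below illustrates it under stated assumptions: the positioning result is reduced to an (x, y) offset, and the offset-to-direction mapping and step/tolerance values are hypothetical, not taken from the patent.

```python
# Sketch of the repositioning loop: while the target is off the predetermined
# position, tell the carrying equipment which way to move, then recapture.

def move_direction(dx, dy):
    # Hypothetical mapping from positioning offset to a platform move;
    # the direction naming convention is an illustrative assumption.
    if abs(dx) >= abs(dy):
        return "left" if dx > 0 else "right"
    return "down" if dy > 0 else "up"

def reposition_until_centered(get_offset, move_platform,
                              tolerance=1.0, max_tries=10):
    for _ in range(max_tries):
        dx, dy = get_offset()                  # result of positioning the image
        if abs(dx) <= tolerance and abs(dy) <= tolerance:
            return True                        # predetermined position reached
        move_platform(move_direction(dx, dy))  # instruct the carrying equipment
    return False                               # give up after max_tries

# Simulated run: the offset shrinks after each move until within tolerance.
offsets = iter([(5.0, 0.0), (2.0, 0.0), (0.5, 0.0)])
moves = []
centered = reposition_until_centered(lambda: next(offsets), moves.append)
```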
In an optional embodiment, the target object may include the face and/or head of a human body.
When the target object is the face and/or head of a human body, judging in step S104 whether the target object is in the predetermined position may include: recognizing the image data of the target object and judging whether the contour of the face and/or head of the human body in the image data is complete; if it is complete, the target object is determined to be in the predetermined position.
In an optional embodiment, in order to guide the user in which direction to move, after step S102 the method further includes: sending the image data of the target object captured by the camera to a guidance display screen for display. In a specific application, the image data may be converted to a picture format suitable for display and then sent to the guidance display screen.
In an optional embodiment, the target object includes the hand of a human body. Further, the hand may include the fingers and/or the palm. What the camera captures may be the fingers, the palm, and the associated texture information of the hand, such as fingerprints and palm prints.
When the target object is the hand of a human body, judging in step S104 whether the target object is in the predetermined position may include: recognizing the current acquisition position from the image data of the target object and judging whether the current acquisition position is a hand; if so, the target object is determined to be in the predetermined position.
When the target object is the hand of a human body, sending a plurality of move instructions to the laser scanning module in sequence to change its acquisition position may include: sending a plurality of move instructions in sequence, instructing the laser scanning module to change its acquisition angle for each instruction and to scan the acquisition position. In this optional embodiment, when acquiring a user's fingerprints, the user does not need to rotate the hand, yet the texture features of the fingers can be collected from all angles, improving the user experience.
In another optional embodiment, after the scanning acquisition of one hand at each acquisition angle is completed, the laser scanning module may also be instructed to move to the acquisition position of the next hand and acquire it.
In an optional embodiment, step S110 may include: preprocessing the point cloud data, the preprocessing including at least one of noise reduction, smoothing, and visualization processing; extracting the feature point cloud information of the target object from the preprocessed point cloud data; and calibrating the distance between feature points according to the feature point cloud information, to obtain the key dimensions of the 3D model of the target object. Obtaining the key dimensions of the 3D model facilitates the 3D modelling of the target object in step S112.
In an optional embodiment, after step S112 the method may further include: rendering the 3D model data and sending the rendered 3D four-dimensional model data to a display screen for display. Rendering the 3D four-dimensional model data improves the effectiveness of its display.
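As an example of the noise-reduction preprocessing mentioned above, one common approach for point clouds is a statistical outlier filter. The sketch below drops points whose distance to the cloud centroid lies far from the mean; the filter choice and the threshold parameter `k` are illustrative assumptions, since the patent does not specify a noise-reduction algorithm.

```python
# Hedged sketch of statistical noise reduction on a point cloud: keep points
# whose distance to the centroid is within k standard deviations of the mean
# distance. A simple illustration, not the patent's prescribed method.
import math

def remove_outliers(points, k=1.5):
    n = len(points)
    centroid = tuple(sum(p[i] for p in points) / n for i in range(3))
    dists = [math.dist(p, centroid) for p in points]
    mean = sum(dists) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / n)
    return [p for p, d in zip(points, dists) if d <= mean + k * std]

# Four clustered points plus one far-away noise point.
cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (50, 50, 50)]
denoised = remove_outliers(cloud, k=1.5)
```

Production systems would typically use a neighborhood-based filter (e.g., mean distance to the nearest neighbors) rather than a global centroid, but the statistical idea is the same.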
In a specific application, different hardware systems may be provided to perform 3D acquisition of the target object according to its category. The hardware implementation of the 3D four-dimensional data acquisition method provided by the embodiments of the present invention is described below, taking the head and face and the hand as examples of the target object.
Fig. 2 shows an architecture diagram of a laser-based head and face 3D four-dimensional data acquisition system according to an embodiment of the invention, and Fig. 3 shows the module structure diagram of that system. As shown in Figs. 2 and 3, the system mainly includes: a pedestal 21, a seat 22, a head limiting device 23, a scale 24, a support structure 25, a bearing structure 26, a laser scanning module 27, a slide rail control module 28, a face automatic positioning control module 29, and a main control module 20. The pedestal 21 is connected to the seat 22; the head limiting device 23 is connected to the top of the seat 22; the scale 24 is connected to the side of the head limiting device 23; the support structure 25 is connected to the pedestal 21 and the automatic adjustment part; and the bearing structure 26, the laser scanning module 27, the face automatic positioning control module 29, the slide rail control module 28, and the main control module 20 are connected to the support structure 25.
In one embodiment of the invention, as shown in Fig. 3, the laser scanning module 27 may include: a laser 271, a camera 272, and an OIS module 273. The laser 271 includes a laser communication module 2711 and a laser point cloud transmission module 2712. In this embodiment, the laser communication module 2711 and the laser point cloud transmission module 2712 are each connected to the main control communication module of the main control module 20. The camera 272 includes a camera communication module 2721 and a camera data transmission module 2722; the camera communication module 2721 is connected to the main control communication module of the main control module 20, and the camera data transmission module 2722 is connected to the main control data transmission module of the main control module 20. The laser scanning module 27 is mounted on the slide rails, scans the head and face of a person, and outputs point cloud data.
In one embodiment of the invention, as shown in Fig. 3, the slide rail control module 28 may include: a transverse slide rail PLC module 281, a transverse servo motor 282, a transverse slide rail 283, a longitudinal slide rail PLC module 284, a longitudinal servo motor 285, and a longitudinal slide rail 286. In this embodiment, the transverse slide rail PLC module 281 is connected to the transverse servo motor 282, which is connected to the transverse slide rail 283; the longitudinal slide rail PLC module 284 is connected to the longitudinal servo motor 285, which is connected to the longitudinal slide rail 286. The transverse slide rail 283 is mounted on the inside of the bearing structure 26, and the longitudinal slide rail 286 is mounted on the outside of the bearing structure 26.
In one embodiment of the invention, as shown in Fig. 3, the face automatic positioning control module 29 includes: an automatic transverse PLC module 291, a transverse slide rail control module 292, an automatic longitudinal PLC module 293, a longitudinal slide rail control module 294, an automatic face positioning module 295, and a guidance display screen 296. In this embodiment, the automatic transverse PLC module 291 is connected to the transverse slide rail PLC module 281 of the slide rail control module 28; the transverse slide rail control module 292 is connected to the automatic transverse PLC module 291; the automatic longitudinal PLC module 293 is connected to the longitudinal slide rail PLC module 284 of the slide rail control module 28; the longitudinal slide rail control module 294 is connected to the automatic longitudinal PLC module 293; the transverse slide rail control module 292 and the longitudinal slide rail control module 294 are both connected to the automatic face positioning module 295; and the automatic face positioning module 295 is connected to the guidance display screen 296. The guidance display screen 296 is mounted at the middle position of the inside of the bearing structure 26, above the transverse slide rail 283 of the slide rail control module 28. The face automatic positioning control module 29 is used to adjust the height of the bearing structure 26 so that the guidance display screen 296 is aligned with the person's head and face.
In one embodiment of the invention, as shown in Fig. 3, the main control module 20 includes: a main control communication module 201, a main control data transmission module 202, an image data format conversion module 203, a main control face positioning module 204, a 3D model point cloud generation module 205, a feature dimension calibration module 206, a 3D model synthesis module 207, a 3D model display module 208, and a main control display screen 209. In this embodiment, the main control communication module 201 is connected to the camera communication module of the camera; the input of the main control data transmission module 202 is connected to the output of the camera data transmission module of the camera; the output of the main control data transmission module 202 is connected to the input of the image data format conversion module 203, whose output is connected to the input of the main control face positioning module 204; the output of the main control face positioning module 204 is connected to the automatic face positioning module 295 of the face automatic positioning control module 29; the output of the 3D model point cloud generation module 205 is connected to the input of the feature dimension calibration module 206, whose output is connected to the input of the 3D model synthesis module 207; the output of the 3D model synthesis module 207 is connected to the input of the 3D model display module 208, whose output is connected to the main control display screen 209.
In one embodiment of the invention, the horizontal slide rail 283 may be semicircular in shape, so that the laser scanning module 27 can complete a full scan of the human face.
Optionally, in one embodiment of the invention, the angular speed set by the horizontal slide rail control module 292 may be 10-30 rad/s.
Fig. 4 shows the workflow of the laser-based head and face image data collection system according to an embodiment of the invention. As shown in Fig. 4, it may mainly include the following steps S401-S407.
Step S401: turn on the power. The person to be scanned sits on the seat with the head against the head limiting device, and all module switches included in the laser scanning module, the slide rail control module, the face automatic positioning control module and the main control module are turned on;
Step S402: adjust the positioning of the person's head. The data captured by the camera is transferred through the camera data transmission module of the camera to the data transmission module of the main control module; the data transmission module of the main control module outputs the data to the image data format conversion module; the image decoded by the image data format conversion module is transmitted to the master control face positioning module, which determines the position of the person's head and face; the automatic face positioning module then controls the longitudinal PLC module to adjust the height of the bearing structure, so that the guide display screen is level with the person's head and face. When the person's head and face appear at the middle position of the guide display screen, the head and face are properly positioned.
In an optional embodiment of the invention, the head positioning adjustment may include: (1) longitudinal slide rail height adjustment, in which the master control face positioning module of the main control module, connected through a UART with the automatic face positioning module of the face automatic positioning control module, determines the position of the person's head and face from the image decoded from the camera data, and the slide-rail longitudinal PLC module adjusts the height of the bearing structure accordingly; (2) the guide display screen is brought level with the person's head and face; when the head and face appear at the middle position of the guide display screen, the adjustment ends.
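The two-part adjustment above amounts to a simple feedback loop: measure the face's vertical offset from the guide-screen centre and command the longitudinal PLC to raise or lower the bearing structure until the offset is within tolerance. A minimal sketch, with all names and thresholds as illustrative assumptions rather than values from the patent:

```python
def center_offset(face_y: float, screen_center_y: float) -> float:
    """Vertical offset of the detected face from the guide-screen centre."""
    return face_y - screen_center_y

def adjust_height(face_y: float, screen_center_y: float,
                  tolerance: float = 5.0) -> str:
    """One iteration of the height-adjustment loop: returns the command the
    longitudinal PLC module would receive ('up', 'down' or 'done')."""
    offset = center_offset(face_y, screen_center_y)
    if abs(offset) <= tolerance:
        return "done"          # face centred: positioning finished
    return "down" if offset > 0 else "up"
```

In use, the loop would repeat — capture, decode, adjust — until `adjust_height` returns `"done"`, matching the "adjustment ends" condition above.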
Step S403: camera parameter setting. After the person's head and face are positioned, the shooting focal length of the camera is adjusted so that the head and face image is displayed completely and clearly.
Step S404: automatic scanning and shooting. The laser scanning module is started; the laser emits a laser beam, and the camera captures images with an automatic time delay. The horizontal slide rail control module of the main control module controls the lateral PLC module of the horizontal slide rail; the lateral PLC module controls the lateral servo motor, which controls the rate of the horizontal slide rail, so that the laser scanning module moves along the semicircular arc track from the leftmost position of the slide rail to the rightmost position. The scan ends when the rightmost position is reached, and the camera data transmission module of the laser scanning module outputs the point cloud data.
In an optional embodiment of the invention, the automatic scanning and shooting may include: the automatic lateral PLC module of the face automatic positioning control module is connected with the slide-rail lateral PLC module through an SPI; the horizontal slide rail control module of the face automatic positioning control module controls the slide-rail lateral PLC module of the slide rail control module; the slide-rail lateral PLC module controls the lateral servo motor, which controls the rate of the horizontal slide rail, so that the laser scanning module moves along the semicircular arc track from the leftmost position of the slide rail to the rightmost position, ending the scan when the rightmost position is reached. The laser emits a laser beam, the camera captures images with an automatic time delay, and the camera data transmission module of the laser scanning module outputs the point cloud data.
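The sweep described in this step — moving the laser scanning module along the semicircular rail from the leftmost to the rightmost position, capturing at each stop — can be sketched as below. The radius, step count and capture callback are illustrative assumptions:

```python
import math

def scan_positions(radius_mm: float, steps: int):
    """Scanner positions along a semicircular rail, sweeping from the
    leftmost position (180 degrees) to the rightmost (0 degrees)."""
    positions = []
    for i in range(steps):
        theta = math.pi * (1 - i / (steps - 1))   # pi -> 0
        positions.append((radius_mm * math.cos(theta),
                          radius_mm * math.sin(theta)))
    return positions

def sweep(radius_mm: float, steps: int, capture):
    """Visit every rail position and invoke the capture callback,
    collecting one point-cloud fragment per stop."""
    return [capture(p) for p in scan_positions(radius_mm, steps)]
```

The semicircular geometry is what gives the "no blind spot" coverage claimed later: every stop faces the head from a different angle on the same arc.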
Step S405: characteristic size calibration. The feature point cloud information in the point cloud data is extracted, and distance calibration is performed on the feature points in the feature point cloud information.
In an optional embodiment of the invention, the characteristic size calibration may include: 3D point cloud preprocessing, in which redundant points in the point cloud information are removed, and noise and extraneous points are filtered out by an SVM/k-means algorithm, leaving the valid point cloud of the main subject; and feature point distance calibration, in which the calibrated distance between feature points in the feature point cloud information forms the key dimensions of the object's 3D model, from which the distances between all spatial points of the object can be derived.
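As an illustrative sketch of this optional embodiment: a crude centroid-distance filter stands in for the SVM/k-means noise removal, and one known real-world feature distance fixes the calibration scale. This is an assumption-laden sketch, not the patented algorithm:

```python
import math

def filter_outliers(points, k: float = 1.5):
    """Crude stand-in for the k-means-style noise filter: drop points whose
    distance from the centroid exceeds k times the mean distance."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dists = [math.dist(p, (cx, cy, cz)) for p in points]
    mean_d = sum(dists) / n
    return [p for p, d in zip(points, dists) if d <= k * mean_d]

def calibration_scale(feat_a, feat_b, known_mm: float) -> float:
    """Scale factor mapping point-cloud units to millimetres, derived from
    the known real-world distance between two calibration feature points."""
    return known_mm / math.dist(feat_a, feat_b)
```

Once the scale factor is known, every other inter-point distance in the cloud can be multiplied by it to yield a real-world size, which is the sense in which "all sizes of the object" follow from one calibration distance.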
Step S406: image data synthesis. The image data synthesis unit processes the point cloud data output by the laser scanning module and generates the 3D model data.
Step S407: 3D model display. The 3D model is output by the image data synthesis unit, and the 3D model data is output to a display for presentation.
It can thus be seen that the working principle of the laser-based facial and head image data collection system provided by the invention is as follows. The power is turned on: the person to be scanned sits on the seat with the head against the head limiting device, and all module switches included in the laser scanning module, the slide rail control module, the face automatic positioning control module and the main control module are turned on. The data captured by the camera is transferred through the camera data transmission module of the camera to the data transmission module of the main control module, which outputs the data to the image data format conversion module; the image decoded by the image data format conversion module is transmitted to the master control face positioning module to determine the position of the person's head and face, and the automatic face positioning module then controls the longitudinal PLC module to adjust the height of the bearing structure so that the guide display screen is level with the head and face; when the head and face appear at the middle position of the guide display screen, the positioning is complete. After the head and face are positioned, the shooting focal length of the camera is adjusted so that the head and face image is displayed completely and clearly. The laser scanning module is started; the laser emits a laser beam, and the camera captures images with an automatic time delay. The horizontal slide rail control module of the main control module controls the lateral PLC module of the horizontal slide rail, which controls the lateral servo motor; the lateral servo motor controls the rate of the horizontal slide rail, so that the laser scanning module moves along the semicircular arc track from the leftmost position of the slide rail to the rightmost position, ending the scan on arrival, whereupon the camera data transmission module of the laser scanning module outputs the point cloud data. The feature point cloud information in the point cloud data is extracted, and distance calibration is performed on the feature points in the feature point cloud information. The image data synthesis unit processes the point cloud data output by the laser scanning module and generates the 3D model data; the image data synthesis unit outputs the 3D model, and the 3D model data is output to a display for presentation.
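The flow above ends with 3D model data being output for display; the embodiments elsewhere note that the point cloud can be converted into multiple 3D model formats, including obj. As an illustration only, a minimal sketch of serialising a point cloud as Wavefront OBJ vertex lines (the function name is hypothetical):

```python
def to_obj(points) -> str:
    """Serialise a point cloud as Wavefront OBJ vertex lines ('v x y z')."""
    lines = ["# point cloud export"]
    for x, y, z in points:
        lines.append(f"v {x:.4f} {y:.4f} {z:.4f}")
    return "\n".join(lines)
```

A real exporter would also emit faces, normals and an accompanying .mtl material file; vertex lines alone already load in most OBJ viewers as a point cloud.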
Compared with the conventional technology, in the acquisition system of the embodiment of the invention the horizontal slide rail is semicircular, which makes it easy to adjust the angular speed of the slide rail and gives smooth sliding and smooth scanning. The angular speed of the slide rail is controlled at 10-30 rad/s; by adjusting the angular speed, complete laser coverage of the head without blind spots can be achieved, so the captured images are of higher quality, the data is easier to obtain, and accuracy is improved. The point cloud data can be converted into 3D model data in multiple formats; the supported output formats include mtl, obj and vtk. The laser scanning module may use one or more lasers and one or more cameras; only simple operation is required, without the professional knowledge demanded by conventional measurement methods, realizing head measurement while saving time and money, reducing error, and remaining easy to operate.
Fig. 5 shows a schematic architecture diagram of a laser-based hand 3D four-dimensional data acquisition system provided according to one embodiment of the invention, and Fig. 6 shows a schematic module-structure diagram of the same system. As shown in Figs. 5 and 6, the system mainly includes: a cabinet 52, a hand placement position model 53, a hand model support structure 55, a slide rail 56, a laser scanning module 51, a slide rail control module 57, a hand automatic positioning control module 58 and a central control module 54.
Wherein:
The laser scanning module 51 is mounted on the inside of the slide rail 56; the slide rail 56 is mounted in the cabinet 52; the hand model support structure 55 is mounted above the cabinet 52; the hand placement position model 53 is mounted at the middle position above the hand model support structure 55; the central control module 54 is mounted on the upper surface of the cabinet 52 and is connected with the laser scanning module 51 and the slide rail 56 respectively;
The cabinet 52 is used for fixing the hand model support structure 55, the slide rail 56 and the central control module 54;
The hand placement position model 53 is mounted at the middle position above the hand model support structure 55 and indicates where the person's hand is to be placed;
The hand model support structure 55 is mounted above the cabinet 52 and is used for carrying the hand placement position model 53;
The slide rail 56 is mounted in the cabinet 52 and is used for moving the laser scanning module 51 to a specified acquisition position;
The laser scanning module 51, as shown in Fig. 6, includes: a laser 511, a camera 512 and an OIS module 513. The laser 511 includes a laser communication module 5111 and a laser point cloud transmission module 5112; the laser communication module 5111 is connected with the main control communication module 541 of the central control module 54, and the laser point cloud transmission module 5112 is likewise connected with the main control communication module 541 of the central control module 54. The camera 512 includes a camera communication module 5121 and a camera data transmission module 5122; the camera communication module 5121 is connected with the main control communication module 541 of the central control module 54, and the camera data transmission module 5122 is connected with the main control data transmission module 542 of the central control module 54. The laser scanning module 51 is mounted on the slide rail control module 57 and is used for scanning the person's hand and outputting the hand point cloud data.
In an optional implementation of the embodiment of the invention, the lens focal length range of the camera 512 of the laser scanning module 51 is 4.5-108 mm, realizing 24x optical zoom.
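The 24x figure follows directly from the stated focal-length range, as the simple check below shows:

```python
def zoom_ratio(min_focal_mm: float, max_focal_mm: float) -> float:
    """Optical zoom ratio implied by a lens's focal-length range."""
    return max_focal_mm / min_focal_mm

# For the 4.5-108 mm range quoted above: 108 / 4.5 = 24x zoom.
```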
In an optional implementation of the embodiment of the invention, the transmission between the camera communication module 5121 and the main control communication module 541 may use high-frequency signals, in which case electromagnetic shielding can be added to improve immunity to interference.
The slide rail control module 57 includes a motor drive module 571 and a servo motor 572; the motor drive module 571 is connected with the servo motor 572; the slide rail 56 is mounted in the cabinet 52; the slide rail control module 57 is connected with the slide rail 56. The motor drive module 571 is used for driving the servo motor 572 to rotate and for controlling the rotation speed and direction of the servo motor 572;
The hand automatic positioning control module 58 includes an automatic PLC module 581, an automatic control module 582 and an automatic hand positioning module 583. The output of the automatic PLC module 581 is connected through a UART interface with the input of the motor drive module 571 in the slide rail control module 57; the automatic control module 582 is connected through an SPI interface with the automatic PLC module 581; and the automatic hand positioning module 583 is connected through an I2C interface with the automatic control module 582.
In one embodiment of the invention, the central control module 54 may include: a main control communication module 541, a main control data transmission module 542, a hand point cloud preprocessing module 543, a master control hand positioning module 544, a hand 3D model point cloud generation module 545, a hand feature calibration module 546, a hand 3D model synthesis module 547, a hand 3D model display module 548 and a master control display screen 549. The main control communication module 541 is connected with the camera communication module 5121 of the camera 512; the input of the main control data transmission module 542 is connected through a USB interface with the output of the camera data transmission module 5122 of the camera 512; the output of the main control data transmission module 542 is connected with the input of the hand point cloud preprocessing module 543; the output of the hand point cloud preprocessing module 543 is connected with the input of the master control hand positioning module 544; the output of the master control hand positioning module 544 is connected with the automatic hand positioning module of the hand automatic positioning control module 58; the output of the hand 3D model point cloud generation module 545 is connected with the input of the hand feature calibration module 546; the output of the hand feature calibration module 546 is connected with the input of the hand 3D model synthesis module 547; the output of the hand 3D model synthesis module 547 is connected with the input of the hand 3D model display module 548; and the output of the hand 3D model display module 548 is connected with the master control display screen 549.
In an optional embodiment of the invention, a gear may be integrated in the slide rail control module 57, and transmission is achieved through the gear.
In an optional implementation of the embodiment of the invention, the overall structure of the slide rail 56 may take the form of a rack.
In an optional embodiment of the invention, the motor drive module 571 is used for receiving the pulse signals output by the automatic PLC module 581; each time the motor drive module 571 receives a pulse signal, it drives the servo motor 572 of the slide rail control module 57 to rotate by a fixed angle in the set direction. Optionally, a power-electronics module may also be arranged in the slide rail control module 57 to convert the pulse signal received by the motor drive module 571 into a power drive signal for the servo motor 572; in a particular application, the angular displacement of the servo motor 572 can be controlled by controlling the number of pulse signals output. Optionally, the slide rail control module 57 may also include a servo motor drive subdivision module 573 for performing a subdivision (microstepping) function; subdivision makes the phase current of the servo motor windings change gradually, which directly reduces vibration and noise during motor operation and improves the stability of motor operation.
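The pulse-per-fixed-angle drive scheme described above, including the subdivision function of module 573, can be sketched as follows; the steps-per-revolution and microstep counts are illustrative assumptions, not values from the patent:

```python
def shaft_angle_deg(pulses: int, steps_per_rev: int = 200,
                    microsteps: int = 16) -> float:
    """Angle turned by the motor shaft after `pulses` drive pulses.
    With subdivision (microstepping), each full step is split into
    `microsteps` smaller increments, smoothing the winding current."""
    return 360.0 * pulses / (steps_per_rev * microsteps)
```

With these example values, 200 x 16 = 3200 pulses correspond to one full revolution, which is why counting output pulses controls the angular displacement exactly.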
In one embodiment of the invention, the laser scanning module 51 may be mounted on the slide rail control module 57. As shown in Fig. 7, the servo motor 572 and the gear 573 are integrated in the slide rail control module 57, and the slide rail 56 takes the form of a rack. The central control module 54 sends an instruction to drive the servo motor 572 to rotate; the servo motor 572 turns the gear, so that the entire laser scanning module 51 moves along the rack of the slide rail 56; the specific speed and stopping positions of the movement are controlled by the number of rotation steps of the servo motor 572.
In one embodiment of the invention, the rotation of the servo motor 572 moves the laser scanning module 51 on the slide rail 56 to each specified 3D hand collection point, so that the 3D fingerprint information of the target user can be acquired in a single pass without the person having to turn the hand over, and a 3D model of the hand surface can be constructed rapidly.
Fig. 8 shows the workflow of the laser-based hand 3D four-dimensional data acquisition system according to an embodiment of the invention. As shown in Fig. 8, it may mainly include the following steps S801-S810.
Step S801: turn on the power. The palm of the person to be scanned is placed on the hand placement position model 53, and all module power switches included in the laser scanning module 51, the slide rail control module 57, the hand automatic positioning control module 58 and the central control module 54 are turned on;
Step S802: module initialization. The interrupts, timers, PWM outputs, UART interface, SPI interface, I2C interface and USB interface of the central control module 54 are initialized; the default parameters of the laser 511 are set, the default parameters of the camera 512 are set, the default parameters of the hand automatic positioning control module 58 are set, and the default rotation speed of the servo motor is set;
Step S803: automatic hand positioning control. The laser 511 emits a laser beam, which is projected onto the hand and reflected to the camera 512. The central control module 54 receives the data transmitted by the camera 512 and transfers it to the hand point cloud preprocessing module 543; after preprocessing, the output goes to the master control hand positioning module 544, which judges whether positioning is complete and sends a corresponding instruction to the automatic hand positioning module 583. The automatic hand positioning module 583 locates the hand position and passes the instruction to the automatic PLC module 581, which sends an instruction to the motor drive module 571 of the slide rail control module 57; the motor drive module 571 controls the servo motor 572 to make the corresponding adjustment, sliding the gear of the slide rail control module 57 along the rack of the slide rail 56 and thereby adjusting the position of the mounting base of the laser scanning module 51. The camera 512 captures hand image data and transfers it to the central control module 54; the master control hand positioning module 544 of the central control module 54 identifies the acquisition position, and when it is a hand, image data acquisition of one hand begins;
Step S804: single-hand image data acquisition. When the master control hand positioning module 544 determines that the acquisition position is a hand, angular position information of the single hand begins to be acquired; the motor drive subdivision module 573 controls the servo motor to move the laser 511 and camera 512 mounted on its mounting base to the position of the next texture collection angle, and after the same texture collection step is completed, the relief (concave-convex) fingerprint image of another angle of the single hand is obtained. Repeating in this way, after hand texture collection from multiple angles, the relief fingerprint images of all angles of the single hand are collected;
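The repeat-until-all-angles loop of step S804 can be sketched as below; `capture` stands in for the real motor move plus delayed exposure, and all names are hypothetical:

```python
def acquire_hand(angles, capture):
    """Collect one relief (concave-convex) fingerprint image per scan
    angle; the result covers all angles of one hand."""
    images = {}
    for angle in angles:
        # Move the laser/camera mounting base to `angle`, then shoot.
        images[angle] = capture(angle)
    return images
```

Step S805 then amounts to running this same loop once per hand at the two fixed hand positions.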
Step S805: two-hand image data acquisition. According to the fixed positions of the two hands, step S804 is repeated, and the laser scanning module 51 scans and acquires all angles of each hand respectively;
Step S806: hand point cloud preprocessing. The hand point cloud preprocessing module 543 performs noise reduction, smoothing and visualization processing on the received scan point cloud data;
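A minimal stand-in for the smoothing part of this preprocessing is a sliding-window average along the scan order; the real module presumably does considerably more (noise reduction, visualization), so this is illustrative only:

```python
def smooth(points, window: int = 3):
    """Sliding-window average over 3D points in scan order -- a minimal
    stand-in for the smoothing pass of the preprocessing module."""
    half = window // 2
    out = []
    for i in range(len(points)):
        lo, hi = max(0, i - half), min(len(points), i + half + 1)
        n = hi - lo
        out.append(tuple(sum(p[c] for p in points[lo:hi]) / n
                         for c in range(3)))
    return out
```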
Step S807: point cloud data generation. The hand point cloud preprocessing module 543 outputs the scan point cloud data to the hand 3D model point cloud generation module 545, which generates the hand 3D model point cloud data;
Step S808: hand feature calibration. The hand 3D model point cloud generation module 545 outputs the 3D point cloud data to the hand feature calibration module 546, which extracts the feature point cloud information in the point cloud data, performs image analysis on the feature points in the feature point cloud information and carries out distance calibration, obtaining the relief fingerprint image of one angle of the hand;
Step S809: hand image data synthesis. The hand 3D model synthesis module 547 processes the point cloud data output by the laser scanning module 51 and generates the hand 3D model data; after image merging, a 3D fingerprint model of the hand surface is constructed rapidly, completing the hand 3D texture collection;
Step S810: hand 3D model display. The hand 3D model synthesis module 547 outputs the hand 3D model to the hand 3D model display module 548, which causes the hand 3D model data to be shown on the master control display screen 549.
The invention addresses the problems of the conventional contact acquisition method, which obtains 2D fingerprint data: the applied pressure and the dryness or dampness of the hand can cause distortion or loss of clarity, a previous acquisition can leave ghost images, the same hand sometimes has to be captured repeatedly, and the demands on the user's operation are high. By adopting a contactless acquisition method in which the laser is used for photography, the image data of both hands is acquired at the same time, noise reduction and enhancement are applied to the collected image data, and a hand 3D model is built; the result is insensitive to the dryness or dampness of the hand, avoids the deformation and distortion produced by pressing, eliminates residual ghosting, and improves the speed and precision of acquisition.
It should be noted that, in practical applications, all the optional embodiments above may be combined in any manner to form optional embodiments of the invention, which will not be described again here.
Based on the laser-based 3D four-dimensional data acquisition methods provided by the above embodiments, and based on the same inventive concept, an embodiment of the invention also provides a laser-based 3D four-dimensional data acquisition device.
Fig. 9 shows a schematic structural diagram of the laser-based 3D four-dimensional data acquisition device according to an embodiment of the invention. As shown in Fig. 9, the device may include: an image data acquisition module 910, a positioning module 920, a movement control module 930, a point cloud generation module 940, a distance calibration module 950 and a 3D synthesis module 960.
In a particular application, the laser-based 3D four-dimensional data acquisition device provided by the embodiment of the invention can serve as part of the main control module 20 in Figs. 2-4 and of the central control module 54 in Figs. 5-8.
The function of each component of the laser-based 3D four-dimensional data acquisition device of the embodiment of the invention, and the connections between the components, are now introduced:
The image data acquisition module 910 is used for obtaining the image data of the target object captured by the camera of the laser scanning module at the current acquisition position, the laser beam emitted by the laser of the laser scanning module being projected onto the target object and reflected onto the camera of the laser scanning module;
The positioning module 920 is used for positioning the image data of the target object and judging whether the target object is at the predetermined position;
The movement control module 930 is used for, in the case where the target object is judged to be at the predetermined position, sending multiple movement instructions to the laser scanning module in sequence, instructing the laser scanning module to change acquisition position;
The point cloud generation module 940 is used for obtaining the image data of the target object captured by the camera at each acquisition position moved to, and generating the point cloud data of the target object from the image data of the target object captured at each acquisition position;
The distance calibration module 950 is used for extracting the feature point cloud information of the target object from the point cloud data of the target object, and carrying out feature point distance calibration according to the extracted feature point cloud information;
The 3D synthesis module 960 is used for synthesizing the point cloud data based on the calibrated distance obtained from the feature point distance calibration, obtaining the 3D four-dimensional model data of the target object.
In an optional embodiment of the invention, the movement control module 930 is further used for, in the case where the target object is judged not to be at the predetermined position, determining the direction in which the bearing device carrying the target object needs to move according to the result of positioning the image data of the target object, sending a control instruction to the bearing device instructing it to move in that direction, and then triggering the image data acquisition module 910.
In an optional embodiment of the invention, the target object includes the face and/or head of a human body. The positioning module 920 then judges whether the target object is at the predetermined position in the following manner: the image data of the target object is recognized, and it is judged whether the contour of the face and/or head of the human body in the image data of the target object is complete; if it is complete, the target object is determined to be at the predetermined position.
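One simple way to realise the contour-completeness test described above is to check that the detected face/head bounding box is not clipped by any image edge — a hedged sketch under that assumption, not necessarily the patent's actual criterion:

```python
def contour_complete(bbox, frame_w: int, frame_h: int,
                     margin: int = 2) -> bool:
    """True if the detected face/head bounding box (x, y, w, h) lies fully
    inside the frame, i.e. the contour is not cut off at any image edge."""
    x, y, w, h = bbox
    return (x >= margin and y >= margin and
            x + w <= frame_w - margin and
            y + h <= frame_h - margin)
```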
In an optional embodiment of the invention, the device further includes a guide display module for sending the 3D four-dimensional data of the target object captured by the camera to a guide display screen for display.
In an optional embodiment of the invention, the target object includes the hand of a human body, the hand of the human body including the fingers and/or palm. The positioning module 920 judges whether the target object is at the predetermined position in the following manner: the current acquisition position is identified from the image data of the target object, and it is judged whether the current acquisition position is a hand; if so, the target object is determined to be at the predetermined position.
In an optional embodiment of the invention, the movement control module 930 sends multiple movement instructions to the laser scanning module in sequence in the following manner: multiple movement instructions are sent to the laser scanning module in sequence, instructing the laser scanning module to change to multiple acquisition angles and scan the acquisition position from each of them.
In an optional embodiment of the invention, the distance calibration module 950 carries out distance calibration in the following manner:
the point cloud data scanned at each acquisition position is preprocessed separately, the preprocessing including at least one of: noise reduction, smoothing and visualization processing;
the feature point cloud information of the target object is extracted from each preprocessed point cloud separately;
according to the feature point cloud information, the distance between feature points is calibrated, obtaining the key dimensions of the 3D four-dimensional model of the target object.
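The calibration described above — one known feature distance fixes the scale, from which all key dimensions of the model follow — can be sketched as below; the function name and the example feature set are hypothetical:

```python
import math

def key_dimensions(feature_points, known_pair, known_mm):
    """Scale every pairwise feature distance to millimetres using one
    known calibration distance, yielding the model's key dimensions."""
    a, b = known_pair
    scale = known_mm / math.dist(feature_points[a], feature_points[b])
    dims = {}
    names = list(feature_points)
    for i, p in enumerate(names):
        for q in names[i + 1:]:
            dims[(p, q)] = scale * math.dist(feature_points[p],
                                             feature_points[q])
    return dims
```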
By combining any one or more of the optional embodiments above, the embodiments of the invention can achieve the following advantageous effects:
An embodiment of the invention provides a laser-based 3D four-dimensional data acquisition method and device. In the method, the image data of the target object captured by the camera is obtained, the laser beam emitted by the laser being projected onto the target object and reflected to the camera; the image data is positioned; in the case where the target object is determined to be at the predetermined position, multiple movement instructions are sent to the laser scanning module, instructing the laser scanning module to change acquisition position; the image data captured by the camera at the multiple acquisition positions is obtained and integrated into a model, so that the 3D model data of the target object is obtained from the image data captured at the multiple acquisition positions and the reconstruction of the target object is completed. Since the laser scanning module may use one or more lasers and one or more cameras, and the laser illuminates the target object fully and without blind spots, the camera captures images of higher quality, accuracy is improved, and the data is easier to obtain.
Numerous specific details are set forth in the description provided here. It is to be appreciated, however, that embodiments of the invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the invention above, various features of the invention are sometimes grouped together in a single embodiment, figure or description thereof. However, this method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment can be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components in an embodiment can be combined into one module or unit or component, and in addition can be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features that are included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the laser-based 3D 4D data acquisition device according to embodiments of the present invention. The present invention may also be implemented as device or apparatus programs (for example, computer programs and computer program products) for performing part or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media, or may take the form of one or more signals. Such signals may be downloaded from Internet websites, provided on carrier signals, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
Thus far, those skilled in the art will appreciate that, although a number of exemplary embodiments of the present invention have been shown and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be determined or derived directly from the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.
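The positioning feedback described in the embodiments — capture the target object, judge whether it is at the predetermined position, and otherwise move the carrying device and retry — can be sketched as follows. This is a minimal illustration with hypothetical names and a toy 2-D carrier converging on the origin, not the patent's implementation:

```python
# Hypothetical sketch of the positioning feedback loop: if the target is not
# at the predetermined position (here, the origin within a tolerance), the
# carrying device is told which correction to apply and acquisition retries.
def step_toward(pos, tolerance=5, step=10):
    """One control iteration: return None if pos is within tolerance of the
    predetermined position (origin), else the per-axis correction to apply."""
    dx = -pos[0] if abs(pos[0]) > tolerance else 0
    dy = -pos[1] if abs(pos[1]) > tolerance else 0
    if dx == 0 and dy == 0:
        return None
    clamp = lambda v: max(-step, min(step, v))  # carrier moves in steps
    return (clamp(dx), clamp(dy))

def acquire(pos, max_tries=20):
    """Move the carrier until the target reaches the predetermined position."""
    pos = list(pos)
    for _ in range(max_tries):
        corr = step_toward(pos)
        if corr is None:
            return True, tuple(pos)   # at predetermined position: start scanning
        pos[0] += corr[0]             # send control instruction to the
        pos[1] += corr[1]             # carrying device, then retry capture
    return False, tuple(pos)

ok, final = acquire((40, -12))
assert ok and abs(final[0]) <= 5 and abs(final[1]) <= 5
```

The per-axis tolerance check avoids oscillating on an axis that is already in position — a design point any real carrier controller would also need.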
Claims (10)
1. A laser-based 3D 4D data acquisition method, comprising:
Step 1: acquiring image data of a target object captured by a camera of a laser scanning module at a current acquisition position, wherein a laser beam emitted by a laser of the laser scanning module is projected onto the target object and reflected onto the camera;
Step 2: locating the target object in the image data, and judging whether the target object is at a predetermined position;
Step 3: in a case where the target object is judged to be at the predetermined position, sending a plurality of movement instructions to the laser scanning module in sequence, instructing the laser scanning module to change acquisition positions;
Step 4: acquiring the image data of the target object captured by the camera at each changed acquisition position, and generating point cloud data of the target object according to the image data captured at each acquisition position;
Step 5: extracting feature point cloud information of the target object from the point cloud data of the target object, and performing feature point distance calibration according to the extracted feature point cloud information;
Step 6: synthesizing the point cloud data based on the calibration distance obtained by the feature point distance calibration, to obtain 3D 4D model data of the target object.
2. The method according to claim 1, wherein, in a case where the target object is judged not to be at the predetermined position, the method further comprises:
determining, according to a result of locating the target object in the image data, a direction in which a carrying device that carries the target object needs to move;
sending a control instruction to the carrying device, instructing the carrying device to move in the determined direction, and returning to Step 1.
3. The method according to claim 1 or 2, wherein the target object comprises a face and/or a head of a human body.
4. The method according to claim 3, wherein judging in Step 2 whether the target object is at the predetermined position comprises: recognizing the image data of the target object, and judging whether the contour of the face and/or head of the human body in the image data of the target object is complete; if it is complete, determining that the target object is at the predetermined position.
5. The method according to claim 3, wherein, after Step 1, the method further comprises: sending the image data of the target object captured by the camera to a guidance display screen for display.
6. The method according to claim 1 or 2, wherein the target object comprises a hand of a human body.
7. The method according to claim 6, wherein the hand of the human body comprises fingers and/or a palm.
8. The method according to claim 7, wherein judging in Step 2 whether the target object is at the predetermined position comprises: recognizing the current acquisition position according to the image data of the target object, and judging whether the current acquisition position is the hand; if so, determining that the target object is at the predetermined position.
9. The method according to claim 8, wherein sending the plurality of movement instructions to the laser scanning module in sequence to instruct the laser scanning module to change acquisition positions comprises: sending a plurality of movement instructions to the laser scanning module in sequence, instructing the laser scanning module to scan the acquisition position from a plurality of acquisition angles; and Step 5 comprises:
preprocessing the point cloud data scanned at each acquisition position, wherein the preprocessing comprises at least one of: noise reduction, smoothing, and visualization;
extracting the feature point cloud information of the target object from each preprocessed point cloud data; and
calibrating distances between feature points according to the feature point cloud information, to obtain base dimensions of the 3D 4D model of the target object.
10. A laser-based 3D 4D data acquisition device, comprising:
an image data acquisition module, configured to acquire image data of a target object captured by a camera of a laser scanning module at a current acquisition position, wherein a laser beam emitted by a laser of the laser scanning module is projected onto the target object and reflected onto the camera;
a locating module, configured to locate the target object in the image data and judge whether the target object is at a predetermined position;
a movement control module, configured to, in a case where the target object is judged to be at the predetermined position, send a plurality of movement instructions to the laser scanning module in sequence, instructing the laser scanning module to change acquisition positions;
a point cloud generation module, configured to acquire the image data of the target object captured by the camera at each changed acquisition position, and generate point cloud data of the target object according to the image data captured at each acquisition position;
a distance calibration module, configured to extract feature point cloud information of the target object from the point cloud data of the target object, and perform feature point distance calibration according to the extracted feature point cloud information; and
a 3D synthesis module 960, configured to synthesize the point cloud data based on the calibration distance obtained by the feature point distance calibration, to obtain 3D 4D model data of the target object;
wherein the movement control module is further configured to, in a case where the target object is judged not to be at the predetermined position, determine, according to a result of locating the target object in the image data, a direction in which a carrying device that carries the target object needs to move, send a control instruction to the carrying device instructing it to move in that direction, and then trigger the image data acquisition module; the target object comprises a face and/or a head of a human body;
the locating module judges whether the target object is at the predetermined position by: recognizing the image data of the target object, and judging whether the contour of the face and/or head of the human body in the image data of the target object is complete; if it is complete, determining that the target object is at the predetermined position; the device further comprises a guidance display module, configured to send the image data of the target object captured by the camera to a guidance display screen for display; the target object comprises a hand of a human body, and the hand of the human body comprises fingers and/or a palm;
the locating module judges whether the target object is at the predetermined position by: recognizing the current acquisition position according to the image data of the target object, and judging whether the current acquisition position is the hand; if so, determining that the target object is at the predetermined position; the movement control module sends the plurality of movement instructions to the laser scanning module in sequence by: sending a plurality of movement instructions to the laser scanning module in sequence, instructing the laser scanning module to scan the acquisition position from a plurality of acquisition angles; and the distance calibration module performs the distance calibration by:
preprocessing the point cloud data scanned at each acquisition position, wherein the preprocessing comprises at least one of: noise reduction, smoothing, and visualization;
extracting the feature point cloud information of the target object from each preprocessed point cloud data; and calibrating distances between feature points according to the feature point cloud information, to obtain key dimensions of the 3D 4D model of the target object.
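The feature point distance calibration recited in Step 5 and claim 9 can be illustrated with a minimal sketch: a known real-world distance between two feature points (for example, a measured interpupillary distance) fixes the scale of the otherwise unitless reconstructed point cloud. The uniform-scale assumption and all names here are illustrative, not taken from the patent:

```python
# Hedged sketch of feature-point distance calibration: rescale a point
# cloud so that the distance between two known feature points matches a
# known physical measurement. Assumes a single uniform scale factor.
import math

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return math.dist(a, b)

def calibrate_scale(cloud, feat_i, feat_j, known_mm):
    """Return the cloud rescaled so feature points i and j span known_mm."""
    measured = dist(cloud[feat_i], cloud[feat_j])
    s = known_mm / measured          # calibration scale factor
    return [(x * s, y * s, z * s) for (x, y, z) in cloud]

# Toy point cloud with two feature points 2.0 (unitless) apart.
cloud = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
calibrated = calibrate_scale(cloud, 0, 1, known_mm=64.0)  # e.g. 64 mm IPD
assert math.isclose(dist(calibrated[0], calibrated[1]), 64.0)
```

Once the scale is fixed, every distance in the calibrated cloud is in real units, which is what makes the synthesized model's "base dimensions" meaningful.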
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810152236.5A CN108492357A (en) | 2018-02-14 | 2018-02-14 | A kind of 3D 4 D datas acquisition method and device based on laser |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108492357A true CN108492357A (en) | 2018-09-04 |
Family
ID=63340779
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810152236.5A Withdrawn CN108492357A (en) | 2018-02-14 | 2018-02-14 | A kind of 3D 4 D datas acquisition method and device based on laser |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108492357A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104021588A (en) * | 2014-06-18 | 2014-09-03 | 公安部第三研究所 | System and method for recovering three-dimensional true vehicle model in real time |
CN105427385A (en) * | 2015-12-07 | 2016-03-23 | 华中科技大学 | High-fidelity face three-dimensional reconstruction method based on multilevel deformation model |
US20170191826A1 (en) * | 2016-01-05 | 2017-07-06 | Texas Instruments Incorporated | Ground Plane Estimation in a Computer Vision System |
CN107330976A (en) * | 2017-06-01 | 2017-11-07 | 北京大学第三医院 | A kind of human body head three-dimensional modeling apparatus and application method |
CN107392845A (en) * | 2017-07-31 | 2017-11-24 | 芜湖微云机器人有限公司 | A kind of method of 3D point cloud imaging and positioning |
US10013803B2 (en) * | 2014-09-30 | 2018-07-03 | Fitfully Ltd. | System and method of 3D modeling and virtual fitting of 3D objects |
- 2018-02-14: Application CN201810152236.5A filed, published as CN108492357A (en); status: withdrawn
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110827196A (en) * | 2018-09-05 | 2020-02-21 | 天目爱视(北京)科技有限公司 | Device capable of simultaneously acquiring 3D information of multiple regions of target object |
CN109443199A (en) * | 2018-10-18 | 2019-03-08 | 天目爱视(北京)科技有限公司 | 3D information measuring system based on intelligent light source |
CN109443199B (en) * | 2018-10-18 | 2019-10-22 | 天目爱视(北京)科技有限公司 | 3D information measuring system based on intelligent light source |
CN110567371A (en) * | 2018-10-18 | 2019-12-13 | 天目爱视(北京)科技有限公司 | Illumination control system for 3D information acquisition |
CN111160136A (en) * | 2019-12-12 | 2020-05-15 | 天目爱视(北京)科技有限公司 | Standardized 3D information acquisition and measurement method and system |
CN111160136B (en) * | 2019-12-12 | 2021-03-12 | 天目爱视(北京)科技有限公司 | Standardized 3D information acquisition and measurement method and system |
CN113065502A (en) * | 2019-12-12 | 2021-07-02 | 天目爱视(北京)科技有限公司 | 3D information acquisition system based on standardized setting |
CN112784802A (en) * | 2021-02-03 | 2021-05-11 | 成都多极子科技有限公司 | Palm print recognition system and method based on laser scanning three-dimensional point cloud |
CN112784802B (en) * | 2021-02-03 | 2024-04-09 | 成都多极子科技有限公司 | Palmprint recognition system and palmprint recognition method based on laser scanning three-dimensional point cloud |
CN115097976A (en) * | 2022-07-13 | 2022-09-23 | 北京有竹居网络技术有限公司 | Method, apparatus, device and storage medium for image processing |
CN115097976B (en) * | 2022-07-13 | 2024-03-29 | 北京有竹居网络技术有限公司 | Method, apparatus, device and storage medium for image processing |
CN116577350A (en) * | 2023-07-13 | 2023-08-11 | 北京航空航天大学杭州创新研究院 | Material surface hair bulb point cloud acquisition device and material surface hair bulb data acquisition method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108492357A (en) | A kind of 3D 4 D datas acquisition method and device based on laser | |
CN109035379B (en) | A kind of 360 ° of 3D measurements of object and information acquisition device | |
CN111060023B (en) | High-precision 3D information acquisition equipment and method | |
CN110543871B (en) | Point cloud-based 3D comparison measurement method | |
CN105141939B (en) | Three-dimensional depth perception method and three-dimensional depth perception device based on adjustable working range | |
CN109443199B (en) | 3D information measuring system based on intelligent light source | |
CN108470373B (en) | It is a kind of based on infrared 3D 4 D data acquisition method and device | |
CN208653402U (en) | Image acquisition equipment, 3D information comparison device, mating object generating means | |
CN109394168B (en) | A kind of iris information measuring system based on light control | |
CN109141240B (en) | A kind of measurement of adaptive 3 D and information acquisition device | |
CN109146961B (en) | 3D measures and acquisition device based on virtual matrix | |
CN109766876A (en) | Contactless fingerprint acquisition device and method | |
CN109285109B (en) | A kind of multizone 3D measurement and information acquisition device | |
CN111006586B (en) | Intelligent control method for 3D information acquisition | |
CN111060008B (en) | 3D intelligent vision equipment | |
CN108470149A (en) | A kind of 3D 4 D datas acquisition method and device based on light-field camera | |
CN209279885U (en) | Image capture device, 3D information comparison and mating object generating means | |
CN211178345U (en) | Three-dimensional acquisition equipment | |
CN108550184A (en) | A kind of biological characteristic 3D 4 D datas recognition methods and system based on light-field camera | |
CN111445528A (en) | Multi-camera common calibration method in 3D modeling | |
CN109084679B (en) | A kind of 3D measurement and acquisition device based on spatial light modulator | |
CN209401042U (en) | Contactless fingerprint acquisition device | |
CN109394170B (en) | A kind of iris information measuring system of no-reflection | |
CN209103318U (en) | A kind of iris shape measurement system based on illumination | |
CN103269413A (en) | Multi-source video fusion system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20180904 |