CN109218703A - Data processing device and method based on 3D camera - Google Patents
- Publication number
- CN109218703A CN109218703A CN201811008563.XA CN201811008563A CN109218703A CN 109218703 A CN109218703 A CN 109218703A CN 201811008563 A CN201811008563 A CN 201811008563A CN 109218703 A CN109218703 A CN 109218703A
- Authority
- CN
- China
- Prior art keywords
- module
- depth perception
- initial image
- coding
- data processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a data processing device and method based on a 3D camera. The data processing device includes a 3D camera and a processing end, and the processing end includes an acquisition module. The 3D camera is used to capture an initial image of a photographic subject; the acquisition module is used to obtain a depth perception coding corresponding to the initial image, with a relational expression provided between two target coding points on the depth perception coding. The data processing device and method based on a 3D camera of the invention can obtain a more standardized digital point cloud, make the resulting 3D image easier to manage and control, and reduce the computing resources consumed. When the initial image is converted into a perceptual coding, the point cloud is compressed and simplified, reducing the storage space occupied by the 3D image.
Description
Technical field
The present invention relates to a data processing device and method based on a 3D camera.
Background technique
A 3D camera is a camera built with 3D lenses, usually having two or more lenses whose spacing is close to that of human eyes, so it can capture different images of the same scene similar to what human eyes see. Holographic 3D uses five or more lenses on a disk; through dot-grating or pitch-grating holographic imaging, the same image can be viewed from all directions, as if one were present at the scene.
The 3D revolution since the first 3D camera has unfolded around Hollywood blockbusters and major sporting events. With the appearance of consumer 3D cameras, this technology has come a step closer to ordinary users. With such cameras, every memorable moment of life, such as a child's first step or a university graduation celebration, can be captured with a 3D lens.
A 3D camera usually has two or more lenses. Like the human brain, the 3D camera can fuse the two lens images into a single 3D image. These images can be played on a 3D television and viewed by spectators wearing so-called active shutter glasses, or viewed directly on a naked-eye 3D display. 3D shutter glasses switch the left and right lenses alternately 60 times per second, so that each eye sees a slightly different picture of the same scene and the brain perceives the two views as a single 3D photograph.
The images obtained by existing 3D cameras are not easy to process or control, and the resulting 3D images occupy a large amount of storage space.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defects in the prior art that images obtained by a 3D camera are difficult to process and control and that 3D images occupy a large amount of space, by providing a data processing device and method based on a 3D camera that can obtain a more standardized digital point cloud, make the resulting 3D image easier to manage and control, reduce the computing resources consumed, and reduce the space occupied by the 3D image.
The present invention solves the above technical problem through the following technical solutions:
A data processing device based on a 3D camera, characterized in that the data processing device includes a 3D camera and a processing end, and the processing end includes an acquisition module;
the 3D camera is used to capture an initial image of a photographic subject;
the acquisition module is used to obtain a depth perception coding corresponding to the initial image, and a relational expression is provided between two target coding points on the depth perception coding.
The depth perception coding is a digital point cloud that can be edited, optimized, and simplified. A perceptual coding may be a preset digital point cloud in which each digital point has its own label and a definite relation to the other digital points.
Preferably, the processing end further includes a matching module and a processing module; the processing end includes several depth perception codings, and a relational expression is provided between two target coding points on each depth perception coding;
the matching module is used to obtain the target depth perception coding that matches the initial image;
the processing module is used to adjust the parameters of the target depth perception coding according to the initial image.
Preferably, the processing end includes a presetting module, a selection module, and a computing module;
the presetting module is used to obtain an accurate image through an industrial 3D camera;
the selection module is used to select a digital point set of a preset quantity from the digital point cloud of the accurate image;
the computing module is used to compute the NURBS relational expression of the digital point set, taking the digital points in the set as discrete points;
the processing end is used to store the digital point set with the NURBS relational expression as a depth perception coding.
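The fitting step above can be sketched in Python. The patent names no library or exact spline form, so this is a minimal sketch under assumptions: SciPy's `splprep` fits a parametric B-spline (the non-rational special case of NURBS) through the selected digital point set, standing in for the "NURBS relational expression"; the helix data and function name are illustrative only.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def fit_relational_expression(points, smoothing=0.0):
    """Fit a parametric B-spline through a digital point set.
    `points` is an (N, 3) array of digital points treated as discrete
    points; with smoothing=0 the spline interpolates every point."""
    pts = np.asarray(points, dtype=float)
    # splprep expects one sequence per coordinate axis
    tck, u = splprep(list(pts.T), s=smoothing)  # tck = (knots, coefficients, degree)
    return tck, u

# Example: a digital point set sampled from a helix
t = np.linspace(0.0, 2.0 * np.pi, 20)
helix = np.stack([np.cos(t), np.sin(t), t], axis=1)
tck, u = fit_relational_expression(helix)
# Evaluating the stored relation at the fit parameters reproduces the points
reconstructed = np.array(splev(u, tck)).T
```

Storing `tck` alongside the point set would then play the role of storing the point set "with the NURBS relational expression" as a depth perception coding.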
Preferably, the depth perception coding includes a pixel layer and a structure layer; the depth perception coding is provided with several control points for controlling the shape of the structure layer, and the processing module is used to adjust the control points according to the shape of the initial image so as to adjust the parameters of the target depth perception coding.
Preferably, the processing end includes a placement module;
the placement module is used to place the target depth perception coding overlapping the initial image to obtain the distance from each control point on the target depth perception coding to the initial image;
the processing module is also used to take the control point with the largest distance as the target control point and move the target control point toward the initial image by that distance;
the processing module is also used to move the surrounding control points around the target control point toward the initial image by an adjustment distance; the adjustment distance of each surrounding control point is inversely proportional to its distance from the target control point, and each adjustment distance is smaller than the moving distance of the target control point.
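One adjustment pass as described above can be sketched as follows. The patent specifies only that the step is inversely proportional to the distance from the target point and smaller than the target's move; the exact law `step = falloff * d / (1 + r)` and the `falloff` parameter are assumptions for illustration.

```python
import numpy as np

def adjust_toward_image(control_points, dists, directions, falloff=1.0):
    """One adjustment pass: the control point farthest from the initial
    image becomes the target and moves its full distance toward the image;
    every other point moves a smaller, inverse-proportional step.
    `dists` holds each point's distance to the image, `directions` the
    per-point unit vectors toward the image."""
    orig = np.asarray(control_points, dtype=float)
    dirs = np.asarray(directions, dtype=float)
    dists = np.asarray(dists, dtype=float)
    pts = orig.copy()
    i = int(np.argmax(dists))            # target control point: largest distance
    d = float(dists[i])
    pts[i] += dirs[i] * d                # moved toward the image by that distance
    for j in range(len(pts)):
        if j == i:
            continue
        r = np.linalg.norm(orig[j] - orig[i])          # distance to the target point
        step = min(falloff * d / (1.0 + r), 0.99 * d)  # inverse-proportional, < d
        pts[j] += dirs[j] * step
    return pts, i

# Example: two control points, both offset from the image along z
moved, target = adjust_toward_image([[0., 0., 0.], [1., 0., 0.]],
                                    [2.0, 1.0],
                                    [[0., 0., 1.], [0., 0., 1.]])
```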
Preferably, the processing end includes a placement module;
the placement module is used to place each depth perception coding overlapping the initial image to obtain the distance from each digital point on the depth perception coding to the initial image;
for each depth perception coding, the matching module is used to add up the distances of its digital points to obtain a whole matching value, and the depth perception coding with the smallest whole matching value is taken as the target depth perception coding.
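The matching rule above can be sketched directly. The patent does not fix the metric, so "distance to the initial image" is read here, as an assumption, as the distance from each digital point to the nearest point of the image point cloud.

```python
import numpy as np

def whole_matching_value(coding_points, image_points):
    """Sum, over every digital point of one depth perception coding, of
    its distance to the nearest point of the initial image."""
    c = np.asarray(coding_points, dtype=float)[:, None, :]
    img = np.asarray(image_points, dtype=float)[None, :, :]
    per_point = np.min(np.linalg.norm(c - img, axis=2), axis=1)
    return float(per_point.sum())

def select_target_coding(codings, image_points):
    """The coding with the smallest whole matching value is the target."""
    values = [whole_matching_value(c, image_points) for c in codings]
    return int(np.argmin(values)), values

# Example: the second candidate coincides with the image, so it matches best
initial_image = [[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]]
codings = [[[0., 0., 0.5], [1., 0., 0.5], [0., 1., 0.5]], initial_image]
best, values = select_target_coding(codings, initial_image)
```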
Preferably, the processing end further includes a division module;
the division module is used to divide the depth perception coding into several regions according to faces;
the placement module is used to obtain the distance from each digital point on the depth perception coding to the initial image;
for each region of the depth perception coding, the acquisition module is used to add up the distances of the digital points in the region to obtain a region matching value, and all region matching values are combined by the weights corresponding to their faces to obtain the whole matching value.
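The region-weighted variant can be sketched as below. The patent gives no numerical weights, so the face weights (the "specific gravity" of each face) are illustrative assumptions, as is the nearest-point reading of "distance to the initial image".

```python
import numpy as np

def nearest_distances(points, image_points):
    # distance from each digital point to its nearest point of the initial image
    p = np.asarray(points, dtype=float)[:, None, :]
    img = np.asarray(image_points, dtype=float)[None, :, :]
    return np.min(np.linalg.norm(p - img, axis=2), axis=1)

def weighted_whole_matching(regions, image_points, face_weights):
    """Sum the distances within each face region to get its region
    matching value, then combine the region values by their face weights
    to get the whole matching value."""
    region_values = [float(nearest_distances(r, image_points).sum())
                     for r in regions]
    whole = sum(w * v for w, v in zip(face_weights, region_values))
    return whole, region_values

# Example: two one-point regions, the first face weighted more heavily
image_pts = [[0., 0., 0.], [1., 0., 0.]]
whole, region_vals = weighted_whole_matching(
    [[[0., 0., 1.]], [[1., 0., 2.]]], image_pts, [0.7, 0.3])
```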
The present invention also provides a data processing method based on a 3D camera, characterized in that the data processing method obtains the depth perception coding of an initial image through the data processing device described above.
On the basis of common knowledge in the art, the above preferred conditions can be combined arbitrarily to obtain preferred embodiments of the present invention.
The positive effect of the present invention is that:
The data processing device and method based on a 3D camera of the invention can obtain a more standardized digital point cloud, make the resulting 3D image easier to manage and control, reduce the computing resources consumed, and reduce the space occupied by the 3D image.
Detailed description of the invention
Fig. 1 is a flowchart of the data processing method of Embodiment 1 of the present invention.
Specific embodiment
The present invention is further illustrated below by way of embodiments, but is not thereby limited to the scope of those embodiments.
Embodiment 1
This embodiment provides a data processing device based on a 3D camera. The data processing device includes a 3D camera and a processing end.
The processing end includes an acquisition module, a matching module, a processing module, a presetting module, a selection module, and a computing module.
The 3D camera is used to capture an initial image of a photographic subject. In this embodiment, the initial image includes a 3D point cloud and an RGB image.
The initial image consists of coarse points that cannot be controlled and contain much noise. This embodiment profiles the initial image through artificial-intelligence learning over many standardized digital images.
The acquisition module is used to obtain a depth perception coding corresponding to the initial image; a relational expression is provided between two target coding points on the depth perception coding.
The depth perception coding is an optimized digital point cloud. The present invention can obtain a more standardized digitized image from the initial 3D point cloud and RGB image through artificial intelligence.
The specific method for obtaining the relational expression is as follows:
the processing end includes several depth perception codings, and a relational expression is provided between two target coding points on each depth perception coding;
the matching module is used to obtain the target depth perception coding that matches the initial image;
the processing module is used to adjust the parameters of the target depth perception coding according to the initial image.
The relational expression in the prestored depth perception coding is realized in the following manner:
the presetting module is used to obtain an accurate image through an industrial 3D camera;
the selection module is used to select a digital point set of a preset quantity from the digital point cloud of the accurate image;
the computing module is used to compute the NURBS relational expression of the digital point set, taking the digital points in the set as discrete points;
the processing end is used to store the digital point set with the NURBS relational expression as a depth perception coding.
In general, the application uses a high-precision industrial 3D camera to obtain a high-precision digital point cloud and then adds the relational expression to that point cloud. Using the high-precision digital point cloud, a consumer 3D camera can quickly obtain a depth perception coding of the photographic subject with the relational expression, which is convenient for the user.
In this embodiment, the matching between the depth perception coding and the initial image is accomplished as follows.
The depth perception coding includes a pixel layer and a structure layer, where the structure layer is the digital point cloud layer. The depth perception coding is provided with several control points for controlling the shape of the structure layer, and the processing module is used to adjust the control points according to the shape of the initial image so as to adjust the parameters of the target depth perception coding.
Specifically, the processing end further includes a placement module.
In the application, the control points coincide with the positions of the digital points.
The placement module is used to place the target depth perception coding overlapping the initial image to obtain the distance from each control point on the target depth perception coding to the initial image;
the processing module is also used to take the control point with the largest distance as the target control point and move the target control point toward the initial image by that distance;
the processing module is also used to move the surrounding control points around the target control point toward the initial image by an adjustment distance; the adjustment distance of each surrounding control point is inversely proportional to its distance from the target control point, and each adjustment distance is smaller than the moving distance of the target control point.
Referring to Fig. 1, using the above data processing device, this embodiment also provides a data processing method, including:
Step 100: the presetting module obtains an accurate image through an industrial 3D camera;
Step 101: the selection module selects a digital point set of a preset quantity from the digital point cloud of the accurate image;
Step 102: the computing module computes the NURBS relational expression of the digital point set, taking the digital points in the set as discrete points;
Step 103: the processing end stores the digital point set with the NURBS relational expression as a depth perception coding;
Step 104: the 3D camera captures an initial image of a photographic subject;
Step 105: the matching module obtains the target depth perception coding that matches the initial image;
Step 106: the placement module places the target depth perception coding overlapping the initial image to obtain the distance from each control point on the target depth perception coding to the initial image;
Step 107: the processing module takes the control point with the largest distance as the target control point and moves the target control point toward the initial image by that distance;
Step 108: the processing module moves the surrounding control points around the target control point toward the initial image by an adjustment distance to obtain the depth perception coding corresponding to the initial image; the adjustment distance of each surrounding control point is inversely proportional to its distance from the target control point, and each adjustment distance is smaller than the moving distance of the target control point.
Through the above steps, the acquisition module obtains the depth perception coding corresponding to the initial image, with a relational expression between two target coding points on the depth perception coding.
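Steps 106 through 108 can be sketched as one routine. Two assumptions are flagged: "distance to the initial image" is taken as the distance to the nearest image point, and the pass is iterated until the residual vanishes, whereas the patent text describes a single pass.

```python
import numpy as np

def nearest_on_image(points, image_points):
    # per control point: distance to, and coordinates of, its nearest image point
    p = np.asarray(points, dtype=float)[:, None, :]
    img = np.asarray(image_points, dtype=float)
    d = np.linalg.norm(p - img[None, :, :], axis=2)
    idx = np.argmin(d, axis=1)
    return d[np.arange(len(points)), idx], img[idx]

def fit_coding_to_image(control_points, image_points, max_iters=50):
    """Repeated steps 106-108: the farthest control point becomes the
    target and snaps onto its nearest image point; the remaining points
    move a smaller, inverse-proportional step toward the image."""
    pts = np.asarray(control_points, dtype=float).copy()
    for _ in range(max_iters):
        dists, nearest = nearest_on_image(pts, image_points)
        i = int(np.argmax(dists))
        d = float(dists[i])
        if d < 1e-9:
            break                              # every control point sits on the image
        pts[i] = nearest[i]                    # target moves its full distance
        for j in range(len(pts)):
            if j == i or dists[j] < 1e-12:
                continue
            r = np.linalg.norm(pts[j] - pts[i])
            step = min(d / (1.0 + r), 0.99 * d)        # inverse-proportional, < d
            direction = (nearest[j] - pts[j]) / dists[j]
            pts[j] += direction * min(step, dists[j])  # never overshoot the image
    return pts

# Example: four control points hovering 0.2 above a square of image points
square = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
noisy = square + np.array([0., 0., 0.2])
fitted = fit_coding_to_image(noisy, square)
```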
Embodiment 2
Embodiment 3
This embodiment is substantially the same as Embodiment 2, the only difference being:
the placement module is also used to place each depth perception coding overlapping the initial image to obtain the distance from each digital point on the depth perception coding to the initial image;
for each depth perception coding, the matching module is used to add up the distances of its digital points to obtain a whole matching value, and the depth perception coding with the smallest whole matching value is taken as the target depth perception coding.
Specifically, the processing end further includes a division module;
the division module is used to divide the depth perception coding into several regions according to faces;
the placement module is used to obtain the distance from each digital point on the depth perception coding to the initial image;
for each region of the depth perception coding, the acquisition module is used to add up the distances of the digital points in the region to obtain a region matching value, and all region matching values are combined by the weights corresponding to their faces to obtain the whole matching value.
The data processing method of this embodiment includes:
the division module divides the depth perception coding into several regions according to faces;
the placement module obtains the distance from each digital point on the depth perception coding to the initial image;
for each region of the depth perception coding, the acquisition module adds up the distances of the digital points in the region to obtain a region matching value, and combines all region matching values by the weights corresponding to their faces to obtain the whole matching value.
Using the above data processing method, the depth perception coding closest to the initial image can be obtained, and by adjusting that depth perception coding, the depth perception coding corresponding to the initial image can be obtained.
Although specific embodiments of the present invention have been described above, it will be appreciated by those skilled in the art that these are merely illustrative, and the protection scope of the present invention is defined by the appended claims. Those skilled in the art may make various changes and modifications without departing from the principle and substance of the present invention, and all such changes and modifications fall within the protection scope of the present invention.
Claims (8)
1. A data processing device based on a 3D camera, characterized in that the data processing device includes a 3D camera and a processing end, and the processing end includes an acquisition module;
the 3D camera is used to capture an initial image of a photographic subject;
the acquisition module is used to obtain a depth perception coding corresponding to the initial image, and a relational expression is provided between two target coding points on the depth perception coding.
2. The data processing device according to claim 1, characterized in that the processing end further includes a matching module and a processing module; the processing end includes several depth perception codings, and a relational expression is provided between two target coding points on each depth perception coding;
the matching module is used to obtain the target depth perception coding that matches the initial image;
the processing module is used to adjust the parameters of the target depth perception coding according to the initial image.
3. The data processing device according to claim 2, characterized in that the processing end includes a presetting module, a selection module, and a computing module;
the presetting module is used to obtain an accurate image through an industrial 3D camera;
the selection module is used to select a digital point set of a preset quantity from the digital point cloud of the accurate image;
the computing module is used to compute the NURBS relational expression of the digital point set, taking the digital points in the set as discrete points;
the processing end is used to store the digital point set with the NURBS relational expression as a depth perception coding.
4. The data processing device according to claim 2, characterized in that the depth perception coding includes a pixel layer and a structure layer; the depth perception coding is provided with several control points for controlling the shape of the structure layer, and the processing module is used to adjust the control points according to the shape of the initial image so as to adjust the parameters of the target depth perception coding.
5. The data processing device according to claim 4, characterized in that the processing end includes a placement module;
the placement module is used to place the target depth perception coding overlapping the initial image to obtain the distance from each control point on the target depth perception coding to the initial image;
the processing module is also used to take the control point with the largest distance as the target control point and move the target control point toward the initial image by that distance;
the processing module is also used to move the surrounding control points around the target control point toward the initial image by an adjustment distance; the adjustment distance of each surrounding control point is inversely proportional to its distance from the target control point, and each adjustment distance is smaller than the moving distance of the target control point.
6. The data processing device according to claim 4, characterized in that the processing end includes a placement module;
the placement module is used to place each depth perception coding overlapping the initial image to obtain the distance from each digital point on the depth perception coding to the initial image;
for each depth perception coding, the matching module is used to add up the distances of its digital points to obtain a whole matching value, and the depth perception coding with the smallest whole matching value is taken as the target depth perception coding.
7. The data processing device according to claim 6, characterized in that the processing end further includes a division module;
the division module is used to divide the depth perception coding into several regions according to faces;
the placement module is used to obtain the distance from each digital point on the depth perception coding to the initial image;
for each region of a depth perception coding, the acquisition module is used to add up the distances of the digital points in the region to obtain a region matching value, and all region matching values are combined by the weights corresponding to their faces to obtain the whole matching value.
8. A data processing method based on a 3D camera, characterized in that the data processing method obtains the depth perception coding of an initial image through the data processing device according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811008563.XA CN109218703B (en) | 2018-08-31 | 2018-08-31 | Data processing device and method based on 3D camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109218703A true CN109218703A (en) | 2019-01-15 |
CN109218703B CN109218703B (en) | 2020-09-01 |
Family
ID=64985353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811008563.XA Active CN109218703B (en) | 2018-08-31 | 2018-08-31 | Data processing device and method based on 3D camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109218703B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108305312A (en) * | 2017-01-23 | 2018-07-20 | 腾讯科技(深圳)有限公司 | The generation method and device of 3D virtual images |
CN108391116A (en) * | 2018-02-26 | 2018-08-10 | 盎锐(上海)信息科技有限公司 | Total body scan unit based on 3D imaging technique and scan method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2023-07-27
Address after: Room 2134, Floor 2, Nos. 152 and 153, Lane 3938, Huqingping Road, Qingpu District, Shanghai 201703
Patentee after: Shanghai Qingyan Heshi Technology Co.,Ltd.
Address before: No. 206, Building 1, No. 3938 Huqingping Road, Qingpu District, Shanghai 201703
Patentee before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.