CN109151437B - Whole body modeling device and method based on 3D camera - Google Patents

Whole body modeling device and method based on 3D camera

Info

Publication number
CN109151437B
Authority
CN
China
Prior art keywords
image
module
depth perception
initial sub
whole
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811008505.7A
Other languages
Chinese (zh)
Other versions
CN109151437A (en)
Inventor
Shen Yong (沈勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qingyan Heshi Technology Co ltd
Original Assignee
Angrui Shanghai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Angrui Shanghai Information Technology Co Ltd
Priority to CN201811008505.7A
Publication of CN109151437A
Application granted
Publication of CN109151437B

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a whole body modeling device and method based on a 3D camera. The device comprises at least one 3D camera and a processing end; the processing end comprises a dividing module, an acquisition module, a processing module and a splicing module. The at least one 3D camera is used for acquiring an initial whole-body image of a shooting target; the dividing module is used for dividing the initial whole-body image into a plurality of initial sub-images; the acquisition module is used for acquiring a depth perception code matched with each initial sub-image; the processing module is used for adjusting the parameters of the corresponding depth perception code according to the initial sub-image; and the splicing module is used for splicing the parameter-adjusted depth perception codes into a whole-body depth perception code of the shooting target. The invention can acquire a digital 3D image of the whole body, making the acquired 3D image easier to manage and control, reducing the resources consumed by computation, and reducing the space occupied by the 3D image.

Description

Whole body modeling device and method based on 3D camera
Technical Field
The invention relates to a whole body modeling device and method based on a 3D camera.
Background
A 3D camera is a camera built with 3D lenses. It generally has two or more image pickup lenses, spaced roughly as far apart as human eyes, so that it can capture different views of the same scene much as human eyes do. Holographic 3D, in addition, can present the same image in all directions through dot-grating imaging or lenticular-grating holographic imaging, giving an immersive, as-if-present effect.
Until the first 3D cameras appeared, the 3D revolution centered on Hollywood blockbusters and major sporting events. With the advent of 3D cameras, the technology has moved one step closer to home users. Once such cameras are introduced, every memorable moment of life, such as a child's first steps or a university graduation ceremony, can be captured with a 3D lens.
A 3D camera typically has two or more lenses. Working much like the human brain, it fuses the images from the two lenses into a single 3D image. These images can be played on a 3D television and viewed through so-called active shutter glasses, or viewed directly on a naked-eye 3D display. Active shutter glasses alternately open and close the left and right lenses 60 times per second, so that each eye sees a slightly different picture of the same scene and the brain fuses the two into a single 3D image.
Existing 3D cameras have the drawbacks that the acquired images are difficult to process and control, and the resulting 3D images occupy a large amount of storage space.
Disclosure of Invention
The invention aims to overcome the prior-art defects that images acquired by a 3D camera are difficult to process and control and that 3D images occupy a large amount of space. It provides a whole body modeling device and method based on a 3D camera that can acquire a whole-body digital 3D image, make the acquired 3D image easier to manage and control, reduce the resources consumed by computation, and reduce the space occupied by the 3D image.
The invention solves the technical problems through the following technical scheme:
a whole body modeling device based on a 3D camera is characterized by comprising at least one 3D camera and a processing end, wherein the processing end comprises a dividing module, an acquisition module, a processing module and a splicing module,
the at least one 3D camera is used for acquiring an initial whole body image of a shooting target;
the dividing module is used for dividing the initial whole-body image into a plurality of initial sub-images;
the acquisition module is used for acquiring depth perception codes matched with each initial sub-image;
the processing module is used for adjusting parameters of the corresponding depth perception code according to the initial sub-image;
the splicing module is used for splicing the depth perception codes with the adjusted parameters into whole body depth perception codes of the shooting target.
Preferably, the initial whole-body image includes a structural layer and a pixel layer,
the acquisition module is used for acquiring depth perception codes matched with the structural layer of each initial sub-image;
the processing module is further configured to add the pixel layer corresponding to the initial sub-image to the depth perception code;
the splicing module is used for splicing the depth perception codes which are adjusted in parameters and added with the pixel layers into the whole body depth perception codes of the shooting target.
Preferably, the processing terminal further comprises a selecting module,
the selection module is used for selecting at least 3 splicing feature points on the initial sub-image;
the splicing module is used for splicing the depth perception codes of the adjusted parameters into the whole-body depth perception codes of the shooting targets through splicing feature point superposition.
Preferably, the processing terminal further comprises a determining module,
the judging module is used for judging whether the difference between the area of the pixel layer of the initial sub-image and the area occupied by the corresponding depth perception code reaches a predetermined value; if so, pixel layer compensation data of the initial sub-image are acquired according to the content of the pixel layer of the initial sub-image;
the processing module is further configured to add a pixel layer and pixel layer compensation data corresponding to the initial sub-image to the depth-aware coding.
Preferably, the parameter comprises the number of digital point clouds.
Preferably, the depth perception code includes a pixel layer and a structural layer; a plurality of control points for controlling the shape of the structural layer are provided on the depth perception code, and the processing module is configured to adjust the control points according to the shape of the initial sub-image so as to adjust the parameters of the target depth perception code.
Preferably, the processing end comprises a placing module,
the placement module is used for overlapping the target depth perception code with the initial sub-image to obtain the distance from each control point on the target depth perception code to the initial sub-image;
the processing module is further configured to take the control point with the largest distance as the target control point and move the target control point by that distance toward the initial sub-image;
the processing module is further configured to move the peripheral control points around the target control point toward the initial sub-image by an adjustment distance, where the adjustment distance of each peripheral control point is inversely proportional to its distance from the target control point and is smaller than the movement distance of the target control point.
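For illustration only, one concrete way to realize this relation (the patent fixes neither the proportionality constant nor the cap) is to set the adjustment distance of peripheral control point i to a_i = min(c / d_i, 0.9 · D), where d_i is its distance to the target control point, D is the movement distance of the target control point, c is a tuning constant, and the factor 0.9 keeps every a_i strictly smaller than D.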
The invention also provides a whole body modeling method based on the 3D camera, which is characterized in that the whole body modeling method obtains the whole body depth perception code of the shooting target through the whole body modeling device.
On the basis of common knowledge in the field, the above preferred conditions can be arbitrarily combined to obtain the preferred embodiments of the invention.
The positive progress effects of the invention are as follows:
the whole body modeling device and method based on the 3D camera can acquire the digital 3D images of the whole body, so that the acquired 3D images are easier to manage and control, the resources consumed by calculation can be reduced, and the space occupied by the 3D images is reduced.
Drawings
Fig. 1 is a flowchart of a whole body modeling method of embodiment 1 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
This embodiment provides a whole body modeling device based on a 3D camera. The device includes three 3D cameras and a processing end; the three cameras are arranged in sequence from top to bottom. The processing end can be a computer, a mobile phone, or a cloud server.
The processing end comprises a dividing module, an acquisition module, a processing module, a selection module, a judging module and a splicing module.
The 3D cameras are used for acquiring the initial whole-body image of the shooting target: each camera acquires a partial initial image of the target, and the partial images are then spliced to form the initial whole-body image.
The initial whole-body image includes a structural layer and a pixel layer.
The dividing module is used for dividing the initial whole-body image into a plurality of initial sub-images.
The acquisition module is configured to acquire a depth perception code matched with each initial sub-image; specifically, it acquires the depth perception code matched with the structural layer of each initial sub-image.
In depth perception coding, a more standardized digital point cloud can be derived from the initial 3D point cloud and the RGB image through artificial intelligence; each digital point carries a label, and the linkage relations among the digital points can likewise be acquired through artificial-intelligence learning.
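To make the data flow concrete, the following is a minimal, hypothetical sketch of what one depth perception code could hold, combining the structural layer (a labeled digital point cloud with learned linkage relations and control points) and the pixel layer. None of these names come from the patent; they are illustrative assumptions only.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class DepthPerceptionCode:
    """Hypothetical container for one depth perception code.

    structural_layer: N x 3 digital point cloud (standardized via AI).
    labels:           one semantic label per digital point.
    linkage:          learned linkage relations between digital points
                      (adjacency list: point index -> linked indices).
    control_points:   indices of points that control the structural
                      layer's shape (used when adjusting parameters).
    pixel_layer:      H x W x 3 RGB texture attached to the code.
    """
    structural_layer: np.ndarray                  # shape (N, 3)
    labels: list[str]                             # length N
    linkage: dict[int, list[int]] = field(default_factory=dict)
    control_points: list[int] = field(default_factory=list)
    pixel_layer: np.ndarray | None = None         # shape (H, W, 3)

    @property
    def num_points(self) -> int:
        # The "number of digital point clouds" parameter mentioned in
        # the patent is read here as the structural layer's point count.
        return len(self.structural_layer)
```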
The processing module is used for adjusting the parameters of the corresponding depth perception code according to the initial sub-image, and is further configured to add the pixel layer corresponding to the initial sub-image to the depth perception code.
The splicing module is used for splicing the depth perception codes which are adjusted in parameters and added with the pixel layers into the whole body depth perception codes of the shooting target.
The specific splicing mode of the splicing module is as follows:
the selection module is used for selecting at least 3 splicing feature points on the initial sub-image;
the splicing module is used for splicing the parameter-adjusted depth perception codes into the whole-body depth perception code of the shooting target by superposing the splicing feature points.
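For illustration, splicing by "feature point superposition" can be read as estimating a rigid transform that brings at least 3 corresponding points into coincidence. The least-squares solution below (Kabsch algorithm) is one plausible choice, not something the patent prescribes.

```python
import numpy as np

def align_by_feature_points(src: np.ndarray, dst: np.ndarray):
    """Rigid transform (R, t) superposing K >= 3 splicing feature points
    src onto dst in the least-squares sense (Kabsch algorithm)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def apply_transform(points: np.ndarray, R: np.ndarray, t: np.ndarray):
    """Carry a whole sub-image's point cloud along with its (R, t)."""
    return points @ R.T + t
```

Three non-collinear point pairs are exactly the minimum that determines a unique rigid transform in 3D, which is consistent with the patent's requirement of at least 3 splicing feature points.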
The judging module is used for judging whether the difference between the area of the pixel layer of the initial sub-image and the area occupied by the corresponding depth perception code reaches a predetermined value; if so, pixel layer compensation data of the initial sub-image are acquired according to the content of the pixel layer of the initial sub-image;
the processing module is further configured to add the pixel layer and the pixel layer compensation data corresponding to the initial sub-image to the depth perception code.
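As a sketch only: the area comparison might be implemented over binary coverage masks, with the compensation data filled from the pixel layer's own content. Both the mask representation and the fill rule below are assumptions, since the patent does not define them.

```python
import numpy as np

def needs_compensation(pixel_mask: np.ndarray,
                       code_mask: np.ndarray,
                       predetermined_value: int) -> bool:
    """True when the area difference between the sub-image's pixel layer
    and the region occupied by its depth perception code reaches the
    predetermined value. Masks are boolean (H, W) coverage images."""
    return abs(int(pixel_mask.sum()) - int(code_mask.sum())) \
        >= predetermined_value

def pixel_layer_compensation(pixel_layer: np.ndarray,
                             pixel_mask: np.ndarray,
                             code_mask: np.ndarray) -> np.ndarray:
    """Fill the region the code occupies but the pixel layer does not,
    using the mean color of the existing pixel-layer content (a crude,
    illustrative stand-in for 'according to the content')."""
    gap = code_mask & ~pixel_mask                 # uncovered region
    comp = np.zeros_like(pixel_layer)
    comp[gap] = pixel_layer[pixel_mask].mean(axis=0)
    return comp
```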
The parameters include the number of digital point clouds.
Referring to fig. 1, using the above whole body modeling device, this embodiment further provides a whole body modeling method, comprising:
Step 100, acquiring an initial whole-body image of the shooting target.
Step 101, dividing the initial whole-body image into a plurality of initial sub-images.
Step 102, acquiring a depth perception code matched with the structural layer of each initial sub-image.
Step 103, adjusting the parameters of the corresponding depth perception code according to the initial sub-image.
Step 104, adding the pixel layer corresponding to the initial sub-image to the depth perception code.
Step 104 further includes: judging whether the difference between the area of the pixel layer of the initial sub-image and the area occupied by the corresponding depth perception code reaches a predetermined value; if so, acquiring pixel layer compensation data of the initial sub-image according to the content of its pixel layer, and then adding the pixel layer together with the compensation data to the depth perception code.
Step 105, selecting at least 3 splicing feature points on the initial sub-image.
Step 106, splicing the parameter-adjusted, pixel-layer-added depth perception codes into the whole-body depth perception code of the shooting target by superposing the splicing feature points.
In this way, the pixel layer and the pixel layer compensation data corresponding to each initial sub-image are incorporated into the whole-body depth perception code.
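Purely as an illustration of how steps 100 to 106 compose, the sketch below threads them into one routine. Every callable it takes is hypothetical, since the patent leaves the internals of each step (in particular the AI-based matching of step 102) unspecified.

```python
def whole_body_modeling(initial_whole_body_image,
                        divide,                 # step 101
                        match_code,             # step 102 (AI matching)
                        adjust_parameters,      # step 103
                        attach_pixel_layer,     # step 104 (+ compensation)
                        select_feature_points,  # step 105
                        splice):                # step 106
    """Compose steps 101-106 on an already acquired whole-body image
    (step 100). Each stage is injected as a callable because the patent
    does not fix its implementation."""
    sub_images = divide(initial_whole_body_image)
    prepared = []
    for sub in sub_images:
        code = match_code(sub)                  # match structural layer
        code = adjust_parameters(code, sub)     # e.g. point-cloud count
        code = attach_pixel_layer(code, sub)    # plus compensation data
        feats = select_feature_points(sub)      # at least 3 points
        prepared.append((code, feats))
    # Superpose the splicing feature points to form the whole-body code.
    return splice(prepared)
```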
Example 2
This embodiment is substantially the same as embodiment 1 except that:
the depth perception code comprises a pixel layer and a structural layer, a plurality of control points used for controlling the shape of the structural layer are arranged on the depth perception code, and the processing module is used for adjusting the control points according to the shape of the initial image so as to adjust the parameters of the target depth perception code.
The processing end further comprises a placement module,
the placement module is used for overlapping the target depth perception code with the initial sub-image to obtain the distance from each control point on the target depth perception code to the initial sub-image;
the processing module is further configured to take the control point with the largest distance as the target control point and move the target control point by that distance toward the initial sub-image;
the processing module is further configured to move the peripheral control points around the target control point toward the initial sub-image by an adjustment distance, where the adjustment distance of each peripheral control point is inversely proportional to its distance from the target control point and is smaller than the movement distance of the target control point.
The whole-body modeling method of the present embodiment includes:
overlapping the target depth perception code with the initial sub-image to obtain the distance from each control point on the target depth perception code to the initial sub-image;
taking the control point with the largest distance as the target control point, and moving the target control point by that distance toward the initial sub-image;
and moving the peripheral control points around the target control point toward the initial sub-image by an adjustment distance, where the adjustment distance of each peripheral control point is inversely proportional to its distance from the target control point and is smaller than the movement distance of the target control point.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that these are by way of example only, and that the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications are within the scope of the invention.

Claims (6)

1. A whole body modeling device based on a 3D camera is characterized by comprising at least one 3D camera and a processing end, wherein the processing end comprises a dividing module, an acquisition module, a processing module and a splicing module,
the at least one 3D camera is used for acquiring an initial whole body image of a shooting target;
the dividing module is used for dividing the initial whole-body image into a plurality of initial sub-images;
the acquisition module is used for acquiring depth perception codes matched with each initial sub-image;
the processing module is used for adjusting parameters of the corresponding depth perception code according to the initial sub-image;
the splicing module is used for splicing the depth perception codes with the adjusted parameters into whole body depth perception codes of the shooting target;
the processing end also comprises a judging module,
the judging module is used for judging whether the difference between the area of the pixel layer of the initial sub-image and the area occupied by the corresponding depth perception code reaches a predetermined value; if so, pixel layer compensation data of the initial sub-image are acquired according to the content of the pixel layer of the initial sub-image;
the processing module is further used for adding the pixel layer and the pixel layer compensation data corresponding to the initial sub-image to the depth perception code;
the processing end further comprises a placement module,
the placement module is used for placing the matched depth perception code and the initial sub-image in an overlapping mode to obtain the distance from a control point on the matched depth perception code to the initial sub-image;
the processing module is further configured to acquire the control point with the largest distance as a target control point, and move the target control point by the distance towards the direction of the initial sub-image;
the processing module is further configured to move the peripheral control points around the target control point in the direction of the initial sub-image by an adjustment distance, the adjustment distance of each peripheral control point is inversely proportional to the distance from the peripheral control point to the target control point, and the adjustment distance is smaller than the movement distance of the target control point.
2. The whole-body modeling apparatus of claim 1, wherein the initial whole-body image includes a structural layer and a pixel layer,
the acquisition module is used for acquiring depth perception codes matched with the structural layer of each initial sub-image;
the processing module is further configured to add the pixel layer corresponding to the initial sub-image to the depth perception code;
the splicing module is used for splicing the depth perception codes which are adjusted in parameters and added with the pixel layers into the whole body depth perception codes of the shooting target.
3. The whole body modeling apparatus of claim 1, wherein the processing end further comprises a selecting module,
the selection module is used for selecting at least 3 splicing feature points on the initial sub-image;
the splicing module is used for splicing the depth perception codes of the adjusted parameters into the whole-body depth perception codes of the shooting targets through splicing feature point superposition.
4. The whole body modeling apparatus of claim 1, wherein the parameter comprises a number of digital point clouds.
5. The whole-body modeling apparatus according to claim 1, wherein the depth perception code includes a pixel layer and a structural layer, the depth perception code has a plurality of control points for controlling the shape of the structural layer, and the processing module is configured to adjust the control points according to the shape of the initial sub-image to adjust the parameters of the matched depth perception code.
6. A whole-body modeling method based on a 3D camera, characterized in that the whole-body modeling method obtains a whole-body depth perception code of a photographic subject by the whole-body modeling apparatus according to any one of claims 1 to 5.
CN201811008505.7A 2018-08-31 2018-08-31 Whole body modeling device and method based on 3D camera Active CN109151437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811008505.7A CN109151437B (en) 2018-08-31 2018-08-31 Whole body modeling device and method based on 3D camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811008505.7A CN109151437B (en) 2018-08-31 2018-08-31 Whole body modeling device and method based on 3D camera

Publications (2)

Publication Number Publication Date
CN109151437A CN109151437A (en) 2019-01-04
CN109151437B 2020-09-01

Family

ID=64825790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811008505.7A Active CN109151437B (en) 2018-08-31 2018-08-31 Whole body modeling device and method based on 3D camera

Country Status (1)

Country Link
CN (1) CN109151437B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1145349A (en) * 1997-07-29 1999-02-16 Olympus Optical Co Ltd Wire frame model matching device, method therefor and recording medium
US20040136590A1 (en) * 2002-09-20 2004-07-15 Albert-Jan Brouwer Means of partitioned matching and selective refinement in a render, match, and refine iterative 3D scene model refinement system through propagation of model element identifiers

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101043855A (zh) * 2004-10-22 2007-09-26 Koninklijke Philips Electronics N.V. Real time stereoscopic imaging apparatus and method
CN103180883A (zh) * 2010-10-07 2013-06-26 Sungevity Inc. Rapid 3d modeling
CN104103090A (zh) * 2013-04-03 2014-10-15 Beijing Samsung Telecommunication Technology Research Co., Ltd. Image processing method, customized human body display method and image processing system
CN103279987A (zh) * 2013-06-18 2013-09-04 Xiamen University of Technology Object fast three-dimensional modeling method based on Kinect
CN108305312A (zh) * 2017-01-23 2018-07-20 Tencent Technology (Shenzhen) Co., Ltd. The generation method and device of 3D virtual images
CN108391116A (zh) * 2018-02-26 2018-08-10 Angrui (Shanghai) Information Technology Co., Ltd. Total body scan unit based on 3D imaging technique and scan method

Also Published As

Publication number Publication date
CN109151437A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN107925751B (en) System and method for multiple views noise reduction and high dynamic range
CN107959778B (en) Imaging method and device based on dual camera
CN105814875B (en) Selecting camera pairs for stereo imaging
US8208048B2 (en) Method for high dynamic range imaging
CN103370943B (en) Imaging device and formation method
CN108347505B (en) Mobile terminal with 3D imaging function and image generation method
RU2565855C1 (en) Image capturing device, method of controlling said device and programme
CN108600729B (en) Dynamic 3D model generation device and image generation method
CN104580878A (en) Automatic effect method for photography and electronic apparatus
KR20160015737A (en) Image photographig apparatus and method for photographing image
CN103782586A (en) Imaging device
CN103918249B (en) Imaging device and imaging method
CN103874960A (en) Monocular stereoscopic imaging device, imaging method, and program
CN108111835B (en) Shooting device, system and method for 3D image imaging
CN108391116B (en) Whole body scanning device and method based on 3D imaging technology
CN108513122B (en) Model adjusting method and model generating device based on 3D imaging technology
CN108737808B (en) 3D model generation device and method
CN109151437B (en) Whole body modeling device and method based on 3D camera
CN109636926B (en) 3D global free deformation method and device
CN109218699B (en) Image processing device and method based on 3D camera
CN109218703B (en) Data processing device and method based on 3D camera
CN110876050B (en) Data processing device and method based on 3D camera
CN109348208B (en) Perception code acquisition device and method based on 3D camera
CN108195308B (en) 3D scanning device, system and method
CN109272453B (en) Modeling device and positioning method based on 3D camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230728

Address after: 201703 Room 2134, Floor 2, No. 152 and 153, Lane 3938, Huqingping Road, Qingpu District, Shanghai

Patentee after: Shanghai Qingyan Heshi Technology Co.,Ltd.

Address before: 201703 No.206, building 1, no.3938 Huqingping Road, Qingpu District, Shanghai

Patentee before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.