CN109657559A - Point cloud depth perception coding engine - Google Patents

Point cloud depth perception coding engine

Info

Publication number
CN109657559A
Authority
CN
China
Prior art keywords
image
point
target
submodule
digital
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811403521.6A
Other languages
Chinese (zh)
Other versions
CN109657559B (en)
Inventor
吴跃华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yujing Information Technology Co.,Ltd.
Original Assignee
Angrui Shanghai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Angrui Shanghai Information Technology Co Ltd filed Critical Angrui Shanghai Information Technology Co Ltd
Priority to CN201811403521.6A priority Critical patent/CN109657559B/en
Publication of CN109657559A publication Critical patent/CN109657559A/en
Application granted granted Critical
Publication of CN109657559B publication Critical patent/CN109657559B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a point cloud depth perception coding engine. The engine obtains a 3D image; matches the 3D image against a prestored image library in which each prestored image carries a tensor model for adjusting the shape of its digital points; generates, through an artificial-intelligence deep-learning algorithm using the tensor model on the prestored image, a 3D image whose digital points carry the tensor model; and deforms the digital points of the 3D image through the tensor model. The point cloud depth perception coding engine of the invention obtains a more standardized, shape-adjustable digital point cloud, so that the resulting 3D image is easier to manage and control and computation consumes fewer resources.

Description

Point cloud depth perception coding engine
Technical field
The present invention relates to a point cloud depth perception coding engine.
Background technique
A 3D video camera is a camera built with 3D lenses. It usually has two or more lenses whose spacing is close to the spacing between human eyes, so it can capture different views of the same scene similar to what the two eyes see. A holographic 3D rig has five or more lenses.
From the first 3D video camera onward, the 3D revolution has unfolded around Hollywood blockbusters and major sporting events. With the appearance of consumer 3D video cameras, the technology has moved another step closer to ordinary users. With such a camera, memorable moments of life, such as a child's first steps or a university graduation, can be captured through 3D lenses.
A 3D video camera usually has two or more lenses. Like the human brain, the camera itself can fuse the two lens images into a single 3D picture. These images can be played on a 3D television and viewed through so-called active shutter glasses, or viewed directly on a glasses-free 3D display. 3D shutter glasses switch the left and right lenses on and off up to 60 times per second, so each eye sees a slightly different picture of the same scene and the brain merges the pair into a single 3D image.
The images obtained by existing 3D video cameras are poorly standardized and difficult to control.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the defect that the images obtained by prior-art 3D video cameras are difficult to process and control, and to provide a point cloud depth perception coding engine that obtains a more standardized digital point cloud so that the resulting 3D image is easier to manage and control.
The present invention solves the above technical problem through the following technical solutions:
A point cloud depth perception coding engine, characterized in that the point cloud depth perception coding engine includes a semantic module and a deformation module;
for a 3D image containing a digital point cloud, the semantic module is used to perceive, through an artificial-intelligence deep-learning algorithm, the semantic information of the digital points on the 3D image from the semantic information of the digital points on a prestored image;
the deformation module is used to generate, through the artificial-intelligence deep-learning algorithm using the tensor model on the prestored image, a 3D image whose digital points are provided with the tensor model;
wherein a 3D image whose digital point cloud carries both the semantic information and the tensor model is the perceptual coding.
Preferably, the semantic module includes a matching submodule and a processing submodule, and the point cloud depth perception coding engine further includes an acquisition module;
the acquisition module is used to obtain a 3D image;
the matching submodule is used to match the 3D image against a prestored image library, in which the imaging points of each prestored image are labeled with semantic information;
the processing submodule is used to perceive, through the artificial-intelligence deep-learning algorithm, the semantic information of the imaging points on the 3D image from the semantic information of the imaging points on the prestored image.
Preferably, the semantic module includes a generation submodule;
the generation submodule is used to obtain target images and generate the prestored image library, where each target image is an accurate image obtained by an industrial 3D video camera and the target image points on the accurate image are labeled with semantic information.
Preferably, the point cloud depth perception coding engine further includes an acquisition module, and the deformation module further includes a matching submodule, a processing submodule and a control submodule;
the acquisition module is used to obtain a 3D image;
the matching submodule is used to match the 3D image against a prestored image library, in which each prestored image is provided with a tensor model for adjusting the shape of the digital points of that prestored image;
the processing submodule is used to generate, through the artificial-intelligence deep-learning algorithm using the tensor model on the prestored image, a 3D image whose digital points are provided with the tensor model;
the control submodule is used to control the digital points on the 3D image to deform through the tensor model.
Preferably, the tensor model is a set of functional expressions, defined on the prestored image, that describe the relationships between digital points, and the processing submodule is used to set, through the artificial-intelligence deep-learning algorithm using the functional expressions on the prestored image, the functional expressions between the digital points on the 3D image.
Preferably, each prestored image in the prestored image library is divided into several regions, each region being provided with functional expressions that describe the relationships between the digital points within that region, and the deformation module further includes a partition submodule;
the partition submodule is used to divide the 3D image into regions, through the artificial-intelligence deep-learning algorithm, according to the positions of the regions on the prestored image;
for a target region on the 3D image, the processing submodule is used to set, through the artificial-intelligence deep-learning algorithm using the functional expressions on the prestored image, the functional expressions between the digital points of the 3D image within that target region.
Preferably, for a target prestored image in the prestored image library, the partition submodule is used to obtain the functional expressions between adjacent digital points in the target prestored image, the functional expressions being polynomial functions;
the partition submodule is also used to obtain, through artificial-intelligence deep learning, several candidate dividing lines that cross adjacent digital points, to compute for each dividing line the sum of the highest-order degrees of the polynomial functions of all the adjacent digital points it crosses, and to partition the target prestored image into regions along the dividing lines whose degree sums are below a preset value.
Preferably, the deformation module further includes an adjusting submodule;
the matching submodule is used to match the 3D image with a target image in the prestored image library;
the adjusting submodule is used to adjust, through the artificial-intelligence deep-learning algorithm, the spatial shape of the target image according to the spatial shape of the 3D image;
the processing submodule is used to take the target image after its spatial shape has been adjusted as the 3D image whose digital points are provided with the tensor model.
Preferably, the adjusting submodule is used to place the 3D image so that it overlaps the target image, and to obtain the distance from each digital point on the target image to the 3D image;
the adjusting submodule is also used to take the digital point with the largest distance as a control point and move the control point toward the 3D image by a target length;
the adjusting submodule is also used to move, using the tensor model of the target image, each surrounding control point around the control point toward the 3D image by a computed length, where the computed length of each surrounding control point is inversely proportional to its distance from the control point and is smaller than the target length.
Refer to preferably, the control submodule is used to obtain one for adjusting the adjusting of target number point on the 3D image It enables;
The control submodule is also used to according to regulating command that the target number point is long to target direction move Degree;
The control submodule is also used to the tensor model using the 3D image for number around around target number point Word point is to target direction mobile computing length, the computational length size and surrounding digital point of each surrounding digital point to target number The distance of point is inversely proportional, and the computational length is less than the target length.
On the basis of common knowledge in the art, the above preferred conditions can be combined arbitrarily to obtain the preferred embodiments of the present invention.
The positive effect of the present invention is that:
the point cloud depth perception coding engine of the invention obtains a more standardized, shape-adjustable digital point cloud, so that the resulting 3D image is easier to manage and control and computation consumes fewer resources.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the point cloud depth perception coding engine of Embodiment 1 of the present invention.
Specific embodiment
The present invention is further illustrated below by way of embodiments, but the present invention is not thereby limited to the scope of these embodiments.
Embodiment 1
This embodiment provides a point cloud depth perception coding engine, which includes a semantic module 11 and a deformation module 12.
For a 3D image containing a digital point cloud, the semantic module is used to perceive, through an artificial-intelligence deep-learning algorithm, the semantic information of the digital points on the 3D image from the semantic information of the digital points on a prestored image;
the deformation module is used to generate, through the artificial-intelligence deep-learning algorithm using the tensor model on the prestored image, a 3D image whose digital points are provided with the tensor model;
wherein a 3D image whose digital point cloud carries both the semantic information and the tensor model is the perceptual coding.
Specifically, the semantic module includes a matching submodule 111, a generation submodule 112 and a processing submodule 113, and the point cloud depth perception coding engine further includes an acquisition module.
The acquisition module is used to obtain a 3D image;
the matching submodule is used to match the 3D image against a prestored image library, in which the imaging points of each prestored image are labeled with semantic information;
the processing submodule is used to perceive, through the artificial-intelligence deep-learning algorithm, the semantic information of the imaging points on the 3D image from the semantic information of the imaging points on the prestored image.
The point cloud depth perception coding engine obtains the prestored image library through the generation submodule 112.
The generation submodule is used to obtain target images and generate the prestored image library, where each target image is an accurate image obtained by an industrial 3D video camera and the target image points on the accurate image are labeled with semantic information.
The semantic information of the digital points (imaging points) in the prestored image library can be labeled manually, or the identity of each imaging point can be recognized by artificial intelligence and the semantic information then attached. The semantic information records the identity of the imaging point, so that the initial 3D image is digitized and the machine can obtain the meaning of every imaging point in the image.
The present application learns the regularities of the images in the standard library (the prestored image library), so that a new 3D image can be labeled and the computer can automatically recognize the semantics of each digital point (the meaning carried by a symbol is its semantics).
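To make the label-transfer idea above concrete, the following is a minimal sketch in Python rather than the patent's algorithm: it assumes the captured point cloud has already been aligned with a labeled prestored cloud and simply copies each point's semantic tag from its nearest labeled neighbour. The deep-learning model the patent relies on is left unspecified, so the nearest-neighbour rule and the helper name transfer_semantics are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_semantics(prestored_pts, prestored_labels, new_pts):
    """Propagate semantic labels from a labeled prestored point cloud to an
    (already aligned) newly captured cloud by nearest neighbour.

    prestored_pts    : (N, 3) array of reference point coordinates
    prestored_labels : length-N list of semantic tags (e.g. "nose_tip")
    new_pts          : (M, 3) array of points from the captured 3D image
    returns          : length-M list of inferred semantic tags
    """
    tree = cKDTree(prestored_pts)        # index the labeled reference cloud
    _, idx = tree.query(new_pts, k=1)    # nearest labeled point for each new point
    return [prestored_labels[i] for i in idx]

# usage: label a toy cloud against a two-point reference
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
labels = ["nose_tip", "chin"]
new = np.array([[0.05, 0.01, 0.0], [0.98, -0.02, 0.01]])
print(transfer_semantics(ref, labels, new))   # ['nose_tip', 'chin']
```

In practice the matching submodule would supply the alignment and a learned model would replace the nearest-neighbour rule.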
Further, the deformation module includes a matching submodule 121, a processing submodule 122 and a control submodule 123.
The acquisition module is used to obtain a 3D image;
the matching submodule of the deformation module is used to match the 3D image against a prestored image library, in which each prestored image is provided with a tensor model for adjusting the shape of the digital points of that prestored image;
the processing submodule is used to generate, through the artificial-intelligence deep-learning algorithm using the tensor model on the prestored image, a 3D image whose digital points are provided with the tensor model;
the control submodule is used to control the digital points on the 3D image to deform through the tensor model.
The tensor model is a set of functional expressions, defined on the prestored image, that describe the relationships between digital points; the processing submodule sets, through the artificial-intelligence deep-learning algorithm using the functional expressions on the prestored image, the functional expressions between the digital points on the 3D image.
Through machine-learning algorithms a machine can learn regularities from large amounts of input data and then make recognition judgments. The present application learns the regularities of the images in the standard library (the prestored image library) and thereby obtains the undulation rules of, for example, a face model, such as the relationship between the curve of the nose and the nose itself. From the relationships between digital points a tensor model can be established; a tensor is a multilinear function that can express the linear relationships among vectors, scalars and other tensors.
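As one concrete reading of a "functional expression between digital points", the sketch below fits a low-order polynomial to the depth profile along a row of neighbouring points of a prestored image and evaluates it at a new position. The degree-2 choice, the toy nose-like profile and the numpy polynomial fit are assumptions for illustration, not the tensor model the patent prescribes.

```python
import numpy as np

def fit_relationship(xs, zs, degree=2):
    """Fit a polynomial z = f(x) describing how depth varies along a row of
    neighbouring digital points (a 1-D slice of a prestored image)."""
    return np.polynomial.Polynomial.fit(xs, zs, degree)

# depth profile across a nose-like bump: depth rises toward the centre
xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
zs = np.array([0.0, 0.6, 1.0, 0.6, 0.0])
f = fit_relationship(xs, zs)

# the learned expression can be carried over to the matching points of a new
# 3D image to predict, or constrain, their depth values
print(f(0.5))       # ~0.84, interpolated from the learned relationship
print(f.convert())  # the same polynomial in the plain power basis
```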
Further, each prestored image in the prestored image library is divided into several regions, each region being provided with the functional expressions that describe the relationships between digital points within that region, and the deformation module further includes a partition submodule.
The partition submodule is used to divide the 3D image into regions, through the artificial-intelligence deep-learning algorithm, according to the positions of the regions on the prestored image;
for a target region on the 3D image, the processing submodule is used to set, through the artificial-intelligence deep-learning algorithm using the functional expressions on the prestored image, the functional expressions between the digital points of the 3D image within that target region.
Because the connections between imaging points are extremely complex, computing which other imaging points are affected when a single imaging point moves would require considering the whole image, which is very expensive. Imaging points whose mutual influence is most apparent are therefore grouped into the same region; cutting the connections between regions reduces the amount of computation.
The deformation module of this embodiment is also used to divide regions.
For a target prestored image in the prestored image library, the partition submodule is used to obtain the functional expressions between adjacent digital points in the target prestored image, the functional expressions being polynomial functions;
the partition submodule is also used to obtain, through artificial-intelligence deep learning, several candidate dividing lines that cross adjacent digital points, to compute for each dividing line the sum of the highest-order degrees of the polynomial functions of all the adjacent digital points it crosses, and to partition the target prestored image into regions along the dividing lines whose degree sums are below a preset value.
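A minimal sketch of the dividing-line criterion just described follows, under stated assumptions: the adjacency between digital points is given as edges that each carry the highest-order degree of the polynomial fitted between the two points, and a candidate dividing line is represented simply as the set of edges it crosses. A line is accepted as a region boundary when its degree sum is below the preset value. The graph and the candidate lines are toy data rather than the learned proposals the patent envisages.

```python
from typing import Dict, FrozenSet, List

Edge = FrozenSet[str]  # an edge between two adjacent digital points

def degree_sum(line: List[Edge], edge_degree: Dict[Edge, int]) -> int:
    """Sum of the highest-order degrees of the adjacent-point polynomials
    crossed by one candidate dividing line."""
    return sum(edge_degree[e] for e in line)

def accept_lines(candidates: List[List[Edge]],
                 edge_degree: Dict[Edge, int],
                 preset: int) -> List[List[Edge]]:
    """Keep the dividing lines whose degree sum is below the preset value;
    cutting along them separates only weakly coupled groups of points."""
    return [line for line in candidates if degree_sum(line, edge_degree) < preset]

# toy adjacency: low degrees (weak relations) across the good boundary,
# high degrees (strong relations) inside each would-be region
edge_degree = {
    frozenset({"a", "b"}): 3, frozenset({"b", "c"}): 1,
    frozenset({"c", "d"}): 3, frozenset({"b", "d"}): 1,
}
candidates = [
    [frozenset({"b", "c"}), frozenset({"b", "d"})],  # crosses only weak relations
    [frozenset({"a", "b"}), frozenset({"c", "d"})],  # crosses strong relations
]
print(accept_lines(candidates, edge_degree, preset=4))  # only the first line survives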
After receiving a regulating command, the control submodule adjusts the shape of the perceptual coding. Specifically:
the control submodule is used to obtain a regulating command for adjusting a target digital point on the 3D image;
the control submodule is also used to move the target digital point in a target direction by a target length according to the regulating command;
the control submodule is also used to move, using the tensor model of the 3D image, each surrounding digital point around the target digital point in the target direction by a computed length, where the computed length of each surrounding digital point is inversely proportional to its distance from the target digital point and is smaller than the target length.
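The move rule above, surrounding points moving by lengths inversely proportional to their distance from the target point and never reaching the target length, can be written down directly. In the sketch below the computed length is alpha * target_length * d_min / d_i with alpha < 1, and only points within a radius are dragged; the scaling constant alpha and the radius are assumptions, since the patent fixes only the inverse proportionality and the upper bound.

```python
import numpy as np

def deform_point_cloud(points, target_idx, direction, target_length,
                       radius=1.0, alpha=0.9):
    """Move the target digital point by target_length along `direction` and
    drag its surrounding points after it.  Each surrounding point moves a
    computed length inversely proportional to its distance from the target
    point and (because alpha < 1) strictly smaller than target_length.

    points: (N, 3) float array, modified in place and returned.
    """
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)

    target = points[target_idx].copy()
    dists = np.linalg.norm(points - target, axis=1)
    neighbours = np.where((dists > 0) & (dists <= radius))[0]

    points[target_idx] += target_length * direction                 # the commanded move
    if neighbours.size:
        d_min = dists[neighbours].min()
        moves = alpha * target_length * d_min / dists[neighbours]   # ~ 1/distance, < target_length
        points[neighbours] += moves[:, None] * direction
    return points

# usage: nudge the point at index 0 upward; nearby points follow, distant ones stay
cloud = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
deform_point_cloud(cloud, target_idx=0, direction=[0, 0, 1], target_length=0.1)
print(cloud)   # the point at x=2.0 lies outside the radius and does not move
```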
Embodiment 2
This embodiment is substantially the same as Embodiment 1, with the following differences.
The deformation module further includes an adjusting submodule.
The matching submodule of the deformation module is used to match the 3D image with a target image in the prestored image library;
the adjusting submodule is used to adjust, through the artificial-intelligence deep-learning algorithm, the spatial shape of the target image according to the spatial shape of the 3D image;
the processing submodule is used to take the target image after its spatial shape has been adjusted as the 3D image whose digital points are provided with the tensor model.
The adjusting submodule adjusts the 3D image in the following way:
the adjusting submodule is used to place the 3D image so that it overlaps the target image and to obtain the distance from each digital point on the target image to the 3D image;
the adjusting submodule is also used to take the digital point with the largest distance as a control point and move the control point toward the 3D image by a target length;
the adjusting submodule is also used to move, using the tensor model of the target image, each surrounding control point around the control point toward the 3D image by a computed length, where the computed length of each surrounding control point is inversely proportional to its distance from the control point and is smaller than the target length.
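A sketch of this adjustment step is given below, interpreting "the distance from a digital point to the 3D image" as the distance to the nearest point of the captured cloud; the neighbourhood radius and the alpha factor that keeps every computed length below the target length are assumptions, as in the earlier deformation sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def adjust_template(template_pts, captured_pts, target_length,
                    radius=1.0, alpha=0.9):
    """One adjustment step that pulls the template (target image) toward the
    captured 3D point cloud.

    The template point farthest from the captured cloud becomes the control
    point and moves target_length toward its nearest captured point; the
    surrounding template points follow by lengths inversely proportional to
    their distance from the control point, all strictly below target_length.

    template_pts: (N, 3) float array, modified in place and returned.
    """
    tree = cKDTree(captured_pts)
    dists, nearest = tree.query(template_pts, k=1)   # distance of each template point to the cloud

    ctrl = int(np.argmax(dists))                     # farthest point = control point
    direction = captured_pts[nearest[ctrl]] - template_pts[ctrl]
    direction /= np.linalg.norm(direction)

    ctrl_dists = np.linalg.norm(template_pts - template_pts[ctrl], axis=1)
    neighbours = np.where((ctrl_dists > 0) & (ctrl_dists <= radius))[0]

    template_pts[ctrl] += target_length * direction
    if neighbours.size:
        d_min = ctrl_dists[neighbours].min()
        moves = alpha * target_length * d_min / ctrl_dists[neighbours]
        template_pts[neighbours] += moves[:, None] * direction
    return template_pts
```

Repeating the step closes the largest template-to-cloud gap first, which is what choosing the farthest point as the control point implies.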
Although specific embodiments of the present invention have been described above, those skilled in the art will appreciate that they are merely illustrative and that the protection scope of the present invention is defined by the appended claims. Those skilled in the art may make various changes and modifications to these embodiments without departing from the principle and substance of the present invention, and all such changes and modifications fall within the protection scope of the present invention.

Claims (10)

1. A point cloud depth perception coding engine, characterized in that the point cloud depth perception coding engine includes a semantic module and a deformation module;
for a 3D image containing a digital point cloud, the semantic module is used to perceive, through an artificial-intelligence deep-learning algorithm, the semantic information of the digital points on the 3D image from the semantic information of the digital points on a prestored image;
the deformation module is used to generate, through the artificial-intelligence deep-learning algorithm using the tensor model on the prestored image, a 3D image whose digital points are provided with the tensor model;
wherein a 3D image whose digital point cloud carries both the semantic information and the tensor model is the perceptual coding.
2. The point cloud depth perception coding engine as claimed in claim 1, characterized in that the semantic module includes a matching submodule and a processing submodule, and the point cloud depth perception coding engine further includes an acquisition module;
the acquisition module is used to obtain a 3D image;
the matching submodule is used to match the 3D image against a prestored image library, in which the imaging points of each prestored image are labeled with semantic information;
the processing submodule is used to perceive, through the artificial-intelligence deep-learning algorithm, the semantic information of the imaging points on the 3D image from the semantic information of the imaging points on the prestored image.
3. The point cloud depth perception coding engine as claimed in claim 2, characterized in that the semantic module includes a generation submodule;
the generation submodule is used to obtain target images and generate the prestored image library, where each target image is an accurate image obtained by an industrial 3D video camera and the target image points on the accurate image are labeled with semantic information.
4. The point cloud depth perception coding engine as claimed in claim 1, characterized in that the point cloud depth perception coding engine further includes an acquisition module, and the deformation module further includes a matching submodule, a processing submodule and a control submodule;
the acquisition module is used to obtain a 3D image;
the matching submodule is used to match the 3D image against a prestored image library, in which each prestored image is provided with a tensor model for adjusting the shape of the digital points of that prestored image;
the processing submodule is used to generate, through the artificial-intelligence deep-learning algorithm using the tensor model on the prestored image, a 3D image whose digital points are provided with the tensor model;
the control submodule is used to control the digital points on the 3D image to deform through the tensor model.
5. The point cloud depth perception coding engine as claimed in claim 4, characterized in that the tensor model is a set of functional expressions, defined on the prestored image, that describe the relationships between digital points, and the processing submodule is used to set, through the artificial-intelligence deep-learning algorithm using the functional expressions on the prestored image, the functional expressions between the digital points on the 3D image.
6. The point cloud depth perception coding engine as claimed in claim 5, characterized in that each prestored image in the prestored image library is divided into several regions, each region being provided with functional expressions that describe the relationships between the digital points within that region, and the deformation module further includes a partition submodule;
the partition submodule is used to divide the 3D image into regions, through the artificial-intelligence deep-learning algorithm, according to the positions of the regions on the prestored image;
for a target region on the 3D image, the processing submodule is used to set, through the artificial-intelligence deep-learning algorithm using the functional expressions on the prestored image, the functional expressions between the digital points of the 3D image within that target region.
7. The point cloud depth perception coding engine as claimed in claim 6, characterized in that,
for a target prestored image in the prestored image library, the partition submodule is used to obtain the functional expressions between adjacent digital points in the target prestored image, the functional expressions being polynomial functions;
the partition submodule is also used to obtain, through artificial-intelligence deep learning, several candidate dividing lines that cross adjacent digital points, to compute for each dividing line the sum of the highest-order degrees of the polynomial functions of all the adjacent digital points it crosses, and to partition the target prestored image into regions along the dividing lines whose degree sums are below a preset value.
8. The point cloud depth perception coding engine as claimed in claim 4, characterized in that the deformation module further includes an adjusting submodule;
the matching submodule is used to match the 3D image with a target image in the prestored image library;
the adjusting submodule is used to adjust, through the artificial-intelligence deep-learning algorithm, the spatial shape of the target image according to the spatial shape of the 3D image;
the processing submodule is used to take the target image after its spatial shape has been adjusted as the 3D image whose digital points are provided with the tensor model.
9. The point cloud depth perception coding engine as claimed in claim 8, characterized in that,
the adjusting submodule is used to place the 3D image so that it overlaps the target image and to obtain the distance from each digital point on the target image to the 3D image;
the adjusting submodule is also used to take the digital point with the largest distance as a control point and move the control point toward the 3D image by a target length;
the adjusting submodule is also used to move, using the tensor model of the target image, each surrounding control point around the control point toward the 3D image by a computed length, where the computed length of each surrounding control point is inversely proportional to its distance from the control point and is smaller than the target length.
10. The point cloud depth perception coding engine as claimed in claim 4, characterized in that,
the control submodule is used to obtain a regulating command for adjusting a target digital point on the 3D image;
the control submodule is also used to move the target digital point in a target direction by a target length according to the regulating command;
the control submodule is also used to move, using the tensor model of the 3D image, each surrounding digital point around the target digital point in the target direction by a computed length, where the computed length of each surrounding digital point is inversely proportional to its distance from the target digital point and is smaller than the target length.
CN201811403521.6A 2018-11-23 2018-11-23 Point cloud depth perception coding engine device Active CN109657559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811403521.6A CN109657559B (en) 2018-11-23 2018-11-23 Point cloud depth perception coding engine device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811403521.6A CN109657559B (en) 2018-11-23 2018-11-23 Point cloud depth perception coding engine device

Publications (2)

Publication Number Publication Date
CN109657559A (en) 2019-04-19
CN109657559B CN109657559B (en) 2023-02-07

Family

ID=66112269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811403521.6A Active CN109657559B (en) 2018-11-23 2018-11-23 Point cloud depth perception coding engine device

Country Status (1)

Country Link
CN (1) CN109657559B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115412713A (en) * 2021-05-26 2022-11-29 荣耀终端有限公司 Method and device for predicting, encoding and decoding point cloud depth information


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093191A (en) * 2012-12-28 2013-05-08 中电科信息产业有限公司 Object recognition method with three-dimensional point cloud data and digital image data combined
CN103744086A (en) * 2013-12-23 2014-04-23 北京建筑大学 High-precision registration method for ground laser radar and close-range photography measurement data
CN104298995A (en) * 2014-05-06 2015-01-21 深圳市唯特视科技有限公司 Three-dimensional face identification device and method based on three-dimensional point cloud
CN104504410A (en) * 2015-01-07 2015-04-08 深圳市唯特视科技有限公司 Three-dimensional face recognition device and method based on three-dimensional point cloud
CN108229366A (en) * 2017-12-28 2018-06-29 北京航空航天大学 Deep learning vehicle-installed obstacle detection method based on radar and fusing image data
CN108171217A (en) * 2018-01-29 2018-06-15 深圳市唯特视科技有限公司 A kind of three-dimension object detection method based on converged network
CN108833890A (en) * 2018-08-08 2018-11-16 盎锐(上海)信息科技有限公司 Data processing equipment and method based on camera


Also Published As

Publication number Publication date
CN109657559B (en) 2023-02-07

Similar Documents

Publication Publication Date Title
EP3035681B1 (en) Image processing method and apparatus
US9438878B2 (en) Method of converting 2D video to 3D video using 3D object models
US20170085863A1 (en) Method of converting 2d video to 3d video using machine learning
CN108124509B (en) Image display method, wearable intelligent device and storage medium
CN104112275A (en) Image segmentation method and device
WO2014008320A1 (en) Systems and methods for capture and display of flex-focus panoramas
KR101950934B1 (en) Virtual reality image providing device, method and program for adjusting virtual space size to provide stereopsis
CN105263011B (en) Multi-view image shows equipment and its multi-view image display methods
EP3827374A1 (en) Methods and apparatuses for corner detection
CN109636926B (en) 3D global free deformation method and device
CN109657559A (en) Point cloud depth degree perceptual coding engine
CN105072429B (en) A kind of projecting method and device
CN111596763B (en) Control method and device of virtual reality equipment
CN111161399B (en) Data processing method and assembly for generating three-dimensional model based on two-dimensional image
CN108156442A (en) A kind of three-dimensional imaging processing method, device and electronic equipment
CN108093243A (en) A kind of three-dimensional imaging processing method, device and stereoscopic display device
WO2014119555A1 (en) Image processing device, display device and program
Liu et al. Stereo-based bokeh effects for photography
CN109657702B (en) 3D depth semantic perception method and device
US9197874B1 (en) System and method for embedding stereo imagery
US9432654B2 (en) Modifying fusion offset data in sequential stereoscopic image frames
CN102595167B (en) Depth uniformization method and device for 2D/3D video conversion
KR20170142896A (en) Method and apparatus for providing personal 3-dimensional image using convergence matching algorithm
CN106324836A (en) Novel 3D virtual reality glasses display system and working process thereof
CN109448066A (en) The ultimate attainment compression algorithm of 3D data and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
Effective date of registration: 20230406
Address after: 518000 1101-g1, BIC science and technology building, No. 9, scientific research road, Maling community, Yuehai street, Nanshan District, Shenzhen, Guangdong Province
Patentee after: Shenzhen Yujing Information Technology Co.,Ltd.
Address before: 201703 No.206, building 1, no.3938 Huqingping Road, Qingpu District, Shanghai
Patentee before: UNRE (SHANGHAI) INFORMATION TECHNOLOGY Co.,Ltd.