CN109963135A - RGB-D-based depth network camera device and method - Google Patents
RGB-D-based depth network camera device and method
- Publication number
- CN109963135A (application number CN201711414598.9A)
- Authority
- CN
- China
- Prior art keywords
- depth
- data stream
- video data
- speckle
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
All classifications are under H (Electricity), H04 (Electric communication technique), H04N (Pictorial communication, e.g. television):

- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
  - H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
    - H04N13/106—Processing image signals
      - H04N13/128—Adjusting depth or disparity
      - H04N13/139—Format conversion, e.g. of frame-rate or size
      - H04N13/15—Processing image signals for colour aspects of image signals
      - H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
      - H04N13/167—Synchronising or controlling image signals
    - H04N13/194—Transmission of image signals
  - H04N13/20—Image signal generators
    - H04N13/204—Image signal generators using stereoscopic image cameras
    - H04N13/257—Colour aspects
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
This disclosure relates to an RGB-D-based depth network camera device and method. The device projects a speckle pattern onto a target object or scene through a speckle-encoding projector, and uses an image sensor to capture the speckle pattern reflected by the target object or scene, obtaining a speckle-encoded image that forms an input video data stream, which is output to a depth calculation module. The depth calculation module, in combination with a reference speckle image, performs depth calculation, depth-to-grayscale mapping, video scaling, and data format conversion on the input video data to obtain an initial video data stream. For each frame of the initial video data stream, the device inserts the frame behind the corresponding frame in a data stream carrying RGB information, then compresses the resulting data stream and transmits the compressed data stream to a terminal for three-dimensional imaging. The disclosed device and method can quickly and accurately obtain depth information of a target object, and can conveniently perform three-dimensional imaging in real time.
Description
Technical field
This disclosure relates to the technical fields of image processing, machine vision, and video surveillance, and in particular to an RGB-D-based depth network camera device and method.
Background Art

Existing depth acquisition devices have limitations in real-time performance and ease of operation. Binocular stereo camera technology is mature, but it cannot generate and output depth map sequences in real time. In 2013, Apple Inc. of the United States filed a patent application, "Depth Perception Device and System", in which a laser projects a pattern, a camera captures the resulting speckle pattern, and the depth distance is then calculated; the technology may be applied as a virtual-interaction input device in its future innovative products. In January 2014, Intel released an embedded 3D depth camera, stating that "the virtual world is coming infinitely close to the real world; interaction will become more natural, intuitive and immersive". Developing a device that can quickly and accurately obtain depth information of a target object has become a hot and difficult topic of research in related industries at home and abroad.
Summary of the invention
In view of this, the present disclosure provides an RGB-D-based depth network camera device and method for convenient, real-time three-dimensional imaging. The technical solution of the disclosure is as follows.
In one aspect, the present disclosure provides an RGB-D-based depth network camera device. The device includes a speckle-encoding projector, an image sensor, a depth calculation module, a network transmission module, and a terminal.

The speckle-encoding projector projects a speckle pattern onto a target object or scene.

The image sensor captures the speckle pattern reflected by the target object or scene to obtain a speckle-encoded image, forms an input video data stream, and outputs the input video data stream to the depth calculation module.

The depth calculation module, in combination with a reference speckle image, performs depth calculation, depth-to-grayscale mapping, video scaling, and data format conversion on the input video data to obtain an initial video data stream.

The network transmission module inserts each frame of the initial video data stream behind the corresponding frame in a data stream carrying RGB information, then compresses the resulting data stream and transmits the compressed data stream to the terminal for three-dimensional imaging.

The depth-to-grayscale mapping maps the calculated depth information to grayscale values according to a depth-grayscale mapping relationship, so that the depth map represents the distance information between the projection space and the target object.

The video scaling scales the depth map proportionally.

The format conversion performs data format conversion on the scaled depth map so that it is suitable for transmission over a DTN network.
In the device, the image sensor outputs the input video data stream to the depth calculation module through a MIPI or parallel interface.

In the device, the network transmission module transmits over RJ45 or WiFi as the medium.

In the device, transmission in the network transmission module is realized as follows: for every K data packets, one coding packet is generated and transmitted to the intelligent terminal through RJ45 or WiFi, where K is a preset value.

In the device, before performing three-dimensional imaging, the network transmission module decompresses, format-converts, and inverse depth-to-grayscale maps the received data stream to recover the depth information and RGB information.
In another aspect, the present disclosure provides an RGB-D-based depth network camera method, including the following steps:

S100: projecting a speckle pattern onto a target object or scene through a speckle-encoding projector;

S200: capturing, by an image sensor, the speckle pattern reflected by the target object or scene to obtain a speckle-encoded image and form an input video data stream;

S300: in combination with a reference speckle image, performing depth calculation, depth-to-grayscale mapping, video scaling, and data format conversion on the input video data to obtain an initial video data stream;

S400: for each frame of the initial video data stream, inserting the frame behind the corresponding frame in a data stream carrying RGB information, then compressing the resulting data stream and transmitting the compressed data stream to a terminal for three-dimensional imaging;

wherein the depth-to-grayscale mapping maps the calculated depth information to grayscale values according to a depth-grayscale mapping relationship, so that the depth map represents the distance information between the projection space and the target object; the video scaling scales the depth map proportionally; and the format conversion performs data format conversion on the scaled depth map so that it is suitable for transmission over a DTN network.

In the method, the image sensor outputs the input video data stream through a MIPI or parallel interface.

In the method, the transmission in step S400 uses RJ45 or WiFi as the medium.

In the method, the transmission in step S400 is realized as follows: for every K data packets, one coding packet is generated and transmitted to the intelligent terminal through RJ45 or WiFi, where K is a preset value.

In the method, in step S400, before performing three-dimensional imaging, the received data stream is decompressed, format-converted, and inverse depth-to-grayscale mapped to recover the depth information and RGB information.
Compared with the prior art, the present disclosure has the following beneficial effects: the disclosed device and method can quickly and accurately obtain depth information of a target object, and can conveniently perform three-dimensional imaging in real time.
Brief Description of the Drawings

Fig. 1 is a structural block diagram of the RGB-D-based depth network camera device of the present invention;

Fig. 2 is a flow chart of the method according to an embodiment of the present invention.
Specific Embodiments

In one embodiment, the present disclosure provides an RGB-D-based depth network camera device, as shown in Fig. 1. The device includes a speckle-encoding projector, an image sensor, a depth calculation module, a network transmission module, and a terminal. The speckle-encoding projector projects a speckle pattern onto a target object or scene. The image sensor captures the speckle pattern reflected by the target object or scene, obtains a speckle-encoded image, forms an input video data stream, and outputs the input video data stream to the depth calculation module. The depth calculation module, in combination with a reference speckle image, performs depth calculation, depth-to-grayscale mapping, video scaling, and data format conversion on the input video data to obtain an initial video data stream. The network transmission module inserts each frame of the initial video data stream behind the corresponding frame in a data stream carrying RGB information, then compresses the resulting data stream and transmits the compressed data stream to the terminal for three-dimensional imaging. The depth-to-grayscale mapping maps the calculated depth information to grayscale values according to a depth-grayscale mapping relationship, so that the depth map represents the distance information between the projection space and the target object. The video scaling scales the depth map proportionally. The format conversion performs data format conversion on the scaled depth map so that it is suitable for transmission over a DTN network.
In this embodiment, the speckle-encoding projector is powered on so that its power supply drives it to operate normally, and the projector emits red light. The projector emits a collimated laser beam that is scattered by a diffractive optical element (DOE), thereby projecting the required speckle pattern onto the target object or scene.
In this embodiment, the image sensor receives the speckle image projected by the speckle projector. A photosensitive element inside the image sensor converts the received speckle pattern from an optical signal to an electrical signal, which an internal processor then converts into data of a specific format and sends to the depth calculation module as the input video data stream. In the device, the image sensor outputs the input video data stream to the depth calculation module through a MIPI or parallel interface.
In this embodiment, the network transmission module jointly compresses the initial video data stream and the RGB data stream according to an image compression coding standard: the depth data corresponding to the first frame of the RGB data stream is inserted after that frame, followed by the next frame of the RGB data stream, and the frame-by-frame compression coding proceeds in this manner. The jointly compressed data stream is then transmitted to the intelligent terminal over RJ45 or WiFi according to the improved LTP network transmission protocol of the present invention. (The present invention uses a linear network coding algorithm to propose a low-complexity improvement strategy for the LTP transport protocol with network coding suitable for space DTN networks; the strategy can effectively improve system throughput, reduce the number of retransmission requests, and improve the efficiency of transmitting the compressed data stream to the intelligent terminal.) In the device, before performing three-dimensional imaging, the network transmission module decompresses, format-converts, and inverse depth-to-grayscale maps the received data stream to recover the depth information and RGB information.
Another embodiment provides an RGB-D-based depth network camera method, as shown in Fig. 2. The method includes the following steps:

S100: projecting a speckle pattern onto a target object or scene through a speckle-encoding projector;

S200: capturing, by an image sensor, the speckle pattern reflected by the target object or scene to obtain a speckle-encoded image and form an input video data stream;

S300: in combination with a reference speckle image, performing depth calculation, depth-to-grayscale mapping, video scaling, and data format conversion on the input video data to obtain an initial video data stream;

S400: for each frame of the initial video data stream, inserting the frame behind the corresponding frame in a data stream carrying RGB information, then compressing the resulting data stream and transmitting the compressed data stream to a terminal for three-dimensional imaging;

wherein the depth-to-grayscale mapping maps the calculated depth information to grayscale values according to a depth-grayscale mapping relationship, so that the depth map represents the distance information between the projection space and the target object; the video scaling scales the depth map proportionally; and the format conversion performs data format conversion on the scaled depth map so that it is suitable for transmission over a DTN network.

In the method, the image sensor outputs the input video data stream through a MIPI or parallel interface.

In the method, the transmission in step S400 uses RJ45 or WiFi as the medium.

In the method, the transmission in step S400 is realized as follows: for every K data packets, one coding packet is generated and transmitted to the intelligent terminal through RJ45 or WiFi, where K is a preset value.
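A minimal sketch of the rule above, one coding packet for every K data packets, assuming the coding packet is a bitwise XOR of the group (a simple instance of linear network coding; the patent does not specify the exact code used):

```python
def add_coding_packets(packets, k):
    """After every k data packets, append one coding packet that is the
    XOR of those k packets (a linear network code over GF(2)). The
    receiver can rebuild any single lost packet of a group from the
    k - 1 surviving data packets plus the coding packet."""
    out = []
    for i in range(0, len(packets), k):
        group = packets[i:i + k]
        out.extend(group)
        if len(group) == k:                  # only full groups get a parity packet
            parity = bytes(len(group[0]))
            for p in group:
                parity = bytes(a ^ b for a, b in zip(parity, p))
            out.append(parity)
    return out

data = [b"\x01\x02", b"\x03\x04", b"\x05\x06", b"\x07\x08"]
coded = add_coding_packets(data, k=2)
# coded[2] == b"\x02\x06" (XOR of the first two packets)
```

The redundancy added every K packets is what lets the receiver avoid a retransmission request for an isolated loss, the throughput benefit claimed for the improved LTP strategy.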
In the method, in step S400, before performing three-dimensional imaging, the received data stream is decompressed, format-converted, and inverse depth-to-grayscale mapped to recover the depth information and RGB information.
In the device or method of any of the foregoing aspects, the depth calculation includes the following steps:

preprocessing the coded image;

taking a pixel from the coded image, forming a feature block centered on that pixel, searching within the reference speckle image, and obtaining the matching block that best matches the feature block according to a similarity criterion;

obtaining the displacement offset between the feature block and the matching block, denoted Δm;

according to the displacement offset Δm, in combination with the known reference speckle distance parameter d, the baseline distance s between the laser speckle projector and the IR camera, the IR camera focal length f, and the pixel pitch μ, calculating the distance d′ of the pixel in real space according to the following formula, thereby obtaining the depth map:
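The formula itself does not survive in this text. For speckle-based structured light with these parameters (reference distance d, baseline s, focal length f, pixel pitch μ, disparity Δm), a commonly used triangulation relation (an assumption here, not necessarily the patent's exact equation) is 1/d′ = 1/d + Δm·μ/(f·s):

```python
def disparity_to_depth(delta_m, d_ref, s, f, mu):
    """Classic structured-light triangulation (assumed form; the patent's
    exact formula is not reproduced in this text).

    delta_m : block-matching disparity in pixels (signed)
    d_ref   : distance at which the reference speckle image was captured
    s       : baseline between laser projector and IR camera
    f       : IR camera focal length
    mu      : physical pixel pitch

    A similar-triangles argument gives 1/d' = 1/d_ref + delta_m*mu/(f*s),
    so positive disparity means the surface is nearer than the reference plane.
    """
    return 1.0 / (1.0 / d_ref + delta_m * mu / (f * s))

# At zero disparity the point lies on the reference plane:
# disparity_to_depth(0, 1.0, 0.075, 0.004, 1e-5) -> 1.0
```

The baseline, focal length, and pixel-pitch values shown are placeholder magnitudes, not device parameters from the patent.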
In the device or method of any of the foregoing aspects, the depth-grayscale mapping relationship is nonlinear: the closer the depth distance, the larger the grayscale value, and the farther the depth distance, the smaller the grayscale value.
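A sketch of one such nonlinear mapping, together with its inverse for the receiving terminal. The inverse-proportional curve and the working range of 400 to 8000 mm are assumptions, since the patent does not give the exact mapping function:

```python
def depth_to_gray(depth_mm, d_min=400.0, d_max=8000.0):
    """Map depth to an 8-bit grayscale value so nearer points are brighter.
    Uses a 1/depth curve, one simple nonlinear choice satisfying the
    near-bright/far-dark property; the working range is an assumption."""
    depth_mm = min(max(depth_mm, d_min), d_max)          # clamp to range
    t = (1.0 / depth_mm - 1.0 / d_max) / (1.0 / d_min - 1.0 / d_max)
    return round(255 * t)

def gray_to_depth(gray, d_min=400.0, d_max=8000.0):
    """Inverse mapping, as applied after decompression at the terminal."""
    t = gray / 255.0
    inv = t * (1.0 / d_min - 1.0 / d_max) + 1.0 / d_max
    return 1.0 / inv

# depth_to_gray(400.0) -> 255 (nearest, brightest); depth_to_gray(8000.0) -> 0
```

A 1/depth curve also concentrates grayscale resolution at close range, where depth accuracy matters most, which is one practical reason to prefer a nonlinear mapping over a linear one.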
In the device or method of any of the foregoing aspects, scaling the depth map proportionally includes the following steps:

S301: receiving zoom control parameters over an IIC bus, controlling the scaling ratios for horizontal reduction, vertical scaling, and horizontal magnification, and controlling the generation of the scaled line/field synchronization signals;

S302: generating the scaled line/field synchronization signals from the input line/field synchronization signals in combination with the zoom control parameters;

S303: horizontally reducing the depth map by bilinear or bicubic interpolation;

S304: providing multiple vertically aligned register groups, and using the register groups to scale the horizontally reduced depth map in the vertical direction;

S305: horizontally magnifying the vertically scaled depth map by bilinear or bicubic interpolation.
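The bilinear interpolation of steps S303 and S305 reduces, per row, to linear resampling between the two nearest source pixels. A minimal sketch of that core operation (illustrative only, not the patent's register-based hardware pipeline):

```python
def resize_row_bilinear(row, new_width):
    """Resample one row of a depth map to new_width samples using linear
    interpolation between the two nearest source pixels: the 1-D core of
    bilinear scaling used for the horizontal reduce/magnify steps."""
    old_width = len(row)
    if new_width == 1:
        return [float(row[0])]
    out = []
    scale = (old_width - 1) / (new_width - 1)
    for i in range(new_width):
        x = i * scale                    # position in source coordinates
        x0 = int(x)
        x1 = min(x0 + 1, old_width - 1)  # clamp at the right edge
        frac = x - x0
        out.append(row[x0] * (1 - frac) + row[x1] * frac)
    return out

# resize_row_bilinear([0, 10], 3) -> [0.0, 5.0, 10.0]
```

Applying the same resampler column-wise after the horizontal pass gives the full 2-D bilinear scaling; the hardware version in S304 buffers whole lines in register groups to do the vertical pass without re-reading the frame.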
In the device or method of any of the foregoing aspects, the speckle pattern satisfies the property that, within a certain range in the horizontal or vertical direction, its features do not repeat or are randomly distributed.

In the device or method of any of the foregoing aspects, the image sensor includes a filtering device for filtering out light outside the receiving wavelength range.
Although embodiments of the present invention have been described above in conjunction with the accompanying drawings, the invention is not limited to the specific embodiments and application fields described; the embodiments are merely illustrative and instructive, not restrictive. Those skilled in the art, under the guidance of this specification and without departing from the scope protected by the claims of the present invention, may devise many other forms, all of which fall within the protection of the present invention.
Claims (10)
1. An RGB-D-based depth network camera device, characterized in that:

the device includes a speckle-encoding projector, an image sensor, a depth calculation module, a network transmission module, and a terminal;

the speckle-encoding projector projects a speckle pattern onto a target object or scene;

the image sensor captures the speckle pattern reflected by the target object or scene to obtain a speckle-encoded image, forms an input video data stream, and outputs the input video data stream to the depth calculation module;

the depth calculation module, in combination with a reference speckle image, performs depth calculation, depth-to-grayscale mapping, video scaling, and data format conversion on the input video data to obtain an initial video data stream;

the network transmission module inserts each frame of the initial video data stream behind the corresponding frame in a data stream carrying RGB information, then compresses the resulting data stream and transmits the compressed data stream to the terminal for three-dimensional imaging;

the depth-to-grayscale mapping maps the calculated depth information to grayscale values according to a depth-grayscale mapping relationship, so that the depth map represents the distance information between the projection space and the target object;

the video scaling scales the depth map proportionally;

the format conversion performs data format conversion on the scaled depth map so that it is suitable for transmission over a DTN network.
2. The device according to claim 1, characterized in that: the image sensor outputs the input video data stream to the depth calculation module through a MIPI or parallel interface.

3. The device according to claim 1, characterized in that: the network transmission module transmits over RJ45 or WiFi as the medium.

4. The device according to claim 1, characterized in that the transmission in the network transmission module is realized as follows: for every K data packets, one coding packet is generated and transmitted to the intelligent terminal through RJ45 or WiFi, where K is a preset value.

5. The device according to claim 1, characterized in that, before performing three-dimensional imaging, the network transmission module decompresses, format-converts, and inverse depth-to-grayscale maps the received data stream to obtain the depth information and RGB information.
6. An RGB-D-based depth network camera method, characterized in that the method includes the following steps:

S100: projecting a speckle pattern onto a target object or scene through a speckle-encoding projector;

S200: capturing, by an image sensor, the speckle pattern reflected by the target object or scene to obtain a speckle-encoded image and form an input video data stream;

S300: in combination with a reference speckle image, performing depth calculation, depth-to-grayscale mapping, video scaling, and data format conversion on the input video data to obtain an initial video data stream;

S400: for each frame of the initial video data stream, inserting the frame behind the corresponding frame in a data stream carrying RGB information, then compressing the resulting data stream and transmitting the compressed data stream to a terminal for three-dimensional imaging;

wherein:

the depth-to-grayscale mapping maps the calculated depth information to grayscale values according to a depth-grayscale mapping relationship, so that the depth map represents the distance information between the projection space and the target object;

the video scaling scales the depth map proportionally;

the format conversion performs data format conversion on the scaled depth map so that it is suitable for transmission over a DTN network.
7. The method according to claim 6, characterized in that the image sensor outputs the input video data stream through a MIPI or parallel interface.

8. The method according to claim 6, characterized in that the transmission in step S400 uses RJ45 or WiFi as the medium.

9. The method according to claim 6, characterized in that the transmission in step S400 is realized as follows: for every K data packets, one coding packet is generated and transmitted to the intelligent terminal through RJ45 or WiFi, where K is a preset value.

10. The method according to claim 6, characterized in that, in step S400, before performing three-dimensional imaging, the received data stream is decompressed, format-converted, and inverse depth-to-grayscale mapped to obtain the depth information and RGB information.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711414598.9A (published as CN109963135A) | 2017-12-22 | 2017-12-22 | RGB-D-based depth network camera device and method |
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711414598.9A (published as CN109963135A) | 2017-12-22 | 2017-12-22 | RGB-D-based depth network camera device and method |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN109963135A | 2019-07-02 |
Family

ID=67020307

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201711414598.9A (status: Pending) | RGB-D-based depth network camera device and method | 2017-12-22 | 2017-12-22 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN109963135A (en) |
Citations (12)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102057365A | 2008-07-09 | 2011-05-11 | PrimeSense Ltd. | Integrated processor for 3D mapping |
| US20110317912A1 | 2010-06-25 | 2011-12-29 | Samsung Electronics Co., Ltd. | Method, apparatus and computer-readable medium coding and decoding depth image using color image |
| CN102710950A | 2012-05-31 | 2012-10-03 | Harbin Institute of Technology | System and method for transmitting 3D (three-dimensional) video by one-way television signal |
| CN102769749A | 2012-06-29 | 2012-11-07 | Ningbo University | Post-processing method for depth image |
| CN102970554A | 2011-08-30 | 2013-03-13 | Himax Technologies, Inc. | System and method of handling data frames for stereoscopic display |
| CN103650515A | 2011-07-14 | 2014-03-19 | Qualcomm Inc. | Wireless 3D streaming server |
| CN103841406A | 2014-02-13 | 2014-06-04 | Xi'an Jiaotong University | Plug and play depth photographic device |
| US20140286439A1 | 2011-09-30 | 2014-09-25 | LG Innotek Co., Ltd. | Apparatus for transmitting image data |
| CN105120257A | 2015-08-18 | 2015-12-02 | Ningbo Yingxin Information Technology Co., Ltd. | Vertical depth sensing device based on structured light coding |
| CN105337708A | 2015-09-18 | 2016-02-17 | Harbin Institute of Technology Shenzhen Graduate School | DTN network data transmission method using bundle block aggregation on dual-hop asymmetric channel |
| CN205657802U | 2016-04-21 | 2016-10-19 | Ningbo Yingxin Information Technology Co., Ltd. | Three-dimensional depth perception device |
| CN106454204A | 2016-10-18 | 2017-02-22 | Sichuan University | Naked eye stereo video conference system based on network depth camera |
Filing history: 2017-12-22, application CN201711414598.9A filed in China; published as CN109963135A (en); status: active, Pending.
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111815695A | 2020-07-09 | 2020-10-23 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Depth image acquisition method and device, mobile terminal and storage medium |
| CN111815695B | 2020-07-09 | 2024-03-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Depth image acquisition method and device, mobile terminal and storage medium |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-07-02 |