CN106446883B - Scene reconstruction method based on optical label - Google Patents
- Publication number
- CN106446883B (Application CN201610789231.4A)
- Authority
- CN
- China
- Prior art keywords
- optical label
- image
- background image
- scene
- label
- Prior art date
- Legal status
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
Abstract
A scene reconstruction method based on optical labels, which reinforces and highlights the transmitted information, excludes interference from irrelevant information, and ensures accurate and efficient information transfer. It comprises the following steps. Step 1: within a unit time, an optical label recognition device scans for optical labels, finds the optical label in the scene, and captures an optical label image; the coordinate position of the label's positioning identifier is mapped from the frame images into the optical label image and serves as the anchor point. Step 2: a background image of the scene containing the optical label is acquired; the background image comprises all image regions of the scene except the anchor point. Step 3: image scaling, image-information simplification, edge enhancement, and image rendering are applied to the background image in turn, yielding a reconstructed background image. Step 4: the reconstructed background image and the optical label image are fused and superimposed to obtain the reconstructed scene, which is sent to a VR display device, completing the scene reconstruction for that unit time.
Description
Technical field
The present invention relates to scene reconstruction methods, and in particular to a scene reconstruction method based on optical labels.
Background technique
Owing to its distinctive way of conveying information, an optical label can serve as an anchor point for VR (virtual reality) acquisition equipment in physical space, calibrating spatial positions. Based on these optical label anchor points, hardware and software can project objects in physical space into VR space according to their actual positional relationships, establishing a mapping between the two spaces. On top of this mapping, more advanced applications can be built, such as interactive access to physical entities, spatial positioning and roaming, association, and computation. However, anchor points alone are not enough to establish the spatial mapping: besides the anchor points, the blank regions between them must also be filled in throughout the virtual space, so that the user can better read and understand the VR space, and the filled-in imagery must also highlight and reinforce the optical labels. One way to solve this problem is to capture the background image of the physical space synchronously, process it, and fill it into the VR scene. Since not all background image information is relevant to the optical label's application, these background images must be processed appropriately, including emphasizing and rendering important features and simplifying irrelevant information, before being superimposed on the optical label image to complete the scene reconstruction. At present there is no such optical-label-based scene reconstruction method.
Summary of the invention
To address these problems in the prior art, the present invention provides a scene reconstruction method based on optical labels that reinforces and highlights the information conveyed by the optical label, excludes interference from irrelevant information, and ensures accurate and efficient information transfer.
The present invention is achieved through the following technical solution.
The scene reconstruction method based on optical labels comprises the following steps.
Step 1: within a unit time, an optical label recognition device scans for optical labels, finds the optical label in the scene, and captures an optical label image; the coordinate position of the label's positioning identifier is mapped from the frame images into the optical label image and serves as the anchor point.
Step 2: a background image of the scene containing the optical label is acquired; the background image comprises all image regions of the scene except the anchor point.
Step 3: image scaling, image-information simplification, edge enhancement, and image rendering are applied to the background image in turn, yielding a reconstructed background image.
Step 4: the reconstructed background image and the optical label image are fused and superimposed to obtain the reconstructed scene, which is sent to a VR display device, completing the scene reconstruction for that unit time.
Preferably, step 1 proceeds as follows. The optical label recognition device continuously acquires multiple frames of the scene and shoots one image containing the optical label with its anchor point. Differencing every pair of adjacent frames yields difference images, in which the positioning identifier of the optical label is located, giving the label's coordinate position in the optical label image. The coordinate position of the luminous positioning identifier is thus mapped from the frame images into the optical label image, yielding the coordinate position of the optical label and the anchor point where it coincides with the optical label image. The acquisition interval between adjacent frames is no less than the interval between two flashes of the label's dynamic positioning identifier.
Further, when the mapped coordinates of the luminous positioning identifier do not fully coincide with its actual coordinates in the optical label image: first, from the positional relationship between the optical label and the luminous positioning identifier, the label's position range and scene area within the optical label image are obtained; second, within the scene area containing the optical label, the positioning identifier is searched for and identified in the multiple frames, yielding its coordinates in each frame; the mean of all these coordinates is then taken as the coincidence point of the positioning identifier and the optical label image, i.e. the anchor point.
Preferably, step 2 proceeds as follows: the background image is acquired in the time slots of the optical label recognition device not used for label acquisition.
Preferably, step 3 proceeds as follows.
Step 3.1, image scaling: the display scale of the physical-space background image is made consistent with the display scale of the VR space, giving a scaled background image.
Step 3.2, image-information simplification: the scaled background image is converted to grayscale, giving a grayscale-preprocessed background image; using RGB color components, heat, illumination intensity, or energy reflection, irrelevant image information in the grayscale-preprocessed background image is removed, giving a grayscale background image.
Step 3.3, edge enhancement: the grayscale background image I2 is high-pass filtered to obtain an enhancement-preprocessed background image I2′, and the edge-enhanced background image I3 is then obtained as
I3 = I2 + k·I2′,
where k is an adjustment factor.
Step 3.4, image rendering: the content of the optical label is obtained by the optical label decoding method; according to the frame information content stored in the label, its RGB value, and the corresponding rendering feature of the rendering texture, the edge-enhanced background image is rendered within a circle centered on the optical label's position with an adjustable radius of r pixels, giving the reconstructed background image.
Further, in step 3.2, when RGB color components are used to remove irrelevant information from the grayscale background image, any one of the following three methods is used:
maximum-value method: R = G = B = Max(R, G, B);
mean-value method: R = G = B = (R + G + B)/3;
weighted-average method: R = G = B = wr·R + wg·G + wb·B;
where R, G, B are the RGB color component values of any pixel of the grayscale-preprocessed background image, and wr, wg, wb are the respective weights of R, G, B, each in the interval [0, 1].
Further, in step 3.4, the decoding method is indicated by an algorithm flag bit in the optical label's signal cells and obtained from the decoding-algorithm database of the coding standard.
Compared with the prior art, the invention has the following beneficial technical effects.
When mapping an optical-label-based physical space to VR space, the invention uses optical labels as positioning "anchor points" so that positions in the two spaces correspond accurately, while the image information of the real scene is specially processed to reconstruct the scene, forming a complete VR-space display with prominent optical labels. By separating the background image from the optical label image and reconstructing the background, the label's on-screen display is enhanced while background information weakly correlated with the label is removed, so that the user focuses more on the optical label.
Brief description of the drawings
Fig. 1 is a sample optical label of an embodiment of the invention.
Fig. 2 is a scene reconstruction diagram of an embodiment of the invention.
Fig. 3 is an image after information simplification in an embodiment of the invention.
Fig. 4 is a reconstructed virtual-space scene in an embodiment of the invention.
Fig. 5 is a flow diagram of the optical-label-based scene reconstruction method of the invention.
In the figures: 1 is the optical label recognition device, 2 is the real scene containing the optical label, 3 is the optical label, 4 is a background object in the scene, 5 is the scene reconstruction processing server, and 6 is the reconstructed image of the real scene in virtual space.
Specific embodiment
The present invention is described in further detail below with reference to specific embodiments, which explain rather than limit the invention.
When mapping an optical-label-based physical space to VR space, the invention uses optical labels as positioning "anchor points" so that positions in the two spaces correspond accurately, while the image information of the real scene is specially processed to reconstruct the scene, forming a complete VR-space display with prominent optical labels.
A sample optical label is shown in Fig. 1. An optical label consists of two parts: a group of signal cells (the "data bits") and positioning identifiers (the "flag bits"). The positioning identifiers are the three larger rectangular frames in the figure (these three frames are called "a group of positioning identifiers"). In operation they flash synchronously at a certain frequency, so that a camera device can quickly detect them by frame differencing; the positions of the signal cells can then be determined from the positioning identifiers for data identification and reading. The signal cells are the black-and-white rectangles between the positioning identifiers; multiple signal cells form a group, typically (but not limited to) a 5 × 5 array, each cell representing a "0" or "1" of the digital signal, and the whole matrix of the signal cell group forming one frame of the digital signal sequence (here the side length of an identifier is twice the side length of a data bit, which eases positioning). To increase the data space the signal cells can represent, each cell can also flash according to a predetermined program in operation, so that more signal content is displayed over multiple frames. In that case a start/end identification frame must be provided among the frames to delimit the beginning and end of one complete multi-frame cycle; this frame's signal cell group is set to a special data combination, for example all 0s, all 1s, or any specific combination distinct from the information actually conveyed.
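The multi-frame cycle described above can be sketched as follows: collect the data frames that lie between two occurrences of the special start/end identification frame. This is a minimal illustration; the frame representation (plain strings) and the helper name are assumptions, and the all-zero start frame is the patent's own example.

```python
def frames_to_message(frames, start_frame):
    """Collect the data frames of one complete cycle: everything
    between two occurrences of the special start/end identification
    frame (e.g. the all-zeros combination named in the description)."""
    msg, collecting = [], False
    for f in frames:
        if f == start_frame:
            if collecting:
                break          # second occurrence: end of the cycle
            collecting = True  # first occurrence: start of the cycle
        elif collecting:
            msg.append(f)      # a data frame inside the cycle
    return msg

S = "0" * 25                   # all-zero 5x5 start/end frame
frames = ["junk", S, "0101", "1110", S, "more junk"]
msg = frames_to_message(frames, S)   # ["0101", "1110"]
```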
Scene reconstruction based on the optical label is shown in Fig. 2. The method takes one unit time as a cycle and executes in a loop; its flow, shown in Fig. 5, is:
Step 1: scan for optical labels and find the optical label in the scene. Scanning proceeds as follows. The optical label recognition device 1 continuously acquires multiple label-image frames, denoted f0, f1, ..., fm; the acquisition interval between adjacent frames is no less than the interval between two flashes of the dynamic positioning identifier. It then shoots one optical label image, denoted p. Differencing every pair of adjacent frames yields difference images, in which the positioning identifier of the optical label 3 is located, giving its coordinate position in the optical label image; the position coordinates of the luminous cells are mapped from the frame images into the optical label image. Because of hand shake and similar causes, the mapped coordinates of the luminous positioning identifier may not fully coincide with its actual coordinates in the optical label image p. In that case, from the positional relationship between the optical label and the luminous positioning identifier, the label's position range and scene area within the optical label image are obtained; within the scene area containing the optical label, the positioning identifier is searched for and identified in the multiple frames, yielding its coordinates in each frame; the mean of these coordinates is taken as the final coincidence point of the positioning identifier and the optical label image p, i.e. the anchor point.
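The detection-and-averaging part of step 1 can be sketched with numpy: difference adjacent frames, take the centroid of the changed pixels as the flashing identifier's position in each difference image, then average the per-frame positions into the final anchor point. The threshold value and the centroid detector are assumptions; the patent does not fix a particular difference detector.

```python
import numpy as np

def locate_anchor(frames, thresh=50):
    """Locate a flashing positioning identifier by differencing
    consecutive frames and averaging the detected centroids
    (illustrative sketch of step 1)."""
    centroids = []
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(int) - prev.astype(int))
        ys, xs = np.nonzero(diff > thresh)   # pixels that changed = flash
        if len(xs):
            centroids.append((xs.mean(), ys.mean()))
    # mean of all per-frame coordinates -> final anchor point
    return tuple(np.mean(centroids, axis=0)) if centroids else None

# toy frames: a 2x2 patch at rows/cols 5..6 blinking on a 16x16 image
frames = []
for i in range(4):
    f = np.zeros((16, 16), dtype=np.uint8)
    if i % 2 == 0:
        f[5:7, 5:7] = 255        # identifier "on"
    frames.append(f)
anchor = locate_anchor(frames)   # ≈ (5.5, 5.5)
```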
Step 2: treat all image regions except the anchor point as the background image, and acquire the background image of the scene containing the optical label. Since the camera of the optical label recognition device does not use every acquisition time slot for optical label data, the background image is acquired in the time slots not used for label acquisition. Within a unit time T, T = [t1, m1, t2, m2, t3, m3, t4, m4, t5, m5, t6, ...], where ti (i a positive integer) are optical-label data acquisition slots and mj (j a positive integer) are non-label acquisition slots. The background image obtained in these non-label acquisition slots is denoted I; when acquiring I, the camera must use the shutter speed of the preceding slot.
Step 3: process and reconstruct the background image, as follows.
First the background image is scaled so that the display scale of the physical-space background image matches the display scale in VR space; denoting the source background image I0, the scaled background image is I1. Any image scaling method may be used here.
Optical label scene information is then simplified: the background image is converted to grayscale, giving a grayscale-preprocessed background image, and irrelevant image information is removed from it, giving the grayscale background image I2. Any one of the following three methods may be used:
● maximum-value method: R = G = B = Max(R, G, B);
● mean-value method: R = G = B = (R + G + B)/3;
● weighted-average method: R = G = B = wr·R + wg·G + wb·B,
where R, G, B are the RGB color component values of any pixel of the grayscale-preprocessed background image, wr, wg, wb are the respective weights of R, G, B, each in [0, 1], and Max is the maximum-value function. This yields the grayscale background image I2. The simplification may also be performed according to other quantitative indices, such as heat, illumination intensity, or energy reflection (see Fig. 3), and is not limited to RGB color components.
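The three grayscale simplifications above can be sketched directly in numpy. The default weights used here are the common luminance coefficients, an assumption: the patent only requires wr, wg, wb in [0, 1].

```python
import numpy as np

def grayscale(img, method="mean", w=(0.299, 0.587, 0.114)):
    """Three grayscale simplifications of step 3.2.
    img: H x W x 3 float array; w are assumed weights."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    if method == "max":        # R = G = B = Max(R, G, B)
        gray = np.maximum(np.maximum(r, g), b)
    elif method == "mean":     # R = G = B = (R + G + B) / 3
        gray = (r + g + b) / 3.0
    else:                      # R = G = B = wr*R + wg*G + wb*B
        wr, wg, wb = w
        gray = wr * r + wg * g + wb * b
    # replicate the gray value into all three channels
    return np.repeat(gray[..., None], 3, axis=-1)

px = np.array([[[30.0, 60.0, 90.0]]])   # one toy pixel
assert grayscale(px, "max")[0, 0, 0] == 90.0
assert grayscale(px, "mean")[0, 0, 0] == 60.0
```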
Optical label scene information is then reinforced by edge-enhancing the grayscale background image I2: I2 is high-pass filtered to obtain the enhancement-preprocessed background image I2′, and the edge-enhanced background image I3 is computed as
I3 = I2 + k·I2′,
where k is an adjustment factor.
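The formula I3 = I2 + k·I2′ can be sketched as below. The particular high-pass filter (image minus a 3 × 3 box blur, i.e. unsharp-mask style) is one common choice assumed here; the patent does not fix a kernel.

```python
import numpy as np

def edge_enhance(I2, k=0.5):
    """Step 3.3 sketch: I3 = I2 + k * I2', with I2' obtained by an
    assumed high-pass filter (image minus 3x3 box blur)."""
    pad = np.pad(I2, 1, mode="edge")
    # 3x3 box blur computed via shifted sums of the padded image
    blur = sum(pad[dy:dy + I2.shape[0], dx:dx + I2.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    I2p = I2 - blur            # high-pass component I2'
    return I2 + k * I2p        # edge-enhanced background image I3

I2 = np.zeros((5, 5))
I2[:, 2:] = 100.0              # a vertical step edge
I3 = edge_enhance(I2, k=0.5)   # the edge is overshot, flat areas unchanged
```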
Optical label scene information is then rendered. The content of the optical label is obtained by the optical label decoding method; this decoding method is indicated by an algorithm flag bit in the label's signal cells and obtained from the decoding-algorithm database of the coding standard. If the information of the current label frame is m, the corresponding color is looked up in the following preset list:
Information content | RGB value | Texture pattern
---|---|---
m | (R, G, B) | T
Here T is a texture pattern, which may come from any third-party resource. Centered on the optical label's position, with an adjustable radius of r pixels (r > 0), the edge-enhanced background image I3 is rendered according to the frame information content, the RGB value, and the corresponding rendering feature of the texture. This processing yields the reconstructed background image I4, in which the redundant image information unrelated to the optical label is removed and the image display around the optical label is enhanced.
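The circular rendering of step 3.4 can be sketched as tinting a disc of radius r around the label's position with the looked-up color. The alpha blend and the plain color tint (in place of a texture pattern T) are simplifying assumptions.

```python
import numpy as np

def render_around_label(I3, center, r, rgb, alpha=0.5):
    """Step 3.4 sketch: blend the colour looked up for frame content
    m into a disc of radius r centred on the optical label."""
    h, w = I3.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = center
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2   # disc of radius r
    out = I3.astype(float).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.asarray(rgb, float)
    return out

img = np.zeros((9, 9, 3))                              # toy background
out = render_around_label(img, center=(4, 4), r=2, rgb=(0, 200, 0))
```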
Step 4: the reconstructed background image is superimposed on the optical label image and submitted to the VR display device, finishing the cycle; the above processing is carried out for the next time cycle if needed. The final result resembles Fig. 4.
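The superposition of step 4 can be sketched as pasting the optical label image over the reconstructed background wherever a label mask is set. The explicit boolean mask marking label pixels is an assumed input.

```python
import numpy as np

def fuse(background, label_img, label_mask):
    """Step 4 sketch: superimpose the optical label image onto the
    reconstructed background at the masked label pixels."""
    out = background.copy()
    out[label_mask] = label_img[label_mask]   # label wins where masked
    return out

bg = np.full((4, 4), 10)           # reconstructed background (toy)
lab = np.full((4, 4), 255)         # optical label image (toy)
mask = np.zeros((4, 4), bool)
mask[1:3, 1:3] = True              # assumed label-pixel mask
scene = fuse(bg, lab, mask)
```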
The invention enhances the on-screen display of the optical label while removing background information weakly correlated with the label, so that the user focuses more on the optical label.
In actual use, the following scenario and functions can be realized. User U roams a commercial street via optical label technology; to make the optical labels stand out in the VR device used (e.g. Google Glass), the scene must be reconstructed. First, user U captures the view ahead with an image acquisition device. Optical label scanning finds two optical labels ahead, identifying a restaurant and a hotel respectively. After identification, the method collects the optical label coded information and the current scene information in different time slots, then reconstructs the scene information: the background around the label pointing to the hotel is given line enhancement, so that the user sees the hotel's outline and scale clearly, and the hotel is rendered in a soft light green; the background around the label pointing to the restaurant is given line reduction, so that the user sees the restaurant's interior style clearly, and it is rendered in a vivid yellow. With the help of this background reconstruction, user U more easily attends to the relevant optical label and enjoys the services it provides, free from undue interference by other irrelevant information.
Claims (6)
1. A scene reconstruction method based on optical labels, characterized by comprising the following steps:
Step 1: within a unit time, scanning for optical labels with an optical label recognition device, finding the optical label in the scene, and capturing an optical label image; mapping the coordinate position of the label's positioning identifier from the frame images into the optical label image as the anchor point;
Step 2: acquiring a background image of the scene containing the optical label, the background image comprising all image regions of the scene except the anchor point;
Step 3: successively applying image scaling, image-information simplification, edge enhancement, and image rendering to the background image to obtain a reconstructed background image, specifically:
Step 3.1, image scaling: making the display scale of the physical-space background image consistent with the display scale of the VR space, giving a scaled background image;
Step 3.2, image-information simplification: converting the scaled background image to grayscale, giving a grayscale-preprocessed background image; using RGB color components, heat, illumination intensity, or energy reflection, removing irrelevant image information from the grayscale-preprocessed background image, giving a grayscale background image;
Step 3.3, edge enhancement: high-pass filtering the grayscale background image I2 to obtain an enhancement-preprocessed background image I2′, then obtaining the edge-enhanced background image I3 as
I3 = I2 + k·I2′,
where k is an adjustment factor;
Step 3.4, image rendering: obtaining the content of the optical label by the optical label decoding method, and, according to the frame information content stored in the label, its RGB value, and the corresponding rendering feature of the rendering texture, rendering the edge-enhanced background image within a circle centered on the optical label's position with an adjustable radius of r pixels, giving the reconstructed background image;
Step 4: fusing and superimposing the reconstructed background image with the optical label image to obtain the reconstructed scene, which is sent to a VR display device, completing the scene reconstruction for the unit time.
2. The scene reconstruction method based on optical labels according to claim 1, characterized in that step 1 specifically comprises: the optical label recognition device continuously acquires multiple frames of the scene and shoots one image containing the optical label with its anchor point; differencing every pair of adjacent frames yields difference images, in which the positioning identifier of the optical label is located, giving the label's coordinate position in the optical label image; the coordinate position of the luminous positioning identifier is thus mapped from the frame images into the optical label image, yielding the coordinate position of the optical label and the anchor point where it coincides with the optical label image; wherein the acquisition interval between adjacent frames is no less than the interval between two flashes of the label's dynamic positioning identifier.
3. The scene reconstruction method based on optical labels according to claim 2, characterized in that, when the mapped coordinates of the luminous positioning identifier do not fully coincide with its actual coordinates in the optical label image: first, from the positional relationship between the optical label and the luminous positioning identifier, the label's position range and scene area within the optical label image are obtained; second, within the scene area containing the optical label, the positioning identifier is searched for and identified in the multiple frames, yielding its coordinates in each frame; the mean of all these coordinates is then taken as the coincidence point of the positioning identifier and the optical label image, i.e. the anchor point.
4. The scene reconstruction method based on optical labels according to claim 1, characterized in that step 2 specifically comprises: acquiring the background image in the time slots of the optical label recognition device not used for label acquisition.
5. The scene reconstruction method based on optical labels according to claim 1, characterized in that, in step 3.2, when RGB color components are used to remove irrelevant information from the grayscale background image, any one of the following three methods is used:
maximum-value method: R = G = B = Max(R, G, B);
mean-value method: R = G = B = (R + G + B)/3;
weighted-average method: R = G = B = wr·R + wg·G + wb·B;
where R, G, B are the RGB color component values of any pixel of the grayscale-preprocessed background image, and wr, wg, wb are the respective weights of R, G, B, each in the interval [0, 1].
6. The scene reconstruction method based on optical labels according to claim 1, characterized in that, in step 3.4, the decoding method is indicated by an algorithm flag bit in the optical label's signal cells and obtained from the decoding-algorithm database of the coding standard.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610789231.4A CN106446883B (en) | 2016-08-30 | 2016-08-30 | Scene reconstruction method based on optical label |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106446883A CN106446883A (en) | 2017-02-22 |
CN106446883B true CN106446883B (en) | 2019-06-18 |
Family
ID=58163637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610789231.4A Active CN106446883B (en) | 2016-08-30 | 2016-08-30 | Scene reconstruction method based on optical label |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106446883B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107886017B (en) * | 2017-11-09 | 2021-02-19 | 陕西外号信息技术有限公司 | Method and device for reading optical label sequence |
CN110471580B (en) | 2018-05-09 | 2021-06-15 | 北京外号信息技术有限公司 | Information equipment interaction method and system based on optical labels |
CN109710198B (en) * | 2018-12-29 | 2020-12-25 | 森大(深圳)技术有限公司 | Printing method, device and equipment for local dynamic variable image |
CN111754449A (en) * | 2019-03-27 | 2020-10-09 | 北京外号信息技术有限公司 | Scene reconstruction method based on optical communication device and corresponding electronic equipment |
CN112561952A (en) * | 2019-09-26 | 2021-03-26 | 北京外号信息技术有限公司 | Method and system for setting renderable virtual objects for a target |
TWI785332B (en) * | 2020-05-14 | 2022-12-01 | 光時代科技有限公司 | Three-dimensional reconstruction system based on optical label |
CN113469901A (en) * | 2021-06-09 | 2021-10-01 | 丰疆智能科技股份有限公司 | Positioning device based on passive infrared tag |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101208723A (en) * | 2005-02-23 | 2008-06-25 | 克雷格·萨默斯 | Automatic scene modeling for the 3D camera and 3D video |
CN101610421A (en) * | 2008-06-17 | 2009-12-23 | 深圳华为通信技术有限公司 | Video communication method, Apparatus and system |
CN101742349A (en) * | 2010-01-05 | 2010-06-16 | 浙江大学 | Method for expressing three-dimensional scenes and television system thereof |
CN103049728A (en) * | 2012-12-30 | 2013-04-17 | 成都理想境界科技有限公司 | Method, system and terminal for augmenting reality based on two-dimension code |
CN103528571A (en) * | 2013-10-12 | 2014-01-22 | 上海新跃仪表厂 | Monocular stereo vision relative position/pose measuring method |
CN103971079A (en) * | 2013-01-28 | 2014-08-06 | 腾讯科技(深圳)有限公司 | Augmented reality implementation method and device of two-dimensional code |
US9294873B1 (en) * | 2011-09-22 | 2016-03-22 | Amazon Technologies, Inc. | Enhanced guidance for electronic devices using objects within in a particular area |
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| TR01 | Transfer of patent right | Effective date of registration: 2021-12-03. Patentee after: Shanghai Guangshi fusion Intelligent Technology Co., Ltd., 2nd floor, No. 979 Yunhan Road, Lingang New Area, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai 201306. Patentee before: Xi'an Xiaoguangzi Network Technology Co., Ltd., Room 301, Block A, Innovation Information Building, Xi'an Software Park, No. 2 Science and Technology Road, Xi'an High-tech Zone, Shaanxi Province 710075. |