CN112433641A - Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors - Google Patents

Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors

Info

Publication number
CN112433641A
CN112433641A (application CN202011254165.3A)
Authority
CN
China
Prior art keywords
image
depth
desktop
rgbd
projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011254165.3A
Other languages
Chinese (zh)
Other versions
CN112433641B (en)
Inventor
宁广良 (Ning Guangliang)
孙广 (Sun Guang)
王文锋 (Wang Wenfeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Gooest Media Technology Co ltd
Original Assignee
Dalian Gooest Media Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Gooest Media Technology Co ltd filed Critical Dalian Gooest Media Technology Co ltd
Priority to CN202011254165.3A priority Critical patent/CN112433641B/en
Publication of CN112433641A publication Critical patent/CN112433641A/en
Application granted granted Critical
Publication of CN112433641B publication Critical patent/CN112433641B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/382Information transfer, e.g. on bus using universal interface adapter
    • G06F13/385Information transfer, e.g. on bus using universal interface adapter for adaptation of a particular data processing system to different peripheral devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T5/90
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The invention discloses an automatically calibrated desktop prop interaction system using multiple RGBD depth sensors, and relates to the technical field of interactive projection recognition. Interactive position calibration of the multiple RGBD depth sensors is achieved with an automatic color image calibration algorithm based on Gray code structured light together with the coordinate mapping between the depth image and the color image. After a depth environment background image between the RGBD depth sensors and the desktop is collected, the difference image between the real-time depth image and the depth environment background image is computed in order to locate the coordinates of props of different shapes on the desktop in both the fused depth image and the projection desktop coordinates. The recognition data of the differently shaped props on the desktop are then compared with the data of a prop information base to determine the shape category of each prop. Finally, the prop category and desktop coordinates are sent to interactive projection effect software through the OSC communication protocol, enhancing the interactive projection effect.

Description

Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors
Technical field:
the invention relates to the technical field of interactive projection recognition, and in particular to an implementation method for an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors.
Background art:
OpenCV (Open Source Computer Vision Library) is a cross-platform computer vision library released under the BSD license (open source) that implements many general-purpose image processing and computer vision algorithms, including morphological transformation, thresholding, contour finding and Gray code structured light. Binary Gray code is an unweighted code; its reflective and cyclic properties mean that adjacent code words differ in a single bit, which eliminates the possibility of large errors during readout. It is a reliable, error-minimizing coding scheme that is widely used in measurement technology.
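The single-bit-change property of the reflected binary Gray code described above can be illustrated with a short sketch (not part of the patent):

```python
def binary_to_gray(n: int) -> int:
    # Reflected binary Gray code: adjacent integers differ in exactly one bit.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert by repeatedly folding the high bits back down.
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g
```

Because consecutive code words differ in one bit only, a single misread stripe in a structured light sequence shifts the decoded position by at most one step rather than producing a large random error.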
An RGBD depth sensor is an image sensor that adds a depth measurement function to an ordinary RGB camera; the mainstream technical schemes are currently binocular, structured light and time of flight (TOF). The binocular scheme computes depth from RGB image feature-point matching and triangulation, so the observed scene must have good illumination and texture. The structured light scheme actively projects known coded patterns, which improves feature matching; it achieves high measurement accuracy at short range and has higher resolution. The time of flight (TOF) scheme measures depth directly from the flight time of light, giving a longer recognition distance and high measurement accuracy, but lower resolution.
In a desktop prop interaction system, real-time, accurate and stable prop recognition and tracking give users a good interactive experience. Thanks to its rich application scenarios, the desktop prop interaction system has attracted wide attention from researchers; its algorithms mainly cover recognition-to-projection coordinate mapping, prop position recognition and tracking, and prop category recognition. Combining the recognition and classification results with interactive projection technology produces an experience-enhancing interactive artistic effect.
In the prior art, a single RGBD depth sensor used for interactive recognition cannot cover a large interactive projection area because of the small viewing angle of its depth recognition range. Moreover, when prop and projection positions are calibrated manually from the depth image, calibration and debugging are cumbersome and the output position carries a certain deviation, which degrades the interactive experience. Addressing these defects, this application proposes an implementation scheme for an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors: depth sensors based on the structured light or time of flight (TOF) schemes are used, the depth data of the multiple RGBD depth sensors are fused to realize interactive recognition over a large scene, and accurate recognition of interactive prop positions is achieved with an automatic color image calibration algorithm based on Gray code structured light together with the coordinate mapping between the depth image and the color image.
Summary of the invention:
the invention discloses an automatic calibration desktop prop interaction system with multiple RGBD depth sensors, which consists of multiple RGBD depth sensors, multiple projectors, a computer host, a rectangular desktop and props in different shapes; the plurality of RGBD depth sensors and the plurality of projectors are connected with the corresponding USB interface of the computer host and the video output interface of the display card through data cables, wherein the plurality of RGBD depth sensors are arranged on angle-adjustable and height-adjustable hangers above the desktop, so that the heights of the RGBD depth sensors are kept consistent and parallel to the long edge of the desktop, and the display pictures of the plurality of projectors are subjected to projection fusion through third-party projection fusion software; the interactive position calibration of a plurality of RGBD depth sensors is realized by adopting a gray code structured light-based color image automatic calibration algorithm and a depth image and color image coordinate mapping relation; after a depth environment background image between a plurality of RGBD depth sensors and a desktop is collected, calculating a difference image between a real-time depth image and the depth environment background image to position coordinates of the props with different desktop shapes in the fusion depth image and projection desktop coordinates and identify length, width and height values of the props; and comparing the identification data of the props with different shapes on the desktop with the data of the prop information base to determine the shape categories of all the props. And finally, the prop type and the desktop coordinates are sent to interactive projection effect software through an OSC communication protocol, so that the interactive projection effect of the experience is enhanced.
Preferably, the number of projectors and RGBD depth sensors is limited only by the maximum number supported by the hardware configuration and operating system of the computer host.
Preferably, a server/client model based on the TCP/IP communication protocol virtualizes an RGBD depth sensor connected to a node computer host in the local area network as a local device of the projection computer host, realizing large-scene interactive recognition based on multiple RGBD depth sensors.
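The patent does not specify the wire format of this server/client link. One possible scheme for forwarding a sensor frame from a node host to the projection host over TCP — the header layout and all names below are illustrative assumptions — is a fixed binary header followed by the raw depth payload:

```python
import struct

# Illustrative header: sensor id, frame width, height, payload length (big-endian).
HEADER_FMT = ">IHHI"
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def pack_frame(sensor_id: int, width: int, height: int, depth_bytes: bytes) -> bytes:
    # 16-bit depth: the payload must be width * height * 2 bytes.
    assert len(depth_bytes) == width * height * 2
    return struct.pack(HEADER_FMT, sensor_id, width, height, len(depth_bytes)) + depth_bytes

def unpack_frame(packet: bytes):
    # The receiving projection host parses the header, then slices out the payload.
    sensor_id, width, height, n = struct.unpack(HEADER_FMT, packet[:HEADER_SIZE])
    payload = packet[HEADER_SIZE:HEADER_SIZE + n]
    return sensor_id, width, height, payload
```

With such framing, the projection host can treat each decoded stream exactly like a locally attached sensor, which is the "virtualized as local device" idea described above.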
To realize the above concept, the invention provides an implementation method for an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors, comprising the following steps:
s1, connecting the projectors with the display card output interface of the computer host through video transmission cables, projecting pictures to the desktop space in a hoisting mode, and fusing the projected pictures through third-party projection fusion software;
s2, connecting a plurality of RGBD depth sensors with a USB interface of a computer host through data cables, and installing the RGBD depth sensors on angle-adjustable and height-adjustable hangers above a desktop, wherein the angle-adjustable and height-adjustable hangers correspond to the position of a projector, so that the heights of the RGBD depth sensors are consistent and parallel to the long edges of the desktop;
s3, acquiring depth data and color data of each RGBD depth sensor in real time by interactive projection recognition software, and splicing depth images according to the arrangement positions of the depth data and the color data;
s4, under the condition that no prop is placed on the desktop, the interactive projection recognition software collects a background image of the depth environment between a plurality of RGBD depth sensors and the desktop, and stores the background image;
s5, projecting multi-frame fringe images generated based on a Gray code structured light algorithm to corresponding areas of a desktop according to the positions of the RGBD depth sensors, and collecting data of each frame fringe image by using a color camera of the multi-frame fringe image to obtain a coordinate mapping relation between a fringe projection area and a color image;
s6, acquiring a coordinate mapping relation between the fringe projection area and the corresponding depth image according to the coordinate mapping relation between the depth image and the color image in each RGBD depth sensor;
s7, acquiring a prop identification image with props separated from a desktop environment according to the difference value between the background image and the real-time depth image of the depth environment of the multiple RGBD depth sensors and the identification range threshold parameter;
s8, acquiring the coordinates of the center point of the prop, the length, the width, the convex defects and other geometric information of the prop identification image based on the morphological transformation, the contour search and the convex defect detection algorithm of the OpenCV vision library, and acquiring the height value of the prop by combining the depth information;
s9, comparing the length, width, height, convex defect and other geometric information of the prop with the data of the prop information base to obtain the type of the prop;
s10, acquiring the corresponding item projection coordinate position according to the coordinate of the item center point in the depth image and the mapping relation between the depth image coordinate and the projection coordinate;
and S11, sending information such as the type parameters and the projection coordinates of the props to interactive projection effect software by adopting an OSC protocol, and realizing accurate interactive projection effect based on the types of the props.
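Steps S7–S9 above can be sketched as follows. This is a simplified NumPy illustration, not the patent's implementation: the patent uses OpenCV morphological transformation, contour finding and convexity-defect detection, whereas this sketch stands in a bounding box for the contour analysis, and the threshold values and prop information base entries are invented for the example:

```python
import numpy as np

def detect_prop(background: np.ndarray, depth: np.ndarray, t_min=10, t_max=300):
    """S7: difference the background and real-time depth images, keep pixels whose
    difference lies in the recognition range; S8 (simplified): take the bounding
    box of the mask as a stand-in for contour analysis."""
    diff = background.astype(np.int32) - depth.astype(np.int32)
    mask = (diff >= t_min) & (diff <= t_max)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    cx, cy = (xs.min() + xs.max()) / 2, (ys.min() + ys.max()) / 2
    length, width = xs.max() - xs.min() + 1, ys.max() - ys.min() + 1
    height = int(diff[mask].max())  # prop height from the depth difference
    return {"center": (cx, cy), "length": length, "width": width, "height": height}

def classify(prop, info_base):
    """S9: nearest match of (length, width, height) against the prop info base."""
    key = np.array([prop["length"], prop["width"], prop["height"]])
    return min(info_base, key=lambda name: np.abs(np.array(info_base[name]) - key).sum())
```

For example, a synthetic background at 1000 mm with a 10 × 5 pixel region raised by 100 mm is detected with length 10, width 5 and height 100, and is matched to whichever info-base entry has the closest dimensions.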
Preferably, the interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and stitches the depth images according to the arrangement positions. The stitching method is as follows: from the numbers of cameras arranged in the horizontal and vertical directions (n_h, n_v) and the image resolution (w, h), a stitched image with horizontal and vertical resolution (n_h · w, n_v · h) is initialized; then each sensor's image is set into the region of interest (ROI) of the stitched image at the position (i · w, j · h) given by that sensor's horizontal and vertical arrangement position (i, j), where 0 ≤ i < n_h and 0 ≤ j < n_v, completing the stitching of all images.
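Under the symbols above (camera counts, image resolution, arrangement positions), the ROI stitching can be sketched as follows; the function name and the dictionary-of-frames layout are our assumptions:

```python
import numpy as np

def stitch_depth_images(frames, n_h, n_v):
    """Place each sensor's (h, w) depth frame into its ROI of an (n_v*h, n_h*w)
    stitched image; frames[(i, j)] is the frame of the sensor at horizontal
    position i and vertical position j."""
    h, w = next(iter(frames.values())).shape
    stitched = np.zeros((n_v * h, n_h * w), dtype=np.uint16)
    for (i, j), frame in frames.items():
        stitched[j * h:(j + 1) * h, i * w:(i + 1) * w] = frame  # copy into the ROI
    return stitched
```

For two sensors arranged side by side (n_h = 2, n_v = 1), the stitched image is simply twice as wide as a single depth frame, matching Example 1 below.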
Preferably, the depth environment background image is obtained as follows: with no props placed on the desktop, each RGBD depth sensor collects multiple frames of depth image data, and the median-value depth environment background image is obtained by taking, at each coordinate, the median of the depth values over the collected frames, which improves the stability of the depth environment background image.
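A minimal sketch of this per-pixel median, assuming 16-bit depth frames collected into a list (invalid zero-depth pixels, which a real system might mask out first, are ignored here):

```python
import numpy as np

def depth_background(frames):
    """Median over a stack of depth frames at each pixel coordinate; the median
    suppresses transient depth noise that a single frame or a mean would keep."""
    return np.median(np.stack(frames, axis=0), axis=0).astype(np.uint16)
```

Because the median discards outlier readings at each coordinate, one noisy frame among the collected set does not disturb the stored background.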
Preferably, the multi-frame fringe images generated by the Gray code structured light algorithm are fringe-coded images containing equally spaced alternating bright and dark stripes arranged in the horizontal and vertical directions. With the sensors arranged n_h across and n_v down, the width and height (W_g, H_g) of the Gray code structured light image corresponding to each RGBD depth sensor are:

W_g = W_p / n_h (1)
H_g = H_p / n_v (2)

where W_p is the horizontal resolution of the projection screen and H_p is its vertical resolution. The display position (x_i, y_j) of the fringe-coded images on the projection screen is:

x_i = i · W_g (3)
y_j = j · H_g (4)

where (i, j) is the arrangement position of the corresponding sensor, 0 ≤ i < n_h, 0 ≤ j < n_v. The structured light images are refreshed frame by frame at a rate of one frame per second and displayed at the corresponding positions of the projection picture. The color camera of each RGBD depth sensor collects every structured light frame, and decoding the captured Gray code sequence gives the mapping from color image pixel coordinates to the image coordinates of the Gray code projection area, as shown in formula (5),

(x_g, y_g) = M_cg(u_c, v_c) (5)

where (u_c, v_c) are color image pixel coordinates and (x_g, y_g) are image coordinates of the Gray code structured light projection area. According to the coordinate alignment mapping between the depth image and the color image of the RGBD depth sensor, as shown in formula (6),

(u_c, v_c) = M_dc(u_d, v_d) (6)

the mapping between depth image coordinates and the image coordinates of the Gray code structured light projection area can be obtained, as shown in formula (7),

(x_g, y_g) = M_cg(M_dc(u_d, v_d)) (7)

Then, from the above coordinate transformations and mappings, the mapping between the corresponding projection resolution coordinates and the RGBD depth sensor depth image coordinates can be obtained, as shown in formula (8),

(X_p, Y_p) = M_dp^ID(u_d^ID, v_d^ID) (8)

where (u_d^ID, v_d^ID) are the pixel coordinates of the depth image of the RGBD depth sensor with index value ID, and (X_p, Y_p) are the projection resolution coordinates to which those pixel coordinates are mapped.

Preferably, the prop identification image separating the props from the desktop environment is computed as shown in formula (9),

I^ID(u, v) = 255 if T_min ≤ B^ID(u, v) − D^ID(u, v) ≤ T_max, otherwise 0 (9)

where I^ID is the prop identification image with index value ID, D^ID is the real-time depth image data, B^ID is the depth environment background image data, and [T_min, T_max] is the value range of the difference between the depth environment background image data and the real-time depth image data, with 0 < T_min < T_max.
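The fringe generation and decoding behind formulas (1)–(5) can be sketched as follows; the pattern count, stripe orientation and function names are illustrative assumptions rather than the patent's exact images, and only the horizontal (column) sequence is shown:

```python
import numpy as np

def gray_code_patterns(width: int):
    """One binary stripe row per bit of the Gray-coded column index;
    ceil(log2(width)) frames identify every projector column uniquely."""
    n_bits = max(1, int(np.ceil(np.log2(width))))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary -> Gray code of each column index
    # patterns[k][x] is bit k (MSB first) of the Gray code of column x
    return [((gray >> (n_bits - 1 - k)) & 1).astype(np.uint8) for k in range(n_bits)]

def decode_column(bits):
    """Recover the projector column from the observed bright/dark sequence
    (MSB first): reassemble the Gray code, then convert Gray -> binary."""
    g = 0
    for b in bits:
        g = (g << 1) | int(b)
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g
```

Capturing these frames with the color camera and decoding each pixel's bit sequence yields exactly the per-pixel mapping (u_c, v_c) → (x_g, y_g) that formula (5) denotes symbolically; a second, vertically striped sequence recovers the row coordinate the same way.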
the invention has the beneficial effects that:
the implementation method of the automatically calibrated desktop prop interaction system solves the problems that, in a large scene covered by multiple RGBD depth sensors, position calibration of desktop prop interactive projection is cumbersome and the output position deviates, and it provides the desktop prop interaction system with a solution that enhances the interactive experience based on prop category information.
Description of the drawings:
FIG. 1 is a schematic diagram of the Gray code structured light of the present invention;
FIG. 2 is a flow chart of the identification software of an automatically calibrated table prop interaction system of a multiple RGBD depth sensor;
FIG. 3 is a schematic structural diagram of an automatic calibration desktop prop interaction system with multiple RGBD depth sensors.
Detailed Description
Example 1
In this embodiment, a system composed of 2 projectors and 2 RGBD depth sensors is taken as an example.
The invention discloses an implementation method for an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors. The RGBD depth sensors and projectors are connected by data cables to the corresponding USB interfaces and graphics card video output interfaces of the computer host. The RGBD depth sensors are mounted on angle- and height-adjustable hangers above the desktop so that their heights stay consistent and parallel to the long edge of the desktop, and the display pictures of the projectors are blended by third-party projection fusion software. Interactive position calibration of the multiple RGBD depth sensors is realized with an automatic color image calibration algorithm based on Gray code structured light together with the coordinate mapping between the depth image and the color image. After a depth environment background image between the RGBD depth sensors and the desktop is collected, the difference image between the real-time depth image and the depth environment background image is computed to locate the coordinates of the differently shaped props in the fused depth image and in projection desktop coordinates, and to recognize the length, width and height of each prop. The recognition data of the differently shaped props on the desktop are compared with the data of a prop information base to determine the shape category of each prop. Finally, the prop category and desktop coordinates are sent to interactive projection effect software through the OSC communication protocol, enhancing the experienced interactive projection effect.
To realize the above concept, the invention designs an implementation method for an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors, comprising the following steps:
s1, connecting the projectors with the display card output interface of the computer host through video transmission cables, projecting pictures to the desktop space in a hoisting mode, and fusing the projected pictures through third-party projection fusion software;
s2, connecting a plurality of RGBD depth sensors with a USB interface of a computer host through data cables, and installing the RGBD depth sensors on angle-adjustable and height-adjustable hangers above a desktop, wherein the angle-adjustable and height-adjustable hangers correspond to the position of a projector, so that the heights of the RGBD depth sensors are consistent and parallel to the long edges of the desktop;
s3, acquiring depth data and color data of each RGBD depth sensor in real time by interactive projection recognition software, and splicing depth images according to the arrangement positions of the depth data and the color data;
s4, under the condition that no prop is placed on the desktop, the interactive projection recognition software collects a background image of the depth environment between a plurality of RGBD depth sensors and the desktop, and stores the background image;
s5, projecting multi-frame fringe images generated based on a Gray code structured light algorithm to corresponding areas of a desktop according to the positions of the RGBD depth sensors, and collecting data of each frame fringe image by using a color camera of the multi-frame fringe image to obtain a coordinate mapping relation between a fringe projection area and a color image;
s6, acquiring a coordinate mapping relation between the fringe projection area and the corresponding depth image according to the coordinate mapping relation between the depth image and the color image in each RGBD depth sensor;
s7, acquiring a prop identification image with props separated from a desktop environment according to the difference value between the background image and the real-time depth image of the depth environment of the multiple RGBD depth sensors and the identification range threshold parameter;
s8, acquiring the coordinates of the center point of the prop, the length, the width, the convex defects and other geometric information of the prop identification image based on the morphological transformation, the contour search and the convex defect detection algorithm of the OpenCV vision library, and acquiring the height value of the prop by combining the depth information;
s9, comparing the length, width, height, convex defect and other geometric information of the prop with the data of the prop information base to obtain the type of the prop;
s10, acquiring the corresponding item projection coordinate position according to the coordinate of the item center point in the depth image and the mapping relation between the depth image coordinate and the projection coordinate;
and S11, sending information such as the type parameters and the projection coordinates of the props to interactive projection effect software by adopting an OSC protocol, and realizing accurate interactive projection effect based on the types of the props.
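Step S11's OSC transmission can be illustrated by hand-packing a minimal OSC 1.0 message; the address `/prop` and the argument layout are our assumptions, and a real system might instead use an OSC library:

```python
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Build a big-endian OSC 1.0 message: padded address, padded type-tag
    string (',' + one tag per argument), then the encoded arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        else:
            tags += "s"
            payload += _pad(str(a).encode())
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# e.g. hypothetical prop category id 3 at projection coordinates (512.0, 384.0)
packet = osc_message("/prop", 3, 512.0, 384.0)
```

The resulting datagram would be sent over UDP to the interactive projection effect software, which dispatches on the `/prop` address pattern.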
In this embodiment, the 2 projectors are hoisted in the horizontal arrangement direction.
In this embodiment, the 2 RGBD depth sensors are Astra sensors manufactured by Orbbec, with a depth image resolution of 640 × 480 @ 30 FPS and a color image resolution of 640 × 480 @ 30 FPS.
The invention can be widely applied to various desktop prop interactive projection scenes.
The interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and stitches the depth images according to the arrangement positions. The stitching method is as follows: from the numbers of cameras arranged in the horizontal and vertical directions (n_h, n_v) and the image resolution (w, h), a stitched image with horizontal and vertical resolution (n_h · w, n_v · h) is initialized; then each sensor's image is set into the region of interest (ROI) of the stitched image at the position (i · w, j · h) given by that sensor's horizontal and vertical arrangement position (i, j), where 0 ≤ i < n_h and 0 ≤ j < n_v, completing the stitching of all images.
The depth environment background image is obtained as follows: with no props placed on the desktop, each RGBD depth sensor collects multiple frames of depth image data, and the median-value depth environment background image is obtained by taking, at each coordinate, the median of the depth values over the collected frames, which improves the stability of the depth environment background image.
(b) the width and height of the gray code structured light image corresponding to each RGBD depth sensor is (1) the multi-frame stripe image generated based on the gray code structured light algorithm comprises a plurality of frame stripe coded images which are arranged in the horizontal direction and the vertical direction and are alternately arranged in a bright-dark equidistant mode
Figure DEST_PATH_IMAGE024A
) Comprises the following steps:
Figure DEST_PATH_IMAGE026A
(1)
Figure DEST_PATH_IMAGE028A
(2)
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE030A
in order to project the horizontal resolution of the screen,
Figure DEST_PATH_IMAGE032A
is the projection screen vertical resolution. Its multi-frame stripe coded imageDisplay position on projection screen (
Figure DEST_PATH_IMAGE034A
Figure DEST_PATH_IMAGE036A
) Comprises the following steps:
Figure DEST_PATH_IMAGE038A
(3)
Figure DEST_PATH_IMAGE040A
(4)
the structured light image is refreshed frame by frame at a frequency of every second and displayed at the corresponding position of the projection picture, the color camera corresponding to the RGBD depth sensor collects each frame of structured light image and is based on Gray code structured light projection and an image coordinate mapping algorithm, as shown in formula (5),
Figure DEST_PATH_IMAGE042A
(5)
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE044A
is the coordinates of the pixels of the color image,
Figure DEST_PATH_IMAGE046A
and projecting the image coordinates of the area for Gray code structured light. According to the coordinate alignment mapping relationship between the depth image and the color image of the RGBD depth sensor, as shown in equation (6),
Figure DEST_PATH_IMAGE048A
(6)
the mapping relationship between the coordinates of the depth image and the coordinates of the image of the light projection area of the gray code structure can be obtained, as shown in formula (7),
Figure DEST_PATH_IMAGE050A
(7)
Then, by composing the above coordinate transformations, the mapping between the projection resolution coordinates and the RGBD depth sensor depth image coordinates can be obtained, as shown in equation (8) (equation image not reproduced),
where the input is the pixel coordinate of the depth image of the RGBD depth sensor with the given index value, and the output is the projection resolution coordinate to which that pixel coordinate is mapped.
The prop identification image, in which the props are separated from the desktop environment, is calculated as shown in equation (9) (equation image not reproduced),
where the quantities are, respectively, the prop identification image with index value ID, the real-time depth image data, and the lower and upper bounds of the admissible range of the difference between the depth environment background image data and the real-time depth image data.
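The thresholded difference described by equation (9) can be sketched as follows. Since the equation image itself is not reproduced in the text, the function name, the NumPy formulation, and the treatment of zero (invalid) depth readings are illustrative assumptions:

```python
import numpy as np

def prop_mask(background: np.ndarray, depth: np.ndarray,
              t_min: float, t_max: float) -> np.ndarray:
    """Sketch of equation (9): mark pixels where the background-minus-depth
    difference falls inside the identification range [t_min, t_max]."""
    diff = background.astype(np.float32) - depth.astype(np.float32)
    # zero readings are treated as invalid depth samples (assumption)
    valid = (depth > 0) & (background > 0)
    return np.where(valid & (diff >= t_min) & (diff <= t_max), 255, 0).astype(np.uint8)
```

A pixel where a prop raises the surface by, say, 50 mm relative to the background would fall inside a range such as [20, 300] mm and be marked 255.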
example 2
When the data interface of the RGBD depth sensor is USB3.0 and the computer host only supports connection of one RGBD depth sensor, a multi-host-based local area network can be built to realize data transmission and processing tasks of the multiple RGBD depth sensors.
The system comprises a plurality of RGBD depth sensors, a plurality of projectors, a number of computer hosts equal to the number of RGBD depth sensors, a rectangular desktop, and props of different shapes. The projectors are connected through video transmission cables to the video output interfaces of the graphics card of the projection computer host, and the projected pictures are blended by third-party projection fusion software. The RGBD depth sensors are connected to the USB 3.0 interfaces of the projection computer host and of the node computer hosts, and are mounted on angle-adjustable and height-adjustable hangers above the desktop, so that their heights are kept consistent and their arrangement is parallel to the long edges of the desktop. The node computer hosts send the original depth data, color data, and depth-to-color coordinate mapping data to the projection computer host, so that every RGBD depth sensor in the local area network is virtualized as a local device of the projection computer host. The interactive position calibration of the multiple RGBD depth sensors is realized by a Gray code structured light based color image automatic calibration algorithm together with the depth image to color image coordinate mapping relation.
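The node-to-projection-host transfer described above can be sketched as a minimal length-prefixed frame protocol over TCP. The header layout (width, height, sensor id) and the function names are assumptions for illustration, not the patent's actual wire format:

```python
import socket
import struct
import numpy as np

# assumed header: 16-bit width, 16-bit height, 8-bit sensor id (big-endian)
FRAME_HEADER = struct.Struct(">HHB")

def send_depth_frame(sock: socket.socket, sensor_id: int, depth: np.ndarray) -> None:
    """Send one 16-bit depth frame preceded by a small fixed-size header."""
    h, w = depth.shape
    payload = depth.astype(">u2").tobytes()  # big-endian 16-bit depth values
    sock.sendall(FRAME_HEADER.pack(w, h, sensor_id) + payload)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, since TCP recv may return partial chunks."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_depth_frame(sock: socket.socket):
    """Blocking receive of one frame produced by send_depth_frame."""
    w, h, sensor_id = FRAME_HEADER.unpack(_recv_exact(sock, FRAME_HEADER.size))
    data = _recv_exact(sock, w * h * 2)
    return sensor_id, np.frombuffer(data, dtype=">u2").reshape(h, w)
```

In a deployment, each node host would call `send_depth_frame` in a loop on a client socket, and the projection host would run one receiving thread per connected node.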
After the depth environment background image between the RGBD depth sensors and the desktop has been collected, the difference image between the real-time depth image and the background image is computed to locate the props of different shapes on the desktop, both in the fused depth image and in projection desktop coordinates, and to identify their length, width, and height values. The identification data of the props are then compared with the data of the prop information base to determine the shape category of each prop. Finally, the prop types and desktop coordinates are sent to the interactive projection effect software through the OSC communication protocol, enhancing the interactive projection experience.
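Sending the prop type and coordinates over OSC, as described above, can be sketched with a small encoder following the OSC 1.0 message layout (null-padded address string, null-padded type-tag string, big-endian arguments). The `/prop` address and the argument order are illustrative assumptions, not specified in the patent:

```python
import socket
import struct

def _osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def build_osc_message(address: str, *args) -> bytes:
    """Encode an OSC 1.0 message carrying int32 and float32 arguments."""
    msg = _osc_pad(address.encode("ascii"))
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        else:
            raise TypeError("only int/float arguments supported in this sketch")
    return msg + _osc_pad(tags.encode("ascii")) + payload

def send_prop(sock: socket.socket, host: str, port: int,
              prop_type: int, u: int, v: int) -> None:
    """Send one prop record (type id plus projection coordinates) via UDP."""
    sock.sendto(build_osc_message("/prop", prop_type, u, v), (host, port))
```

OSC libraries such as python-osc provide the same functionality; the hand-rolled encoder is shown only to make the wire format explicit.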
To realize the above concept, the invention further provides an alternative implementation method of the automatic calibration desktop prop interaction system of multiple RGBD depth sensors, comprising the following steps:
s1, connecting the projectors with a display card of a projection computer host through video transmission cables, projecting pictures to a desktop space in a hoisting mode, and fusing the projected pictures through third-party projection fusion software;
s2, connecting a plurality of RGBD depth sensors with a projection computer host and a plurality of node computer hosts through data cables, and installing the RGBD depth sensors on angle-adjustable and height-adjustable hangers above a desktop, wherein the angle-adjustable and height-adjustable hangers correspond to the positions of projectors, so that the heights of the RGBD depth sensors are consistent and parallel to the long edges of the desktop;
S3, collecting depth data, color data, and depth-to-color coordinate mapping data of the RGBD depth cameras by the plurality of node computer hosts, and transmitting the corresponding data to the projection computer host in real time based on the TCP/IP protocol;
S4, acquiring depth data and color data of a local RGBD depth sensor in real time by interactive projection recognition software of the projection computer host, receiving the depth data and the color data of the RGBD depth sensor of the node computer host in the local area network and depth and color coordinate mapping data, and simultaneously performing depth image splicing according to the arrangement position of the RGBD depth sensor;
s5, under the condition that no prop is placed on the desktop, the interactive projection recognition software collects a background image of the depth environment between a plurality of RGBD depth sensors and the desktop, and stores the background image;
s6, projecting the multi-frame fringe images generated by the Gray code structured light algorithm to the corresponding areas of the desktop according to the positions of the RGBD depth sensors, and capturing each frame of the fringe images with the color camera of the corresponding RGBD depth sensor to obtain the coordinate mapping relation between the projection area and the color image;
s7, acquiring a coordinate mapping relation between the projection area and the depth image according to the coordinate mapping relation between the depth image and the color image in each RGBD depth sensor;
s8, acquiring a prop identification image with props separated from a desktop environment according to the difference value between the background image and the real-time depth image of the depth environment of the multiple RGBD depth sensors and the identification range threshold parameter;
s9, acquiring the coordinates of the center point of the prop, the length, the width, the convex defects and other geometric information of the prop identification image based on the morphological transformation, the contour search and the convex defect detection algorithm of the OpenCV vision library, and acquiring the height value of the prop by combining the depth information;
s10, comparing the length, width, height, convex defect and other geometric information of the prop with the data of the prop information base to obtain the type of the prop;
s11, acquiring the corresponding item projection coordinate position according to the coordinate of the item center point in the depth image and the mapping relation between the depth image coordinate and the projection coordinate;
and S12, sending information such as the type parameters and the projection coordinates of the props to interactive projection effect software by adopting an OSC protocol, and realizing accurate interactive projection effect based on the types of the props.
The interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and performs depth image stitching according to the arrangement positions. The stitching method is as follows: (a) from the numbers of cameras arranged in the horizontal and vertical directions and the per-camera image resolution, a stitched image is initialized whose horizontal and vertical resolutions equal the per-camera resolution multiplied by the respective camera counts; each sensor's image is then written into the region of interest (ROI) of the stitched image determined by that sensor's horizontal and vertical arrangement position, completing the stitching of all images (symbol images not reproduced in this text).
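The ROI-based stitching described in (a) can be sketched as follows, assuming a row-major sensor grid and equal per-sensor resolutions (both assumptions, since the symbol images are not reproduced):

```python
import numpy as np

def stitch_depth(images, n_cols: int, n_rows: int, w: int, h: int) -> np.ndarray:
    """Place each sensor's (h, w) depth image into its ROI of the stitched
    canvas of size (n_rows*h, n_cols*w); images are in row-major grid order."""
    canvas = np.zeros((n_rows * h, n_cols * w), dtype=images[0].dtype)
    for idx, img in enumerate(images):
        r, c = divmod(idx, n_cols)                           # grid position
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = img   # ROI assignment
    return canvas
```

For sensors whose fields of view overlap, the ROIs would instead be offset by calibrated positions; the non-overlapping grid shown here is the simplest case.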
The depth environment background image is obtained by having each RGBD depth sensor collect multiple frames of depth image data while no prop is placed on the desktop, and applying a per-coordinate depth median algorithm over the multiple frames to obtain a median depth environment background image, thereby improving the stability of the background image.
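A sketch of the per-coordinate median background described above; treating zero depth readings as invalid samples is an added assumption:

```python
import numpy as np

def depth_background(frames) -> np.ndarray:
    """Per-pixel median over multiple depth frames of the prop-free desktop."""
    stack = np.stack(frames, axis=0).astype(np.float32)
    stack[stack == 0] = np.nan          # exclude invalid (zero) readings
    bg = np.nanmedian(stack, axis=0)    # robust to transient depth noise
    return np.nan_to_num(bg, nan=0.0)   # pixels never observed stay 0
```

The median makes the background robust to single-frame dropouts, which a per-pixel mean would smear into the model.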
(b) The multi-frame stripe images generated by the Gray code structured light algorithm consist of stripe-coded frames whose bright and dark bands alternate at equal intervals, arranged in both the horizontal and vertical directions. The width and height of the Gray code structured light image corresponding to each RGBD depth sensor are given by equations (1) and (2) (equation images not reproduced in this text),
where the two quantities in equations (1) and (2) are expressed in terms of the horizontal resolution and the vertical resolution of the projection screen, respectively. The display position of each stripe-coded frame on the projection screen is given by equations (3) and (4) (equation images not reproduced),
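A sketch of Gray code stripe generation and per-pixel decoding consistent with the description above (alternating bright and dark bands, one frame per code bit). The frame count, the vertical-stripe orientation, and the function names are illustrative assumptions; a full system would project horizontal stripes as well:

```python
import math
import numpy as np

def gray_code_patterns(width: int, height: int):
    """Vertical-stripe Gray code frames covering `width` projector columns."""
    n_bits = math.ceil(math.log2(width))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                # binary -> Gray code per column
    frames = []
    for bit in range(n_bits - 1, -1, -1):    # most significant stripe first
        row = ((gray >> bit) & 1).astype(np.uint8) * 255
        frames.append(np.tile(row, (height, 1)))
    return frames

def decode_column(bits) -> int:
    """Recover the projector column from a pixel's bright/dark bit sequence."""
    gray = 0
    for b in bits:
        gray = (gray << 1) | b
    col = 0
    while gray:                              # Gray -> binary conversion
        col ^= gray
        gray >>= 1
    return col
```

Capturing each projected frame with the color camera and thresholding each pixel against its inverse (or a mid-level) yields the bit sequence that `decode_column` maps back to a projector coordinate.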
The structured light images are refreshed frame by frame at one-second intervals and displayed at the corresponding position of the projection picture. The color camera of the corresponding RGBD depth sensor captures each structured light frame, and the Gray code structured light projection-to-image coordinate mapping is obtained as shown in equation (5) (equation image not reproduced),
where the mapped quantities are the color image pixel coordinates and the image coordinates of the Gray code structured light projection area, respectively. According to the coordinate alignment mapping between the depth image and the color image of the RGBD depth sensor, shown in equation (6) (equation image not reproduced),
the mapping between the depth image coordinates and the image coordinates of the Gray code structured light projection area can be obtained, as shown in equation (7) (equation image not reproduced).
Then, by composing the above coordinate transformations, the mapping between the projection resolution coordinates and the RGBD depth sensor depth image coordinates can be obtained, as shown in equation (8) (equation image not reproduced),
where the input is the pixel coordinate of the depth image of the RGBD depth sensor with the given index value, and the output is the projection resolution coordinate to which that pixel coordinate is mapped.
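The composition behind equation (8), from depth pixel to aligned color pixel to decoded projector coordinate, can be sketched as a lookup-table composition. The array layouts (last axis holding x then y, -1 marking unseen pixels) are assumptions:

```python
import numpy as np

def depth_to_projection(depth_to_color: np.ndarray,
                        color_to_projection: np.ndarray) -> np.ndarray:
    """Compose two lookup tables into a depth-pixel -> projector-coordinate map.

    depth_to_color: (H_d, W_d, 2) array giving, for each depth pixel, the
        aligned color pixel (x, y) from the sensor's registration (eq. (6)).
    color_to_projection: (H_c, W_c, 2) array giving, for each color pixel,
        the projector coordinate decoded from the Gray code frames (eq. (5)).
    """
    cx = depth_to_color[..., 0]
    cy = depth_to_color[..., 1]
    # fancy indexing evaluates the whole table in one vectorized lookup
    return color_to_projection[cy, cx]
```

In practice unseen color pixels (e.g. marked -1 by the decoder) would be masked before use; that bookkeeping is omitted here.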
The prop identification image, in which the props are separated from the desktop environment, is calculated as shown in equation (9) (equation image not reproduced),
where the quantities are, respectively, the prop identification image with index value ID, the real-time depth image data, and the lower and upper bounds of the admissible range of the difference between the depth environment background image data and the real-time depth image data.
while the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An automatic calibration desktop prop interaction system with multiple RGBD depth sensors, characterized by comprising a plurality of RGBD depth sensors, a plurality of projectors, a computer host, a rectangular desktop, and props of different shapes, wherein the RGBD depth sensors and the projectors are connected through data cables to the USB interfaces and the graphics-card video output interfaces of the corresponding computer host; the RGBD depth sensors are mounted on angle-adjustable and height-adjustable hangers above the desktop so that their heights are kept consistent and their arrangement is parallel to the long edges of the desktop, and the display pictures of the projectors are blended by third-party projection fusion software; the interactive position calibration of the RGBD depth sensors is realized by a Gray code structured light based color image automatic calibration algorithm together with the depth image to color image coordinate mapping relation; after the depth environment background image between the RGBD depth sensors and the desktop is collected, the difference image between the real-time depth image and the background image is computed to locate the props of different shapes in the fused depth image and in projection desktop coordinates and to identify their length, width, and height values; the identification data of the props are compared with the data of the prop information base to determine the shape category of each prop; and finally the prop types and desktop coordinates are sent to interactive projection effect software through an OSC communication protocol, enhancing the interactive projection experience.
2. The system of claim 1, wherein the number of projectors and RGBD depth sensors is limited to a maximum number supported by a hardware configuration and an operating system of a computer host.
3. The automatic calibration desktop prop interaction system of the multiple RGBD depth sensors of claim 1, wherein a server model and a client model based on a TCP/IP communication protocol are adopted to virtualize the RGBD depth sensors connected to the node computer hosts in the LAN as local devices of a projection computer host, so as to implement a scheme for performing large-scene interaction recognition based on the multiple RGBD depth sensors.
4. An implementation method of the automatic calibration desktop prop interaction system of multiple RGBD depth sensors according to claim 1, comprising the following steps:
s1, connecting a plurality of projectors with a display card output interface of a computer host through video transmission cables, projecting pictures to a desktop space in a hoisting mode, and fusing the projected pictures through third-party projection fusion software;
s2, connecting a plurality of RGBD depth sensors with a USB interface of a computer host through data cables, and installing the RGBD depth sensors on angle-adjustable and height-adjustable hangers above a desktop, wherein the angle-adjustable and height-adjustable hangers correspond to the position of a projector, so that the heights of the RGBD depth sensors are consistent and parallel to the long edges of the desktop;
s3, acquiring depth data and color data of each RGBD depth sensor in real time by interactive projection recognition software, and splicing depth images according to the arrangement positions of the depth data and the color data;
s4, under the condition that no prop is placed on the desktop, the interactive projection recognition software collects a plurality of depth environment background images between the RGBD depth sensor and the desktop, and stores the background images;
s5, projecting the multi-frame fringe images generated by the Gray code structured light algorithm to the corresponding areas of the desktop according to the positions of the RGBD depth sensors, and capturing each frame of the fringe images with the color camera of the corresponding RGBD depth sensor to obtain the coordinate mapping relation between the fringe projection area and the color image;
s6, acquiring a coordinate mapping relation between the fringe projection area and the corresponding depth image according to the coordinate mapping relation between the depth image and the color image in each RGBD depth sensor;
s7, acquiring a prop identification image with props separated from a desktop environment according to the difference value between the background image and the real-time depth image of the depth environment of the multiple RGBD depth sensors and the identification range threshold parameter;
s8, acquiring the coordinates of the center point of the prop, the length, the width, the convex defects and other geometric information of the prop identification image based on the morphological transformation, the contour search and the convex defect detection algorithm of the OpenCV vision library, and acquiring the height value of the prop by combining the depth information;
s9, comparing the length, width, height, convex defect and other geometric information of the prop with the data of the prop information base to obtain the type of the prop;
s10, acquiring the corresponding item projection coordinate position according to the coordinate of the item center point in the depth image and the mapping relation between the depth image coordinate and the projection coordinate;
and S11, sending information such as the type parameters and the projection coordinates of the props to interactive projection effect software by adopting an OSC protocol, and realizing accurate interactive projection effect based on the types of the props.
5. The method as claimed in claim 4, wherein the interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and performs depth image stitching according to their arrangement positions: from the numbers of cameras arranged in the horizontal and vertical directions and the per-camera image resolution, a stitched image is initialized whose horizontal and vertical resolutions equal the per-camera resolution multiplied by the respective camera counts, and each sensor's image is written into the region of interest (ROI) of the stitched image determined by that sensor's horizontal and vertical arrangement position, thereby completing the stitching of all images (symbol images not reproduced in this text).
6. The method of claim 4, wherein the depth environment background image is obtained by having each RGBD depth sensor collect multiple frames of depth image data while no prop is placed on the desktop and applying a per-coordinate depth median algorithm over the multiple frames, yielding a median depth environment background image and thereby improving the stability of the background image.
7. The method of claim 4, wherein the multi-frame stripe images generated by the Gray code structured light algorithm comprise stripe-coded frames whose bright and dark bands alternate at equal intervals, arranged in both the horizontal and vertical directions, and the width and height of the Gray code structured light image corresponding to each RGBD depth sensor are given by equations (1) and (2) (equation images not reproduced in this text), where the two quantities are expressed in terms of the horizontal resolution and the vertical resolution of the projection screen, respectively.
8. The method of claim 4, wherein the display position of each stripe-coded frame on the projection screen is given by equations (3) and (4) (equation images not reproduced); the structured light images are refreshed frame by frame at one-second intervals and displayed at the corresponding position of the projection picture; the color camera of the corresponding RGBD depth sensor captures each structured light frame; and the Gray code structured light projection-to-image coordinate mapping shown in equation (5) (equation image not reproduced) relates the color image pixel coordinates to the image coordinates of the Gray code structured light projection area.
9. The method of claim 4, wherein, according to the coordinate alignment mapping between the depth image and the color image of the RGBD depth sensor shown in equation (6) (equation image not reproduced), the mapping between the depth image coordinates and the image coordinates of the Gray code structured light projection area is obtained as shown in equation (7); then, by composing the above coordinate transformations, the mapping between the projection resolution coordinates and the RGBD depth sensor depth image coordinates is obtained as shown in equation (8), whose input is the pixel coordinate of the depth image of the RGBD depth sensor with the given index value and whose output is the projection resolution coordinate to which that pixel coordinate is mapped.
10. The implementation method of the automatic calibration desktop prop interaction system of multiple RGBD depth sensors according to claim 4, wherein the prop identification image of the props separated from the desktop environment is calculated as shown in equation (9) (equation image not reproduced), where the quantities are, respectively, the prop identification image with index value ID, the real-time depth image data, and the lower and upper bounds of the admissible range of the difference between the depth environment background image data and the real-time depth image data.
CN202011254165.3A 2020-11-11 2020-11-11 Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors Active CN112433641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011254165.3A CN112433641B (en) 2020-11-11 2020-11-11 Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011254165.3A CN112433641B (en) 2020-11-11 2020-11-11 Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors

Publications (2)

Publication Number Publication Date
CN112433641A true CN112433641A (en) 2021-03-02
CN112433641B CN112433641B (en) 2022-06-17

Family

ID=74700416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011254165.3A Active CN112433641B (en) 2020-11-11 2020-11-11 Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors

Country Status (1)

Country Link
CN (1) CN112433641B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409259A (en) * 2021-06-09 2021-09-17 电子科技大学 Image characteristic information-based precision workpiece stage inclination angle detection method
CN114760450A (en) * 2022-04-12 2022-07-15 昆明云岸数字科技有限公司 Multi-element desktop projection interaction implementation method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000350230A (en) * 1999-06-07 2000-12-15 Olympus Optical Co Ltd Image projection system
JP2008061160A (en) * 2006-09-04 2008-03-13 Seiko Epson Corp Multiprojection system
CN103455141A (en) * 2013-08-15 2013-12-18 无锡触角科技有限公司 Interactive projection system and correction method of depth sensor and projector of interactive projection system
US20140177909A1 (en) * 2012-12-24 2014-06-26 Industrial Technology Research Institute Three-dimensional interactive device and operation method thereof
CN107272910A (en) * 2017-07-24 2017-10-20 武汉秀宝软件有限公司 A kind of projection interactive method and system based on rock-climbing project
CN107517366A (en) * 2017-08-23 2017-12-26 上海喵呜信息科技有限公司 Projector's image information method for automatic measurement based on RGBD
CN107610236A (en) * 2017-08-21 2018-01-19 武汉秀宝软件有限公司 A kind of exchange method and system based on figure identification
WO2019156731A1 (en) * 2018-02-09 2019-08-15 Bayerische Motoren Werke Aktiengesellschaft Methods for object detection in a scene represented by depth data and image data
CN110880161A (en) * 2019-11-21 2020-03-13 大庆思特传媒科技有限公司 Depth image splicing and fusing method and system for multi-host multi-depth camera
CN110942092A (en) * 2019-11-21 2020-03-31 大庆思特传媒科技有限公司 Graphic image recognition method and recognition system


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409259A (en) * 2021-06-09 2021-09-17 电子科技大学 Image characteristic information-based precision workpiece stage inclination angle detection method
CN113409259B (en) * 2021-06-09 2022-04-19 电子科技大学 Image characteristic information-based precision workpiece stage inclination angle detection method
CN114760450A (en) * 2022-04-12 2022-07-15 昆明云岸数字科技有限公司 Multi-element desktop projection interaction implementation method and device

Also Published As

Publication number Publication date
CN112433641B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
US10194135B2 (en) Three-dimensional depth perception apparatus and method
US6781618B2 (en) Hand-held 3D vision system
US11816829B1 (en) Collaborative disparity decomposition
US6917702B2 (en) Calibration of multiple cameras for a turntable-based 3D scanner
CN101697233B (en) Structured light-based three-dimensional object surface reconstruction method
US7456842B2 (en) Color edge based system and method for determination of 3D surface topology
US20030066949A1 (en) Method and apparatus for scanning three-dimensional objects
US20030071194A1 (en) Method and apparatus for scanning three-dimensional objects
US20070195160A1 (en) Angled axis machine vision system and method
TWI696906B (en) Method for processing a floor
CN110390719A (en) Based on flight time point cloud reconstructing apparatus
CN110827392B (en) Monocular image three-dimensional reconstruction method, system and device
KR20110059506A (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
CN112433641B (en) Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors
WO2022088881A1 (en) Method, apparatus and system for generating a three-dimensional model of a scene
JP7092615B2 (en) Shadow detector, shadow detection method, shadow detection program, learning device, learning method, and learning program
CN111915723A (en) Indoor three-dimensional panorama construction method and system
CN109064533B (en) 3D roaming method and system
CN110880161B (en) Depth image stitching and fusion method and system for multiple hosts and multiple depth cameras
CN108645353B (en) Three-dimensional data acquisition system and method based on multi-frame random binary coding light field
CN111914790B (en) Real-time human body rotation angle identification method based on double cameras under different scenes
Agouris et al. Automation and digital photogrammetric workstations
CN112433640B (en) Automatic calibration interactive projection system of multiple image sensors and implementation method thereof
Chen et al. Integration of multiple views for a 3-d indoor surveillance system
JPH08136222A (en) Method and device for three-dimensional measurement

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant