CN112433641A - Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors - Google Patents
- Publication number
- CN112433641A (application number CN202011254165.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- depth
- desktop
- rgbd
- projection
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0425—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/382—Information transfer, e.g. on bus using universal interface adapter
- G06F13/385—Information transfer, e.g. on bus using universal interface adapter for adaptation of a particular data processing system to different peripheral devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
-
- G06T5/90—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses a desktop prop interaction system with multiple automatically calibrated RGBD depth sensors, in the technical field of interactive projection recognition. Interactive position calibration of the multiple RGBD depth sensors is achieved with an automatic color-image calibration algorithm based on Gray code structured light, together with the coordinate mapping between each sensor's depth and color images. After a depth environment background image between the RGBD depth sensors and the desktop is captured, the difference between the real-time depth image and that background image is computed to locate the coordinates of differently shaped props both in the fused depth image and in projection desktop coordinates. The measurements of the props on the desktop are compared against a prop information base to determine each prop's shape category. Finally, the prop category and desktop coordinates are sent to the interactive projection effect software over the OSC communication protocol, enhancing the interactive projection effect.
Description
Technical field:
The invention relates to the technical field of interactive projection recognition, and in particular to a method for implementing an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors.
Background art:
OpenCV (Open Source Computer Vision Library) is a cross-platform computer vision library released under the BSD open-source license. It implements many general-purpose algorithms for image processing and computer vision, including morphological transformation, thresholding, contour finding, and Gray code structured light. The binary Gray code is an unweighted, reflective, cyclic code in which adjacent code words differ in a single bit; this single-step property eliminates the large transient errors that can occur when several bits change at once, making it a reliable, error-minimizing encoding that is widely used in measurement technology.
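The single-step property described above can be sketched in a few lines of Python (an illustrative aside, not part of the patent text): the reflected Gray code of n is n XOR (n >> 1), and decoding is a cumulative XOR of the shifted value.

```python
def binary_to_gray(n: int) -> int:
    """Convert a binary-coded integer to its reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code by cumulative XOR of the shifted value."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [binary_to_gray(i) for i in range(8)]
# consecutive Gray codes differ in exactly one bit (single-step property)
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(256))
```

This one-bit-per-step behavior is exactly what makes the structured light patterns robust: a pixel sampled during a stripe transition can be wrong by at most one code step.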
An RGBD depth sensor is an image sensor that adds depth measurement capability to an ordinary RGB camera. The mainstream technical approaches are currently binocular stereo, structured light, and time of flight (TOF). RGB binocular stereo computes depth by matching RGB image feature points and triangulating, so the observed scene must have good illumination and texture. The structured light approach actively projects known coded patterns, which improves feature matching and achieves high measurement accuracy and resolution at short range. The TOF approach measures depth directly from the flight time of light, giving a longer recognition distance and high measurement accuracy, but lower resolution.
In a desktop prop interaction system, real-time, accurate, and stable prop recognition and tracking give users a good interactive experience. Thanks to its rich application scenarios, the desktop prop interaction system has attracted wide attention from researchers; its core algorithms cover recognition-to-projection coordinate mapping, prop position recognition and tracking, and prop category recognition. Combining the recognition and classification results with interactive projection technology produces an interactive artistic effect that enhances the experience.
In the prior art, a single RGBD depth sensor used for interactive recognition cannot cover a large interactive projection space because of the small viewing angle of its depth recognition range. Moreover, manually calibrating prop and projection positions from the depth image is tedious to set up and leaves a certain deviation in the output position, degrading the interactive experience. To address these technical defects, this invention proposes an implementation scheme for an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors: depth sensors based on the structured light or time of flight (TOF) schemes are used, the depth data of the multiple sensors are fused to realize interactive recognition in large scenes, and an automatic color-image calibration algorithm based on Gray code structured light, combined with the depth-to-color coordinate mapping, accurately identifies the positions of interactive props.
Summary of the invention:
The invention discloses a desktop prop interaction system with multiple automatically calibrated RGBD depth sensors, composed of multiple RGBD depth sensors, multiple projectors, a host computer, a rectangular desktop, and props of different shapes. The RGBD depth sensors and projectors are connected by data cables to the host computer's USB interfaces and graphics card video output interfaces. The sensors are mounted on angle- and height-adjustable hangers above the desktop so that they sit at the same height, parallel to the long edge of the desktop, and the display pictures of the projectors are blended by third-party projection fusion software. Interactive position calibration of the sensors is achieved with an automatic color-image calibration algorithm based on Gray code structured light, together with each sensor's depth-to-color coordinate mapping. After a depth environment background image between the sensors and the desktop is captured, the difference between the real-time depth image and that background image is computed to locate the coordinates of differently shaped props in the fused depth image and in projection desktop coordinates, and to measure each prop's length, width, and height. These measurements are compared against a prop information base to determine each prop's shape category. Finally, the prop category and desktop coordinates are sent to the interactive projection effect software over the OSC communication protocol, enhancing the interactive projection effect.
Preferably, the number of projectors and RGBD depth sensors is limited only by the maximum supported by the host computer's hardware configuration and operating system.
Preferably, a server/client model over the TCP/IP communication protocol virtualizes RGBD depth sensors attached to node host computers on the local area network as local devices of the projection host computer, realizing large-scene interactive recognition with multiple RGBD depth sensors.
To realize the above concept, the invention provides a method for implementing an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors, comprising the following steps:
S1, connecting the projectors to the graphics card output interfaces of the host computer with video transmission cables, mounting them overhead to project pictures onto the desktop space, and blending the projected pictures with third-party projection fusion software;
S2, connecting the RGBD depth sensors to USB interfaces of the host computer with data cables and mounting them on angle- and height-adjustable hangers above the desktop, matching the projector positions, so that the sensors sit at the same height, parallel to the long edge of the desktop;
S3, the interactive projection recognition software collecting the depth data and color data of each RGBD depth sensor in real time and stitching the depth images according to the sensors' arrangement positions;
S4, with no props placed on the desktop, the interactive projection recognition software capturing and storing a depth environment background image between the RGBD depth sensors and the desktop;
S5, projecting the multi-frame fringe images generated by the Gray code structured light algorithm onto the desktop region corresponding to each RGBD depth sensor, and capturing each fringe frame with that sensor's color camera to obtain the coordinate mapping between the fringe projection region and the color image;
S6, obtaining the coordinate mapping between the fringe projection region and the corresponding depth image from the depth-to-color coordinate mapping inside each RGBD depth sensor;
S7, obtaining a prop identification image, in which the props are separated from the desktop environment, from the difference between the depth environment background image and the real-time depth images of the RGBD depth sensors together with a recognition-range threshold parameter;
S8, obtaining each prop's center-point coordinates, length, width, convexity defects, and other geometric information from the prop identification image with the morphological transformation, contour finding, and convexity defect detection algorithms of the OpenCV vision library, and obtaining the prop's height value from the depth information;
S9, comparing each prop's length, width, height, convexity defects, and other geometric information with the prop information base data to obtain the prop's category;
S10, obtaining the corresponding prop projection coordinate position from the prop's center-point coordinates in the depth image and the mapping between depth image coordinates and projection coordinates;
S11, sending the prop category parameters, projection coordinates, and related information to the interactive projection effect software over the OSC protocol, realizing an accurate interactive projection effect based on prop category.
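The geometry measurements of step S8 can be sketched as follows (an illustrative NumPy stand-in for the OpenCV contour and convexity-defect pipeline the patent names; the function name, the millimeter scale factor, and the single-prop, noise-free assumption are hypothetical):

```python
import numpy as np

def prop_geometry(depth_diff: np.ndarray, px_to_mm: float = 1.0):
    """Extract rough geometry (center, length, width, height) of a single
    prop from a background-minus-depth difference image.  Nonzero pixels
    are assumed to belong to the prop (hypothetical simplification:
    one prop, no noise)."""
    ys, xs = np.nonzero(depth_diff)
    if xs.size == 0:
        return None
    cx, cy = xs.mean(), ys.mean()                  # center point in pixels
    length = (xs.max() - xs.min() + 1) * px_to_mm  # horizontal bounding-box extent
    width = (ys.max() - ys.min() + 1) * px_to_mm   # vertical bounding-box extent
    height = depth_diff[ys, xs].max()              # tallest point above the desk
    return {"center": (cx, cy), "length": length, "width": width, "height": height}

# a 4x3 block of height 50 placed at rows 2..4, cols 5..8 of a 10x10 map
diff = np.zeros((10, 10))
diff[2:5, 5:9] = 50
geo = prop_geometry(diff)
assert geo["length"] == 4 and geo["width"] == 3 and geo["height"] == 50
```

A full implementation would instead use cv2.findContours and cv2.convexityDefects on a binarized mask, which also yields the convexity-defect features used in step S9.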
Preferably, the interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and stitches the depth images by arrangement position. Let the sensors be arranged in m columns and n rows, each producing images of resolution (w, h). A stitched image with horizontal and vertical resolution (m·w, n·h) is initialized, and the image of the sensor at arrangement position (i, j) is copied into the region of interest (ROI) of the stitched image whose top-left corner is (i·w, j·h), completing the stitching of all the images, where 0 ≤ i < m and 0 ≤ j < n.
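The ROI-based stitching can be sketched in NumPy (an illustrative sketch; the function name and the m-by-n nested-list frame layout are assumptions, not from the patent):

```python
import numpy as np

def stitch_depth_images(frames, m, n):
    """Stitch m*n depth images (all of resolution w x h) into one image by
    copying each frame into its ROI.  frames[j][i] is the frame of the
    sensor at horizontal position i, vertical position j."""
    h, w = frames[0][0].shape
    stitched = np.zeros((n * h, m * w), dtype=frames[0][0].dtype)
    for j in range(n):
        for i in range(m):
            stitched[j * h:(j + 1) * h, i * w:(i + 1) * w] = frames[j][i]
    return stitched

# two sensors side by side, as in the 2-sensor embodiment below
a = np.full((480, 640), 1, dtype=np.uint16)
b = np.full((480, 640), 2, dtype=np.uint16)
big = stitch_depth_images([[a, b]], m=2, n=1)
assert big.shape == (480, 1280)
assert big[0, 0] == 1 and big[0, 1279] == 2
```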
Preferably, the depth environment background image is obtained by having each RGBD depth sensor collect multiple frames of depth image data while no props are placed on the desktop, and taking the per-pixel median of the multi-frame depth data to produce a median depth environment background image, which improves the stability of the background image.
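The per-pixel median background can be sketched in NumPy (illustrative only; the function name and frame count are assumptions):

```python
import numpy as np

def median_background(depth_frames):
    """Per-pixel median over K depth frames; robust against transient
    dropouts or noise in any single frame."""
    return np.median(np.stack(depth_frames, axis=0), axis=0)

# three frames of a flat desk at depth 1000 mm
frames = [np.full((4, 4), 1000.0) for _ in range(3)]
frames[1][0, 0] = 0.0          # one frame has a dropout pixel
bg = median_background(frames)
assert bg[0, 0] == 1000.0      # the median rejects the single outlier
assert bg.shape == (4, 4)
```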
Preferably, for the Gray code structured light algorithm, the structured light image corresponding to each RGBD depth sensor has width W_g and height H_g. The generated multi-frame fringe images are Gray-code-encoded stripe patterns of equally spaced light and dark stripes arranged in the horizontal and vertical directions, and their frame count N is:
N = ⌈log₂ W_g⌉ + ⌈log₂ H_g⌉    (3)
where W_g = W_p / m and H_g = H_p / n, W_p being the horizontal resolution of the projection screen and H_p its vertical resolution. The display position (x₀, y₀) of the multi-frame stripe-coded images on the projection screen, for the sensor at arrangement position (i, j), is:
(x₀, y₀) = (i · W_g, j · H_g)    (4)
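The Gray-code stripe frames for one axis can be generated as below (an illustrative NumPy sketch; the most-significant-bit-first frame ordering and the 0/255 grayscale encoding are assumptions, not from the patent):

```python
import numpy as np

def gray_code_stripes(width, n_bits=None):
    """Generate the stripe frames for one axis.  Frame k is a binary row
    pattern in which column x shows bit k (MSB first) of the Gray code of
    x; vertical stripes are obtained the same way over the height."""
    if n_bits is None:
        n_bits = int(np.ceil(np.log2(width)))
    gray = np.arange(width) ^ (np.arange(width) >> 1)   # reflected Gray code
    return [((gray >> (n_bits - 1 - k)) & 1).astype(np.uint8) * 255
            for k in range(n_bits)]

frames = gray_code_stripes(8)
assert len(frames) == 3                 # ceil(log2(8)) frames for this axis
# decoding: reassemble the Gray code of each column from the frames
bits = np.stack([f // 255 for f in frames])
decoded = np.zeros(8, dtype=int)
for k in range(3):
    decoded = (decoded << 1) | bits[k]
assert list(decoded) == [i ^ (i >> 1) for i in range(8)]
```

Capturing each projected frame with the color camera and decoding per pixel, as above, is what yields the per-pixel projection coordinates used in the mapping below.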
The structured light images are refreshed frame by frame at a fixed frequency and displayed at the corresponding positions of the projection picture. The color camera of the corresponding RGBD depth sensor captures each frame of the structured light image, and the Gray code structured light projection-to-image coordinate mapping algorithm yields, as shown in formula (5),
(x_p, y_p) = M_pc(x_c, y_c)    (5)
where (x_c, y_c) are color image pixel coordinates and (x_p, y_p) are image coordinates of the Gray code structured light projection region. According to the coordinate alignment mapping between the depth image and the color image of the RGBD depth sensor, as shown in formula (6),
(x_c, y_c) = M_cd(x_d, y_d)    (6)
the mapping between depth image coordinates and Gray code structured light projection region image coordinates can be obtained, as shown in formula (7),
(x_p, y_p) = M_pc(M_cd(x_d, y_d)) = M_pd(x_d, y_d)    (7)
Then, according to the above coordinate transformations and mappings, the mapping between the corresponding projection resolution coordinates and the RGBD depth sensor depth image coordinates can be obtained, as shown in formula (8),
(x_s, y_s) = M_ID(x_d, y_d)    (8)
where (x_d, y_d) are the depth image pixel coordinates of the RGBD depth sensor with index value ID, and (x_s, y_s) are the projection resolution coordinates to which those pixel coordinates are mapped.
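Composing the depth-to-color and color-to-projection coordinate mappings into a single depth-to-projection lookup can be sketched with NumPy index arrays (illustrative; the (H, W, 2) lookup-table representation and the function name are assumptions):

```python
import numpy as np

def compose_maps(proj_of_color, color_of_depth):
    """Compose two lookup tables into projection coords as a function of
    depth coords.  Each map is an (H, W, 2) array of integer target
    coordinates stored as (x, y)."""
    xs = color_of_depth[..., 0]
    ys = color_of_depth[..., 1]
    return proj_of_color[ys, xs]

# toy 2x2 example: depth pixel -> color pixel -> projection pixel
color_of_depth = np.array([[[0, 0], [1, 0]],
                           [[0, 1], [1, 1]]])      # identity depth->color map
proj_of_color = np.array([[[10, 20], [11, 20]],
                          [[10, 21], [11, 21]]])   # shift into projector space
m_pd = compose_maps(proj_of_color, color_of_depth)
assert tuple(m_pd[0, 1]) == (11, 20)
assert tuple(m_pd[1, 0]) == (10, 21)
```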
Preferably, the prop identification image in which the props are separated from the desktop environment is computed as shown in formula (9),
I_ID(x, y) = D_ID(x, y), if T_min < B_ID(x, y) − D_ID(x, y) < T_max; otherwise I_ID(x, y) = 0    (9)
where I_ID is the prop identification image of the sensor with index value ID, D_ID is its real-time depth image data, B_ID is its depth environment background image data, and [T_min, T_max] is the recognition range of the difference between the depth environment background image data and the real-time depth image data, with 0 < T_min < T_max.
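The background-difference segmentation of formula (9) can be sketched as follows (illustrative threshold values, not the patent's):

```python
import numpy as np

def prop_mask(background, depth, t_min=10.0, t_max=500.0):
    """Keep real-time depth pixels whose elevation above the background
    falls inside the recognition threshold range; zero elsewhere.
    Thresholds here are illustrative defaults."""
    diff = background - depth      # props are closer to the sensor -> smaller depth
    keep = (diff > t_min) & (diff < t_max)
    return np.where(keep, depth, 0)

bg = np.full((3, 3), 1000.0)       # empty desk at 1000 mm
rt = bg.copy()
rt[1, 1] = 900.0                   # a 100 mm tall prop in the middle
mask = prop_mask(bg, rt)
assert mask[1, 1] == 900.0 and mask[0, 0] == 0.0
```

The lower threshold T_min rejects sensor noise on the bare desktop, while the upper threshold T_max rejects hands and bodies reaching far above the props.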
the invention has the beneficial effects that:
The implementation method of the automatically calibrated desktop prop interaction system solves the problems of complicated position calibration and positional deviation that arise when multiple RGBD depth sensors are used for desktop prop interactive projection in large scenes, and provides a solution for enhancing the interactive experience of the desktop prop interaction system based on prop category information.
Description of the drawings:
FIG. 1 is a schematic diagram of the Gray code structured light of the present invention;
FIG. 2 is a flow chart of the identification software of an automatically calibrated table prop interaction system of a multiple RGBD depth sensor;
FIG. 3 is a schematic structural diagram of an automatic calibration desktop prop interaction system with multiple RGBD depth sensors.
Detailed Description
Example 1
In this embodiment, a system composed of 2 projectors and 2 RGBD depth sensors is taken as an example.
The invention discloses a method for implementing a desktop prop interaction system with multiple automatically calibrated RGBD depth sensors. The RGBD depth sensors and projectors are connected by data cables to the host computer's USB interfaces and graphics card video output interfaces; the sensors are mounted on angle- and height-adjustable hangers above the desktop so that they sit at the same height, parallel to the long edge of the desktop, and the projectors' display pictures are blended by third-party projection fusion software. Interactive position calibration of the sensors is achieved with an automatic color-image calibration algorithm based on Gray code structured light together with the depth-to-color coordinate mapping. After a depth environment background image between the sensors and the desktop is captured, the difference between the real-time depth image and that background image is computed to locate the coordinates of differently shaped props in the fused depth image and in projection desktop coordinates, and to measure their length, width, and height. These measurements are compared against the prop information base to determine each prop's shape category. Finally, the prop category and desktop coordinates are sent to the interactive projection effect software over the OSC communication protocol, enhancing the interactive projection effect.
To realize the above concept, the invention designs a method for implementing an automatically calibrated desktop prop interaction system with multiple RGBD depth sensors, comprising the following steps:
S1, connecting the projectors to the graphics card output interfaces of the host computer with video transmission cables, mounting them overhead to project pictures onto the desktop space, and blending the projected pictures with third-party projection fusion software;
S2, connecting the RGBD depth sensors to USB interfaces of the host computer with data cables and mounting them on angle- and height-adjustable hangers above the desktop, matching the projector positions, so that the sensors sit at the same height, parallel to the long edge of the desktop;
S3, the interactive projection recognition software collecting the depth data and color data of each RGBD depth sensor in real time and stitching the depth images according to the sensors' arrangement positions;
S4, with no props placed on the desktop, the interactive projection recognition software capturing and storing a depth environment background image between the RGBD depth sensors and the desktop;
S5, projecting the multi-frame fringe images generated by the Gray code structured light algorithm onto the desktop region corresponding to each RGBD depth sensor, and capturing each fringe frame with that sensor's color camera to obtain the coordinate mapping between the fringe projection region and the color image;
S6, obtaining the coordinate mapping between the fringe projection region and the corresponding depth image from the depth-to-color coordinate mapping inside each RGBD depth sensor;
S7, obtaining a prop identification image, in which the props are separated from the desktop environment, from the difference between the depth environment background image and the real-time depth images of the RGBD depth sensors together with a recognition-range threshold parameter;
S8, obtaining each prop's center-point coordinates, length, width, convexity defects, and other geometric information from the prop identification image with the morphological transformation, contour finding, and convexity defect detection algorithms of the OpenCV vision library, and obtaining the prop's height value from the depth information;
S9, comparing each prop's length, width, height, convexity defects, and other geometric information with the prop information base data to obtain the prop's category;
S10, obtaining the corresponding prop projection coordinate position from the prop's center-point coordinates in the depth image and the mapping between depth image coordinates and projection coordinates;
S11, sending the prop category parameters, projection coordinates, and related information to the interactive projection effect software over the OSC protocol, realizing an accurate interactive projection effect based on prop category.
In this embodiment, the 2 projectors are hoisted in the horizontal arrangement direction.
In this embodiment, the 2 RGBD depth sensors are Astra sensors manufactured by Orbbec, with a depth image resolution of 640 × 480 @ 30 FPS and a color image resolution of 640 × 480 @ 30 FPS.
The invention can be widely applied to various desktop prop interactive projection scenes.
The interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and stitches the depth images by arrangement position. Let the sensors be arranged in m columns and n rows, each producing images of resolution (w, h). A stitched image with horizontal and vertical resolution (m·w, n·h) is initialized, and the image of the sensor at arrangement position (i, j) is copied into the region of interest (ROI) of the stitched image whose top-left corner is (i·w, j·h), completing the stitching of all the images, where 0 ≤ i < m and 0 ≤ j < n.
The depth environment background image is obtained by having each RGBD depth sensor collect multiple frames of depth image data while no props are placed on the desktop, and taking the per-pixel median of the multi-frame depth data to produce a median depth environment background image, which improves the stability of the background image.
For the Gray code structured light algorithm, the structured light image corresponding to each RGBD depth sensor has width W_g and height H_g. The generated multi-frame fringe images are Gray-code-encoded stripe patterns of equally spaced light and dark stripes arranged in the horizontal and vertical directions, and their frame count N is:
N = ⌈log₂ W_g⌉ + ⌈log₂ H_g⌉    (3)
where W_g = W_p / m and H_g = H_p / n, W_p being the horizontal resolution of the projection screen and H_p its vertical resolution. The display position (x₀, y₀) of the multi-frame stripe-coded images on the projection screen, for the sensor at arrangement position (i, j), is:
(x₀, y₀) = (i · W_g, j · H_g)    (4)
The structured light images are refreshed frame by frame at a fixed frequency and displayed at the corresponding positions of the projection picture. The color camera of the corresponding RGBD depth sensor captures each frame of the structured light image, and the Gray code structured light projection-to-image coordinate mapping algorithm yields, as shown in formula (5),
(x_p, y_p) = M_pc(x_c, y_c)    (5)
where (x_c, y_c) are color image pixel coordinates and (x_p, y_p) are image coordinates of the Gray code structured light projection region. According to the coordinate alignment mapping between the depth image and the color image of the RGBD depth sensor, as shown in formula (6),
(x_c, y_c) = M_cd(x_d, y_d)    (6)
the mapping between depth image coordinates and Gray code structured light projection region image coordinates can be obtained, as shown in formula (7),
(x_p, y_p) = M_pc(M_cd(x_d, y_d)) = M_pd(x_d, y_d)    (7)
Then, according to the above coordinate transformations and mappings, the mapping between the corresponding projection resolution coordinates and the RGBD depth sensor depth image coordinates can be obtained, as shown in formula (8),
(x_s, y_s) = M_ID(x_d, y_d)    (8)
where (x_d, y_d) are the depth image pixel coordinates of the RGBD depth sensor with index value ID, and (x_s, y_s) are the projection resolution coordinates to which those pixel coordinates are mapped.
The prop identification image in which the props are separated from the desktop environment is computed as shown in formula (9),
I_ID(x, y) = D_ID(x, y), if T_min < B_ID(x, y) − D_ID(x, y) < T_max; otherwise I_ID(x, y) = 0    (9)
where I_ID is the prop identification image of the sensor with index value ID, D_ID is its real-time depth image data, B_ID is its depth environment background image data, and [T_min, T_max] is the recognition range of the difference between the depth environment background image data and the real-time depth image data, with 0 < T_min < T_max.
example 2
When the data interface of the RGBD depth sensor is USB 3.0 and one host computer supports the connection of only one such sensor, a local area network of multiple hosts can be built to carry out the data transmission and processing tasks of the multiple RGBD depth sensors.
The system consists of multiple RGBD depth sensors, multiple projectors, as many host computers as there are sensors, a rectangular desktop, and props of different shapes. The projectors are connected to the graphics card interfaces of the projection host; the RGBD depth sensors are connected over USB 3.0 to the projection host and the node hosts, one sensor per host, and are mounted on angle- and height-adjustable hangers above the desktop so that they sit at the same height, parallel to the long edge of the desktop; the projectors' display pictures are blended by third-party projection fusion software. The node hosts send raw depth data, color data, and depth-to-color coordinate mapping data to the projection host, so that the projection host virtualizes the RGBD depth sensors on the local area network as local devices. Interactive position calibration of the sensors is then achieved with the automatic color-image calibration algorithm based on Gray code structured light together with the depth-to-color coordinate mapping.
After the depth environment background images between the multiple RGBD depth sensors and the desktop are collected, the difference images between the real-time depth images and the depth environment background images are calculated to locate the coordinates of the props of different shapes on the desktop in the fused depth image and in projection desktop coordinates, and to identify the length, width and height values of the props. The identification data of the differently shaped props on the desktop are then compared with the data of the prop information base to determine the shape categories of all the props. Finally, the prop types and desktop coordinates are sent to the interactive projection effect software through the OSC communication protocol, thereby enhancing the experiencer's interactive projection effect.
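The node hosts' real-time transmission of depth frames to the projection computer host over TCP/IP could be framed as in the following sketch; the length-prefixed wire format and the function names are assumptions for illustration, not specified by the patent:

```python
import socket
import struct

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a TCP stream, or fail if the peer closes."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-frame")
        buf += chunk
    return buf

def send_frame(sock: socket.socket, depth_bytes: bytes, width: int, height: int) -> None:
    """Length-prefixed framing so the projection host can split the TCP
    byte stream back into individual depth frames."""
    header = struct.pack(">III", width, height, len(depth_bytes))
    sock.sendall(header + depth_bytes)

def recv_frame(sock: socket.socket):
    """Read exactly one framed depth image: (width, height, raw bytes)."""
    width, height, size = struct.unpack(">III", _recv_exact(sock, 12))
    return width, height, _recv_exact(sock, size)
```

Color frames and depth-color mapping tables would travel over the same stream with an analogous framing, which is what lets the projection host treat each remote sensor as a virtual local device.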
In order to realize the above concept, the invention designs an alternative implementation method of the automatic calibration desktop prop interaction system of multiple RGBD depth sensors, which comprises the following steps:
S1, connecting the projectors with the display card of the projection computer host through video transmission cables, projecting pictures to the desktop space in a hoisting mode, and fusing the projected pictures through third-party projection fusion software;
S2, connecting the plurality of RGBD depth sensors with the projection computer host and the plurality of node computer hosts through data cables, and installing them on angle-adjustable and height-adjustable hangers above the desktop corresponding to the positions of the projectors, so that the heights of the RGBD depth sensors are consistent and parallel to the long edges of the desktop;
S3, collecting depth data, color data, and depth-color coordinate mapping data of the RGBD depth sensors by the plurality of node computer hosts, and transmitting the corresponding data to the projection computer host in real time based on the TCP/IP protocol;
S4, acquiring the depth data and color data of the local RGBD depth sensor in real time by the interactive projection recognition software on the projection computer host, receiving the depth data, color data, and depth-color coordinate mapping data of the RGBD depth sensors of the node computer hosts in the local area network, and performing depth image stitching according to the arrangement positions of the RGBD depth sensors;
S5, with no prop placed on the desktop, collecting and storing the depth environment background images between the multiple RGBD depth sensors and the desktop by the interactive projection recognition software;
S6, projecting the multi-frame fringe images generated by the Gray code structured light algorithm to the corresponding areas of the desktop according to the positions of the RGBD depth sensors, and collecting each frame of fringe image data with the color camera of each RGBD depth sensor to obtain the coordinate mapping relation between the projection area and the color image;
S7, acquiring the coordinate mapping relation between the projection area and the depth image according to the coordinate mapping relation between the depth image and the color image in each RGBD depth sensor;
S8, acquiring the prop identification images in which props are separated from the desktop environment, according to the difference between the depth environment background images and the real-time depth images of the multiple RGBD depth sensors and the identification range threshold parameters;
S9, acquiring the prop center point coordinates, length, width, convexity defects and other geometric information from the prop identification image based on the morphological transformation, contour search, and convexity defect detection algorithms of the OpenCV vision library, and obtaining the prop height value by combining the depth information;
S10, comparing the length, width, height, convexity defects and other geometric information of the prop with the data of the prop information base to obtain the prop type;
S11, acquiring the corresponding prop projection coordinate position according to the prop center point coordinates in the depth image and the mapping relation between depth image coordinates and projection coordinates;
and S12, sending the prop type parameters, projection coordinates and other information to the interactive projection effect software through the OSC protocol, realizing accurate interactive projection effects based on the prop types.
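Step S12's message could be encoded as in the following sketch, which packs an OSC 1.0 message by hand, ready to be sent in a UDP datagram; the address pattern and the three-int32 argument layout are illustrative assumptions, not specified by the patent:

```python
import struct

def _pad4(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_prop_message(address: str, prop_type: int, x: int, y: int) -> bytes:
    """Encode an OSC 1.0 message carrying a prop's type parameter and its
    projection coordinates: address pattern, then ',iii' type tags, then
    three big-endian int32 arguments."""
    payload = _pad4(address.encode("ascii"))
    payload += _pad4(b",iii")                        # type tags: three int32s
    payload += struct.pack(">iii", prop_type, x, y)  # big-endian arguments
    return payload
```

A plain UDP `sendto` of this payload to the port on which the interactive projection effect software listens completes the step.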
The interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and performs depth image stitching according to the arrangement positions. The stitching method is as follows: according to the numbers of cameras arranged in the horizontal and vertical directions (N_w, N_h) and the image resolution (W, H), a stitched image whose horizontal and vertical resolutions are (N_w·W, N_h·H) is initialized; then, according to each image sensor's horizontal and vertical arrangement position (i, j), its image is set into the corresponding region of interest (ROI) of the stitched image, realizing the stitching of all the images, where 0 ≤ i < N_w and 0 ≤ j < N_h.
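The ROI-based stitching described above can be sketched as follows, assuming each sensor's depth frame is a NumPy array keyed by its (column, row) grid position (the function name and dictionary layout are assumptions for illustration):

```python
import numpy as np

def stitch_depth_images(frames, grid_cols, grid_rows, width, height):
    """Place each sensor's depth frame into its ROI of a grid-arranged canvas.

    frames: dict mapping (col, row) grid position -> (height, width) depth array.
    The stitched canvas has resolution (grid_cols*width, grid_rows*height).
    """
    canvas = np.zeros((grid_rows * height, grid_cols * width), dtype=np.uint16)
    for (col, row), frame in frames.items():
        assert frame.shape == (height, width)
        # The ROI of sensor (col, row) is a width x height window whose
        # top-left corner is at (col*width, row*height).
        canvas[row * height:(row + 1) * height,
               col * width:(col + 1) * width] = frame
    return canvas
```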
The depth environment background image is obtained as follows: with no prop placed on the desktop, each RGBD depth sensor collects multiple frames of depth image data, and a median-value depth environment background image is obtained by a per-coordinate depth median algorithm over the multi-frame depth data, thereby improving the stability of the depth environment background image.
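As an illustrative sketch of this median-background step, assuming the depth frames are NumPy arrays (the function name is an assumption):

```python
import numpy as np

def median_depth_background(depth_frames):
    """Per-pixel median over multiple depth frames captured with an empty
    desktop. Taking the median at each coordinate suppresses the transient
    dropouts and speckle noise of any single frame, stabilizing the
    depth environment background image."""
    stack = np.stack(list(depth_frames), axis=0)
    return np.median(stack, axis=0).astype(stack.dtype)
```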
The multi-frame fringe images generated by the Gray code structured light algorithm comprise multiple frames of stripe-coded images with bright and dark stripes alternating at equal widths, arranged in the horizontal and vertical directions. The width and height (W_g, H_g) of the Gray code structured light image corresponding to each RGBD depth sensor are:
W_g = W_p / N_w, H_g = H_p / N_h,
where W_p is the horizontal resolution of the projection picture, H_p is its vertical resolution, and (N_w, N_h) are the numbers of sensors arranged horizontally and vertically. The display position (x_g, y_g) of each sensor's multi-frame stripe-coded images on the projection picture is:
x_g = i·W_g, y_g = j·H_g,
where (i, j) is that sensor's horizontal and vertical arrangement position.
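An illustrative sketch of generating and decoding one direction of such stripe patterns, assuming Python with NumPy (the function names and the MSB-first frame order are assumptions; the transposed call would encode the other axis):

```python
import math
import numpy as np

def gray_code_patterns(width, height):
    """Generate vertical-stripe Gray code images for a projection region of
    the given resolution. Frame k is bright (255) wherever bit k (MSB first)
    of the Gray code of the column index is 1."""
    n_bits = max(1, math.ceil(math.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                 # binary -> Gray code
    patterns = []
    for k in range(n_bits - 1, -1, -1):       # MSB first
        row = ((gray >> k) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(row, (height, 1)))
    return patterns

def decode_gray(bits):
    """Recover the column index from a pixel's bright/dark sequence
    (MSB first), i.e. invert the Gray coding."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    x = 0
    while g:                                   # Gray -> binary
        x ^= g
        g >>= 1
    return x
```

Decoding the sequence observed at each color-camera pixel across the projected frames yields that pixel's projection-region coordinate, which is the coordinate mapping relation of step S6.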
the structured light image is refreshed frame by frame at a frequency of every second and displayed at the corresponding position of the projection picture, the color camera corresponding to the RGBD depth sensor collects each frame of structured light image and is based on Gray code structured light projection and an image coordinate mapping algorithm, as shown in formula (5),
wherein the content of the first and second substances,is the coordinates of the pixels of the color image,and projecting the image coordinates of the area for Gray code structured light. According to the coordinate alignment mapping relationship between the depth image and the color image of the RGBD depth sensor, as shown in equation (6),
the mapping relationship between the depth image coordinates and the Gray code structured light projection area image coordinates can be obtained, as shown in formula (7),
(x_g, y_g) = F_c→g(F_d→c(x_d, y_d)), (7)
where F_d→c maps depth image pixel coordinates (x_d, y_d) to the aligned color image pixel coordinates, and F_c→g maps color image pixel coordinates to projection area image coordinates.
Then, according to the above coordinate transformation and mapping relationships, the mapping relationship between the corresponding projection resolution coordinates and the RGBD depth sensor depth image coordinates can be obtained, as shown in formula (8),
(x_p, y_p) = F_i(x_d, y_d), (8)
where (x_d, y_d) are the depth image pixel coordinates of the RGBD depth sensor with index value i, and (x_p, y_p) are the projection resolution coordinates to which those image pixel coordinates map.
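Formulas (5)–(8) amount to composing two per-pixel lookup tables; a sketch assuming both mappings are stored as NumPy coordinate arrays (the array layout and function name are assumptions for illustration):

```python
import numpy as np

def compose_mappings(depth_to_color, color_to_proj):
    """Compose the per-pixel lookup tables of formulas (6) and (5) to get the
    depth-image -> projection-coordinate mapping of formula (8).

    depth_to_color: (H_d, W_d, 2) array giving, for each depth pixel, the
        aligned (x, y) color-image pixel (from the sensor's alignment data).
    color_to_proj: (H_c, W_c, 2) array giving, for each color pixel, the
        Gray-code-decoded projection coordinate.
    Returns an (H_d, W_d, 2) array of projection coordinates per depth pixel.
    """
    cx = depth_to_color[..., 0]
    cy = depth_to_color[..., 1]
    return color_to_proj[cy, cx]
```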
The prop identification image in which props are separated from the desktop environment is calculated as shown in formula (9),
I_ID(x, y) = D(x, y), if T_min ≤ B(x, y) − D(x, y) ≤ T_max; I_ID(x, y) = 0, otherwise, (9)
where I_ID is the prop identification image of the sensor with index value ID, D(x, y) is the real-time depth image data, B(x, y) is the depth environment background image data, and [T_min, T_max] is the value range of the difference between the depth environment background image data and the real-time depth image data, with 0 < T_min < T_max.
while the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An automatic calibration desktop prop interaction system of multiple RGBD depth sensors, characterized by comprising a plurality of RGBD depth sensors, a plurality of projectors, a computer host, a rectangular desktop, and props of different shapes, wherein the RGBD depth sensors and the projectors are connected with the USB interfaces and display card video output interfaces of the corresponding computer host through data cables; the RGBD depth sensors are mounted on angle-adjustable and height-adjustable hangers above the desktop, so that the heights of the RGBD depth sensors are kept consistent and parallel to the long edges of the desktop, and the display pictures of the projectors are subjected to projection fusion through third-party projection fusion software; the interactive position calibration of the plurality of RGBD depth sensors is realized by adopting a Gray-code-structured-light-based color image automatic calibration algorithm and the coordinate mapping relation between the depth image and the color image; after the depth environment background images between the plurality of RGBD depth sensors and the desktop are collected, the difference images between the real-time depth images and the depth environment background images are calculated to locate the coordinates of the props of different shapes on the desktop in the fused depth image and in projection desktop coordinates and to identify the length, width and height values of the props; the identification data of the differently shaped props on the desktop are compared with the data of the prop information base to determine the shape categories of all the props; and finally, the prop types and desktop coordinates are sent to interactive projection effect software through the OSC communication protocol, thereby enhancing the experiencer's interactive projection effect.
2. The system of claim 1, wherein the number of projectors and RGBD depth sensors is limited to a maximum number supported by a hardware configuration and an operating system of a computer host.
3. The automatic calibration desktop prop interaction system of the multiple RGBD depth sensors of claim 1, wherein a server model and a client model based on a TCP/IP communication protocol are adopted to virtualize the RGBD depth sensors connected to the node computer hosts in the LAN as local devices of a projection computer host, so as to implement a scheme for performing large-scene interaction recognition based on the multiple RGBD depth sensors.
4. The implementation method of the automatic calibration desktop prop interaction system of multiple RGBD depth sensors of claim 1, comprising the following steps:
S1, connecting the plurality of projectors with the display card output interfaces of the computer host through video transmission cables, projecting pictures to the desktop space in a hoisting mode, and fusing the projected pictures through third-party projection fusion software;
S2, connecting the plurality of RGBD depth sensors with the USB interfaces of the computer host through data cables, and installing them on angle-adjustable and height-adjustable hangers above the desktop corresponding to the positions of the projectors, so that the heights of the RGBD depth sensors are consistent and parallel to the long edges of the desktop;
S3, acquiring the depth data and color data of each RGBD depth sensor in real time by the interactive projection recognition software, and stitching the depth images according to the arrangement positions of the sensors;
S4, with no prop placed on the desktop, collecting and storing the depth environment background images between the multiple RGBD depth sensors and the desktop by the interactive projection recognition software;
S5, projecting the multi-frame fringe images generated by the Gray code structured light algorithm to the corresponding areas of the desktop according to the positions of the RGBD depth sensors, and collecting each frame of fringe image data with the color camera of each RGBD depth sensor to obtain the coordinate mapping relation between the fringe projection area and the color image;
S6, acquiring the coordinate mapping relation between the fringe projection area and the corresponding depth image according to the coordinate mapping relation between the depth image and the color image in each RGBD depth sensor;
S7, acquiring the prop identification images in which props are separated from the desktop environment, according to the difference between the depth environment background images and the real-time depth images of the multiple RGBD depth sensors and the identification range threshold parameters;
S8, acquiring the prop center point coordinates, length, width, convexity defects and other geometric information from the prop identification image based on the morphological transformation, contour search, and convexity defect detection algorithms of the OpenCV vision library, and obtaining the prop height value by combining the depth information;
S9, comparing the length, width, height, convexity defects and other geometric information of the prop with the data of the prop information base to obtain the prop type;
S10, acquiring the corresponding prop projection coordinate position according to the prop center point coordinates in the depth image and the mapping relation between depth image coordinates and projection coordinates;
and S11, sending the prop type parameters, projection coordinates and other information to the interactive projection effect software through the OSC protocol, realizing accurate interactive projection effects based on the prop types.
5. The method as claimed in claim 4, wherein the interactive projection recognition software collects the depth data and color data of each RGBD depth sensor in real time and performs depth image stitching according to the arrangement positions; the stitching method is as follows: according to the numbers of cameras arranged in the horizontal and vertical directions (N_w, N_h) and the image resolution (W, H), a stitched image whose horizontal and vertical resolutions are (N_w·W, N_h·H) is initialized; then, according to each image sensor's horizontal and vertical arrangement position (i, j), its image is set into the corresponding region of interest (ROI) of the stitched image, realizing the stitching of all the images, where 0 ≤ i < N_w and 0 ≤ j < N_h.
6. The method of claim 4, wherein the depth environment background image is obtained as follows: with no prop placed on the desktop, each RGBD depth sensor collects multiple frames of depth image data, and a median-value depth environment background image is obtained by a per-coordinate depth median algorithm over the multi-frame depth data, thereby improving the stability of the depth environment background image.
7. The method of claim 4, wherein the multi-frame fringe images generated by the Gray code structured light algorithm comprise multiple frames of stripe-coded images with bright and dark stripes alternating at equal widths, arranged in the horizontal and vertical directions, and the width and height (W_g, H_g) of the Gray code structured light image corresponding to each RGBD depth sensor are:
W_g = W_p / N_w, H_g = H_p / N_h,
where W_p and H_p are the horizontal and vertical resolutions of the projection picture and (N_w, N_h) are the numbers of sensors arranged horizontally and vertically.
8. The display position (x_g, y_g) of the multi-frame stripe-coded images on the projection picture is:
x_g = i·W_g, y_g = j·H_g,
where (i, j) is the sensor's arrangement position; the structured light images are refreshed frame by frame at a set frequency and displayed at the corresponding positions of the projection picture, and the color camera of the corresponding RGBD depth sensor collects each frame of structured light image based on the Gray code structured light projection and image coordinate mapping algorithm, as shown in formula (5).
9. According to the coordinate alignment mapping relationship between the depth image and the color image of the RGBD depth sensor, as shown in formula (6), the mapping relationship between the depth image coordinates and the Gray code structured light projection area image coordinates can be obtained, as shown in formula (7); then, according to the above coordinate transformation and mapping relationships, the mapping relationship between the corresponding projection resolution coordinates and the RGBD depth sensor depth image coordinates can be obtained, as shown in formula (8).
10. The method for implementing the automatic calibration desktop prop interaction system of multiple RGBD depth sensors of claim 4, wherein the prop identification image in which props are separated from the desktop environment is calculated as shown in formula (9),
I_ID(x, y) = D(x, y), if T_min ≤ B(x, y) − D(x, y) ≤ T_max; I_ID(x, y) = 0, otherwise, (9)
where I_ID is the prop identification image of the sensor with index value ID, D(x, y) is the real-time depth image data, B(x, y) is the depth environment background image data, and [T_min, T_max] is the value range of the difference between the depth environment background image data and the real-time depth image data, with 0 < T_min < T_max.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011254165.3A CN112433641B (en) | 2020-11-11 | 2020-11-11 | Implementation method for automatic calibration of desktop prop interaction system of multiple RGBD depth sensors |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112433641A true CN112433641A (en) | 2021-03-02 |
CN112433641B CN112433641B (en) | 2022-06-17 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113409259A (en) * | 2021-06-09 | 2021-09-17 | 电子科技大学 | Image characteristic information-based precision workpiece stage inclination angle detection method |
CN114760450A (en) * | 2022-04-12 | 2022-07-15 | 昆明云岸数字科技有限公司 | Multi-element desktop projection interaction implementation method and device |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000350230A (en) * | 1999-06-07 | 2000-12-15 | Olympus Optical Co Ltd | Image projection system |
JP2008061160A (en) * | 2006-09-04 | 2008-03-13 | Seiko Epson Corp | Multiprojection system |
CN103455141A (en) * | 2013-08-15 | 2013-12-18 | 无锡触角科技有限公司 | Interactive projection system and correction method of depth sensor and projector of interactive projection system |
US20140177909A1 (en) * | 2012-12-24 | 2014-06-26 | Industrial Technology Research Institute | Three-dimensional interactive device and operation method thereof |
CN107272910A (en) * | 2017-07-24 | 2017-10-20 | 武汉秀宝软件有限公司 | A kind of projection interactive method and system based on rock-climbing project |
CN107517366A (en) * | 2017-08-23 | 2017-12-26 | 上海喵呜信息科技有限公司 | Projector's image information method for automatic measurement based on RGBD |
CN107610236A (en) * | 2017-08-21 | 2018-01-19 | 武汉秀宝软件有限公司 | A kind of exchange method and system based on figure identification |
WO2019156731A1 (en) * | 2018-02-09 | 2019-08-15 | Bayerische Motoren Werke Aktiengesellschaft | Methods for object detection in a scene represented by depth data and image data |
CN110880161A (en) * | 2019-11-21 | 2020-03-13 | 大庆思特传媒科技有限公司 | Depth image splicing and fusing method and system for multi-host multi-depth camera |
CN110942092A (en) * | 2019-11-21 | 2020-03-31 | 大庆思特传媒科技有限公司 | Graphic image recognition method and recognition system |
Also Published As
Publication number | Publication date |
---|---|
CN112433641B (en) | 2022-06-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||