WO2017054115A1 - Projection method and system with augmented reality effect - Google Patents


Info

Publication number: WO2017054115A1
Authority: WIPO (PCT)
Prior art keywords: projection, augmented reality, image, viewer, reality effect
Application number: PCT/CN2015/090974
Other languages: French (fr), Chinese (zh)
Inventors: 那庆林, 黄彦, 麦浩晃
Original Assignee: 神画科技(深圳)有限公司
Application filed by 神画科技(深圳)有限公司
Priority application: PCT/CN2015/090974
Publication: WO2017054115A1

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor

Definitions

  • the present invention relates to projection systems, and more particularly to a projection method and system having an augmented reality effect.
  • in a conventional projector, the projection imaging surface is a smooth plane or an arc surface; that is, the projection display area of the projector is usually a plane or an arc, and contains no intersecting three-dimensional surfaces, let alone a three-dimensional object or space.
  • the present invention solves the problem that a conventional projection system can only project a virtual image onto a substantially planar projection area and cannot map a fully virtual image onto intersecting surfaces.
  • the present invention provides a projection method having an augmented reality effect, in which two or more faces onto which a projected image can fall are present in the projection display area, and at least two of the faces are intersecting faces whose intersection angle relative to the viewer is less than 180 degrees; the method includes the following steps: S1, presetting a projection application screen corresponding to each face; S2, processing the preset projection application screen to form a picture that is pattern-corrected for the respective faces, projecting it through the projection lens unit, and forming, in the projection display area, a projected image with the expected augmented reality effect matched to the respective faces.
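Steps S1 and S2 can be sketched as follows. This is an illustrative sketch only: the face labels, the `correct_for_face` placeholder, and the composite-frame step are hypothetical stand-ins for the patent's pattern-correction and projection pipeline, which the text does not specify in code.

```python
# S1: preset a projection application picture corresponding to each face.
# Face names and graphics below are made-up examples, not from the patent.
preset_screens = {
    "A": "button_graphic",       # e.g. the top face of a cuboid
    "B": "wall_graphic",         # e.g. the front face
    "C+D": "staircase_graphic",  # an overall image spanning two intersecting faces
}

def correct_for_face(picture, face):
    """Placeholder for per-face pattern correction (e.g. a homography warp)."""
    return f"corrected({picture})@{face}"

# S2: process each preset picture into its pattern-corrected form, then hand
# the resulting composite frame to the projection lens unit.
def build_projector_frame(screens):
    return {face: correct_for_face(pic, face) for face, pic in screens.items()}

frame = build_projector_frame(preset_screens)
```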
  • the projection application screen includes at least one overall image to be mapped onto at least two intersecting surfaces; the overall image either displays a part of the picture on each surface simultaneously, or is mapped continuously across the surfaces so that parts of a continuous action are displayed on each face in turn.
  • the projection method of the present invention further includes the step of presetting an optimal viewing angle, at which the viewer sees the best effect; when the preset projection application screen is processed in step S2, pattern correction is performed based on the optimal viewing angle, the respective faces, and the relationship between them, and the corrected picture is then projected by the projection lens unit.
  • the projection method of the present invention further includes a spatial recognition step: three-dimensional spatial information is collected for the projection space in which the projection display area is located, to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and the three-dimensional spatial information is reconstructed; when the preset projection application screen is processed in step S2, pattern correction is performed based on the optimal viewing angle, the three-dimensional spatial information, and the relationship between them, the corrected picture is then projected by the projection lens unit, and a projected image with an augmented reality effect matched to the at least two intersecting faces and/or the at least one solid object is formed in the projection display area.
  • the spatial recognition step further includes recognizing the incident angles of the projected rays on the two intersecting faces and/or the at least one solid object, and reconstructing the three-dimensional spatial information; the image is corrected based on the incident angles of the projected rays and the angles of the two intersecting surfaces and/or the at least one solid object, and is then projected by the projection lens unit, forming, in the projection display area, a projected image with an augmented reality effect matched to the at least two intersecting faces and/or the at least one solid object.
  • a projection application screen matching the at least two intersecting faces and/or the at least one solid object is selected, automatically or manually, from the preset projection application screens.
  • the faces include a base face onto which the projected image is mapped, and one or more additional faces; each additional face belongs to one of the following solid objects, or a combination of one or more of them: a cube, a cuboid, a pyramid, a cone, a sphere, a cylinder, or a frustum.
  • the projection method of the present invention further includes the step of adjusting the viewer's current viewing angle, manually or automatically; when the preset projection application screen is processed in step S2 to form the pattern-corrected picture, processing is performed based on the viewer's current viewing angle information.
  • when the viewer's current viewing angle is adjusted automatically, the method further includes the step of recognizing the viewer's current viewing angle; when the preset projection application screen is processed in step S2 to form the pattern-corrected picture, processing is also performed based on the recognized current viewing angle information.
  • the step of automatically recognizing the viewer's current viewing angle uses a sensor to locate the viewer, for example by having the viewer wear glasses with a position sensor, thereby locating the approximate spatial position of the viewer's eyes; the preset projection application screen in step S2 is then processed according to the spatial position of the viewer's eyes to form the pattern-corrected picture.
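Given the eye position located by the sensor glasses, the viewer's current viewing angle relative to a face can be computed as the angle between the line of sight and the face normal. The helper below is a hypothetical illustration (the names and conventions are ours, not the patent's), assuming the face normal points outward toward the room:

```python
import math

def viewing_angle_deg(eye, face_point, outward_normal):
    """Angle between the viewer's line of sight to a point on a face and the
    face's outward normal; 0 means the viewer faces the surface head-on."""
    view = tuple(p - e for e, p in zip(eye, face_point))  # eye -> face
    dot = sum(v * n for v, n in zip(view, outward_normal))
    nv = math.sqrt(sum(v * v for v in view))
    nn = math.sqrt(sum(n * n for n in outward_normal))
    cos_a = -dot / (nv * nn)  # flip sign: the view vector runs into the face
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
```

An eye directly above the face center gives 0°; moving the eye sideways by the same distance as its height gives 45°.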
  • the present invention also provides a projection system having an augmented reality effect, comprising a projection lens unit and an image processing unit; the projection display area formed by the projection lens unit has two or more faces onto which the projected image is mapped, at least two of which are intersecting faces with an intersection angle of less than 180 degrees relative to the viewer; the system further includes a picture storage unit for storing preset projection application screens corresponding to the respective faces; the image processing unit processes the projection application screen to form a picture that is pattern-corrected for the respective faces, the picture is then projected by the projection lens unit, and a projected image with the expected augmented reality effect matched to each face is formed in the projection display area.
  • the projection system of the present invention further includes a spatial recognition unit configured to collect three-dimensional spatial information about the projection space in which the projection display area is located, to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and to reconstruct the three-dimensional spatial information; the image processing unit processes the selected projection application screen based on the three-dimensional spatial information to form a picture that is pattern-corrected for the at least two intersecting surfaces and/or the at least one solid object, the picture is then projected by the projection lens unit, and a projected image with an augmented reality effect matched to the at least two intersecting surfaces and/or the at least one solid object is formed in the projection display area.
  • the space recognition unit includes a monitoring module and an image analysis and recognition module; the monitoring module collects image signals from the projection space, and the image analysis and recognition module analyzes the monitoring image signals to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and reconstructs the three-dimensional spatial information.
  • the monitoring module is a visible light based monitoring module or an infrared light based monitoring module;
  • the visible light based monitoring module comprises a single monitoring lens or a dual monitoring lens;
  • the infrared light based monitoring module comprises a single monitoring lens, a dual monitoring lens, a TOF sensor, or a structured light sensor.
  • the projection system of the present invention further includes a viewing angle recognition unit for recognizing the viewer's current viewing angle; when processing the projection application screen to form the pattern-corrected picture, the image processing unit also processes it based on the viewer's current viewing angle recognized by the viewing angle recognition unit.
  • the projection system of the present invention further includes glasses with a position sensor for the viewer to wear, used to locate the spatial position of the viewer's eyes; when the image processing unit processes the projection application screen to form the pattern-corrected picture, processing is also performed based on the spatial position of the viewer's eyes.
  • the projection system of the present invention further includes an information processing unit and an interactive manipulation unit capable of interactive manipulation based on the projected image; the interactive manipulation unit works in cooperation with the information processing unit to implement interactive control based on the projected image.
  • the interactive manipulation unit includes a remote controller based on wireless communication, and the remote controller includes a combination of one or more of a mobile phone, a tablet computer, a game controller, or an air mouse; the remote controller uses 2.4G, Bluetooth, or WIFI communication mode to transmit interactive control signals.
  • the interactive manipulation unit includes an infrared-based external remote controller and an infrared monitoring lens; the external remote controller can form an infrared spot in the projection display area, the infrared monitoring lens captures the infrared spot, the image processing unit displays a corresponding visible icon according to the position information of the infrared spot, and the visible icon is controlled through the external remote controller to implement the interactive manipulation function.
  • the interactive manipulation unit includes a direct-touch interactive manipulation unit operating in one of the following modes: dual monitoring lens, TOF sensor, or structured light sensor.
  • the interactive manipulation unit may share some or all of the functions of the monitoring module with the space recognition unit.
  • with the present invention, an actual object in the projection display area can be combined with a preset picture in the projector: image content is generated using "augmented reality" (AR) techniques, mapped onto the object, and combined with projection interaction.
  • FIGS. 1A, 1B, and 1C are schematic diagrams of a projection system and its scene in preferred embodiment 1 of the present invention;
  • FIGS. 2A, 2B, and 2C are schematic diagrams of a projection system and its scene in preferred embodiment 2 of the present invention;
  • FIGS. 3A and 3B are schematic diagrams of a projection system and its scene in preferred embodiment 3 of the present invention;
  • FIG. 4A is a schematic view of the intersecting surfaces and intersecting angles when a rectangular parallelepiped is placed;
  • FIG. 4B is a schematic view of the intersecting surfaces and intersecting angles when a cylinder is placed;
  • FIG. 4C is a schematic view of the intersecting surfaces and intersecting angles when a hemisphere is placed;
  • FIG. 5A is a schematic view of a window when two intersecting walls serve as the projection area;
  • FIG. 5B is a schematic view of a triangular block when two intersecting walls serve as the projection area;
  • FIG. 5C is a schematic view of three intersecting wall surfaces at a corner serving as the projection area;
  • FIG. 6A is a schematic view of the viewing effect when the viewer is at the preset optimal viewing angle;
  • FIG. 6B is a schematic view of the viewing effect when the viewer is not at the preset optimal viewing angle;
  • FIG. 6C is a schematic view of the viewing effect after adjusting the viewer's angle on the basis of FIG. 6B;
  • FIG. 7 is a schematic block diagram of a projection system in accordance with one embodiment of the present invention.
  • FIG. 7 is a schematic block diagram of a projection system according to a preferred embodiment of the present invention.
  • the projection system includes a projection lens unit, an image processing unit, and a picture storage unit.
  • the picture storage unit is configured to store the preset projection application screen.
  • the projection display area formed by the projection lens unit has two or more faces on which the projection image is mapped, wherein at least two faces are intersecting faces of less than 180 degrees with respect to the viewer;
  • the image processing unit processes the projection application screen to form a picture that is pattern-corrected for each surface; the corrected picture is then projected by the projection lens unit, and a projected image with the expected augmented reality effect matched to each face is formed in the projection display area.
  • the projection system further includes a spatial recognition unit for collecting three-dimensional spatial information about the projection space defined by the projection imaging unit and its projection display area, to automatically identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space.
  • the image processing unit processes the projection application screen based on the three-dimensional spatial information output by the spatial recognition unit to form a picture that is pattern-corrected for the at least two intersecting surfaces and/or the at least one solid object; the picture is then projected by the projection lens unit, and a projected image matched to the at least two intersecting surfaces and/or the at least one solid object is formed in the projection display area.
  • the space recognition unit includes a monitoring module and an image analysis and recognition module; the monitoring module performs image signal collection on the projection space; and the image analysis and recognition module analyzes and processes the monitoring image signal of the monitoring module. To identify whether there are at least two intersecting faces and/or at least one solid object in the projection space, and perform three-dimensional spatial information reconstruction.
  • the monitoring module may be a visible light based monitoring module or an infrared light based monitoring module; the visible light based monitoring module may be a single monitoring lens or a dual monitoring lens; the infrared light based monitoring module may be a single monitoring lens, dual monitoring Lens, TOF sensor, or structured light sensor.
  • the projection system further includes an information processing unit and an interactive manipulation unit that can perform interactive manipulation based on the projection image, and the interactive manipulation unit cooperates with the information processing unit to implement an interactive manipulation based on the projected image.
  • the interactive control unit can share some or all of the functions of its monitoring module with the space recognition unit.
  • the interactive control unit and the space recognition unit share the monitoring lenses in the monitoring module, or share at least one of the monitoring lenses.
  • as shown in FIGS. 1A to 1C, the projector 100 is located at the upper end, and the projection lens unit 102 can project a preset projection application screen into the projection display area 118. A rectangular parallelepiped 108 is placed in the projection display area 118; from the perspective shown in the figure, the human eye can see the upper surface (A face) 110, the right side surface (C face) 114, and the front surface (B face) 112 of the rectangular parallelepiped 108.
  • the other area than the rectangular parallelepiped 108 in the projection display area 118 is a normal plane (D plane) 116.
  • the projector 100 is provided with a picture storage unit for storing preset projection application screens; after the projection application screen is processed by the image processing unit in the projector 100, a picture pattern-corrected for the rectangular parallelepiped 108 is formed, which is then projected by the projection lens unit 102, and a projected image with the desired augmented reality effect matched to the cuboid 108 is formed within the projection display area 118.
  • a button graphic, a wall graphic, and a staircase graphic are projected on the rectangular parallelepiped 108.
  • the first button 122 is projected on the A surface 110
  • the wall pattern is projected on the B surface 112
  • the staircase pattern 120 is projected between the D surface and the C surface.
  • the B-face 112 is changed to a virtual wall surface, and a virtual button is generated on the A-side 110.
  • as another example of a picture with an expected augmented reality effect, the staircase pattern 120 in FIG. 1B needs to be projected between the D face and the C face; that is, the staircase pattern 120 is an overall image mapped simultaneously onto the two intersecting faces, the C face and the D face, with a part of the picture displayed on each.
  • the graphics in the graphics library must be stretched (or partially stretched) and pattern-corrected, in advance or in real time, to form a staircase with an augmented reality effect and to generate a second button 124 next to the staircase. If there were no such rectangular object in the projection area, the projected image would not need to be stretched or corrected as described above, and the projected image would look completely different.
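One standard way to realize the per-face stretching and pattern correction described above is a planar homography, estimated from four point correspondences between the flat source graphic and the face as seen by the projector. The pure-Python sketch below is illustrative only; the patent does not specify its correction algorithm, and the sample points are invented.

```python
def _solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography (with h33 fixed to 1) mapping four src points to dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return _solve(A, b)  # [h11, h12, h13, h21, h22, h23, h31, h32]

def warp_point(h, pt):
    """Apply the homography to a single source-image point."""
    x, y = pt
    w = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# Map the unit square of a source graphic onto a quadrilateral face
H = homography(src=[(0, 0), (1, 0), (1, 1), (0, 1)],
               dst=[(0, 0), (2, 0), (2.5, 1.5), (0.2, 1.0)])
```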
  • the preset projection application screen that has been subjected to the graphics correction may be projected to the rectangular parallelepiped 108 at the designated location.
  • the position of the rectangular parallelepiped 108 can also be changed; its position is then detected, the corresponding projection application screen is selected and processed by the image processing unit to form a picture pattern-corrected for the rectangular parallelepiped 108, and the picture is projected by the projection lens unit.
  • the invention further includes an information processing unit and an interactive manipulation unit that can perform interactive manipulation based on the projection image, and the interactive manipulation unit cooperates with the information processing unit to realize interactive manipulation based on the projection image.
  • in this embodiment, the interactive manipulation unit is a direct-touch interactive control unit, specifically in dual monitoring lens mode: two monitoring lenses 104 and 106 are disposed on both sides of the projection lens unit 102, and an infrared light source is integrated in the projector 100 that emits infrared light covering the entire projection display area 118.
  • the infrared light source can also be mounted outside the projection lens to directly emit infrared light and cover the entire projection display area 118.
  • when a human finger enters the projection scene to touch the projected image, the two infrared monitoring lenses simultaneously transmit the captured images to the interactive algorithm module to calculate the spatial position of the finger, thereby realizing stereoscopic touch operation.
  • the two infrared monitoring lenses first need to be calibrated and rectified, and the image disparity map is acquired.
  • spatial reconstruction is realized by image tracking, image segmentation, and image recognition, and the three-dimensional spatial position of the finger is then calculated by the algorithm, even over a three-dimensional scene with high and low undulations.
  • the two monitoring lenses 104 and 106 can monitor the spatial position information of the finger, and the currently performed touch operation can further be determined from the finger's motion track and spatial coordinates; for example, clicks and slides are realized at different heights in space.
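With two calibrated, rectified monitoring lenses, the finger's spatial position follows from its disparity between the two images. Below is a minimal rectified-stereo triangulation sketch; the pinhole model, the focal length, and the baseline values are our illustrative assumptions, as the patent gives no camera parameters:

```python
def finger_position(xl, yl, xr, focal_px, baseline_m):
    """Triangulate a fingertip seen at pixel (xl, yl) in the left rectified
    image and at column xr in the right one (pinhole model, principal point
    at the origin). Returns (x, y, z) in metres in the left-camera frame."""
    disparity = xl - xr  # pixels; positive for a point in front of the rig
    if disparity <= 0:
        raise ValueError("zero/negative disparity: bad match or point at infinity")
    z = focal_px * baseline_m / disparity  # depth from similar triangles
    x = xl * z / focal_px                  # back-project the pixel to 3-D
    y = yl * z / focal_px
    return (x, y, z)
```

For example, with a 1000 px focal length and a 10 cm baseline, a 50 px disparity places the fingertip 2 m from the lenses.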
  • the finger can click on top of the game character to activate or pause it, and the sliding direction of the finger sets the character's direction of movement; when the game character is about to climb onto the A face 110 along the virtual ladder and the finger clicks the first button 122, the map graphic on the A face 110 can be replaced with a swimming pool graphic, and the character enters the virtual swimming pool and swims, as shown in FIG. 1C.
  • if the finger clicks the second button 124, the swimming pool on the A face disappears and is replaced with the original graphic, and the finger can be used to control the character to move elsewhere.
  • the game character also has the expected augmented reality effect.
  • in the process of climbing from the D face onto the A face along the virtual ladder, the game character is an overall image that is mapped first on the D face, then on the virtual ladder, and finally on the A face; that is, it is mapped continuously onto each surface, and parts of the continuous motion are displayed on each surface in turn.
  • the direct touch interactive control unit in this embodiment may also be a TOF (Time of Flight) sensor mode or a structured light sensor mode.
  • the basic principle of the TOF sensor is to emit a light pulse, receive the light returned from the object, and obtain the target distance by measuring the round-trip flight time of the pulse.
  • the TOF sensor is similar to the general machine vision imaging process. It consists of several units, such as a light source, an optical component, a sensor, a control circuit, and a processing circuit.
  • the target distance measured by the TOF sensor is thus obtained from detection of the emitted and reflected light.
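The round-trip principle reduces to one line: distance is the speed of light times the measured flight time, halved. A sketch of the idealized pulse model (real TOF chips typically measure phase shifts of modulated light rather than a single pulse):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s):
    """Target distance from a measured round-trip pulse time: the light
    travels out and back, so the one-way distance is half the path."""
    return C * round_trip_s / 2.0
```

For example, a 10 ns round trip corresponds to roughly 1.5 m of range.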
  • the structured light sensor works by encoding the measurement space with continuous (near-infrared) light, reading the encoded light back through the sensor, and decoding it with on-chip computation to produce a depth image.
  • the light source projects not a periodic two-dimensional coded image but a "volume coding" with three-dimensional depth.
  • this kind of light source is called laser speckle: random diffraction spots formed when a laser strikes a rough object or passes through frosted glass. These speckles are highly random and change pattern with distance; any two patterns in the space are different, which effectively marks the entire space, so when any object enters the space and moves, its position can be recorded exactly.
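Because every region of the speckle pattern is unique, depth recovery reduces to finding where an observed window best matches the stored reference pattern; the matched offset then encodes distance. A toy one-dimensional version of that matching step, illustrative only and far simpler than a real sensor pipeline:

```python
def best_match_shift(reference, observed):
    """Find the offset at which the observed speckle window best matches a
    stored reference pattern, using sum-of-squared-differences scoring."""
    n = len(observed)
    best_shift, best_score = 0, float("inf")
    for shift in range(len(reference) - n + 1):
        score = sum((reference[shift + i] - observed[i]) ** 2 for i in range(n))
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift
```

Because speckle values are effectively random, the correct offset is the unique zero-error match; its displacement from the calibrated position maps to depth.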
  • a space recognition unit may be further provided for performing three-dimensional spatial information collection on the projection space where the projection display area is located, to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space. And reconstructing the three-dimensional spatial information; that is, even if the rectangular parallelepiped shown in FIG. 1A is randomly placed, the subsequent steps can be automatically identified and completed.
  • the image processing unit processes the selected projection application screen based on the three-dimensional spatial information to form a picture pattern-corrected for each surface of the rectangular parallelepiped; the picture is then projected by the projection lens unit, and a projected image with an augmented reality effect matched to the cuboid is formed in the projection display area.
  • the space recognition unit includes a monitoring module and an image analysis and recognition module; the monitoring module performs image signal collection on the projection space; and the image analysis and recognition module analyzes and processes the monitoring image signal of the monitoring module to identify whether at least the projection space has at least Two intersecting faces and/or at least one solid object and performing three-dimensional spatial information reconstruction.
  • the interactive control unit in the first embodiment can also be used as part of the spatial recognition unit.
  • the monitoring module may be a visible light based monitoring module or an infrared light based monitoring module; the visible light based monitoring module may be a single monitoring lens or a dual monitoring lens; the infrared light based monitoring module may be a single monitoring lens, a dual monitoring lens, TOF sensor, or structured light sensor.
  • the second embodiment of the present invention is shown in FIGS. 2A, 2B, and 2C, in which the projector 200 is located at the upper end, and the projection lens unit 202 can project a preset projection application screen into the projection display area 218.
  • the area of the projection display area 218 other than the tapered object is a conventional plane (D face) 216.
  • the tapered object 208 can be mapped into a pyramid: the rendering texture is first matched from the graphics library, and a reasonable stretch (or partial stretch) and pattern correction are then calculated according to the position and angle of the tapered object, the height of the projector, and so on, to generate a solid pyramid with an augmented reality effect and to create a virtual pedestal 240 below the A and B faces of the pyramid.
  • an interactive control based on a wireless communication remote control is used, specifically an external remote controller 230.
  • the remote controller can be a conventional game controller with front, rear, left, right and other moving buttons and various action buttons.
  • the commands of the remote control can be transmitted to the projector via 2.4G, Bluetooth, or WIFI communication.
  • the remote controller 230 shown in the figure is a dual-mode controller, and the player can select single or dual mode; the player can control the movement of the game character in the projection screen, and the character can establish a protection circle 242 around the pyramid, climb the pyramid, or graffiti on it.
  • the third embodiment of the present invention is shown in FIGS. 3A and 3B, in which the projector 300 is located at the upper end, and the projection lens unit 302 can project a preset projection application screen into the projection display area 318.
  • Other areas than the cube 308 in the projection display area 318 are the regular plane (D plane) 316.
  • the A face of the cube 308 is mapped into a city wall top, and the C face 314 is mapped into a city gate 372.
  • the two sides of the city gate are rendered with augmented reality effects.
  • players are free to choose the besieging side 376 or the defending side 374.
  • the player can use various resources to defend the city, such as bows and arrows, rolling logs, meteorites, and oil pans; when the besieging force weakens, the defenders can open the gate and rush out;
  • to heighten the excitement of the game, the surroundings of the city walls can also be augmented into protective structures such as moats and gate drawbridges. This game still requires pre-correcting the graphics for objects placed in the projection area so that the mapped image can generate the background and plot needed in the game.
  • the infrared light-based external remote controller 360 is used for the touch operation.
  • the remote controller emits infrared light of a specific wavelength toward the projection screen; the monitoring lens of the projector recognizes it, generates an indication icon 362 at the corresponding position of the projection display area, and performs front, rear, left, and right movement and other corresponding interactive operations according to the remote controller instructions.
  • for the infrared remote control, refer to the Chinese invention patent applications published as CN104834394A and CN104020897A.
  • the interactive manipulation methods in the foregoing two patent applications can be used in this embodiment.
  • an interactive display system which comprises an infrared light source and a monitoring device;
  • the monitoring device comprises an interactive module and an interactive control unit;
  • the interactive module comprises a monitoring lens, a beam splitting component, a visible light monitoring component and an infrared light monitoring component;
  • the visible light projection image captured by the monitoring lens is imaged onto the visible light monitoring component to form a first imaging image;
  • the infrared light spot captured by the monitoring lens is imaged onto the infrared light monitoring component to form a second imaging image;
  • the control unit is connected with the visible light monitoring component and the infrared light monitoring component, and outputs the position information of the infrared spot to the display unit; the display unit displays a corresponding visible icon according to the position information, and the icon can be clicked, dragged, zoomed in and out, and so on, realizing human-computer interaction.
  • the monitoring lens 304 can be used as a spatial recognition unit, and can detect
  • an interactive display system comprising a main control unit for information processing, a display unit for generating a display screen according to the received display information, one or more remote control units, and a monitoring module.
  • the main control unit is in communication with the display unit and the monitoring module.
  • Each remote control unit includes an infrared emitting module configured to emit infrared light of two different wavelengths toward the display screen to generate an infrared spot.
  • the monitoring module includes a monitoring lens for capturing the infrared spot in the display screen, a first beam splitting element, and a first infrared monitoring component and a second infrared monitoring component for respectively receiving the first infrared light and the second infrared light of different wavelengths.
  • a square object 410 is placed on the projection table.
  • four sets of pairwise intersecting planes can be seen: the front surface 412 intersects the table top 408, the right side 413 intersects the table top 408, the upper surface 411 intersects the front surface 412, and the upper surface 411 intersects the right side 413.
  • the best visual intersecting plane refers to the intersecting plane with an included angle of less than 180°. In Fig. 4A, the angle 414 and the angle 415 are less than 180°, which are the best visual intersecting planes.
  • pattern correction processing or 3D image mapping can be performed on them; angle 416 and angle 417 are greater than 180° (up to 270°), so they are not best visual intersecting planes and cannot be subjected to pattern correction processing or 3D image mapping.
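The "best visual intersecting plane" rule above can be checked numerically from the face geometry. In this hedged sketch (not the patent's method; the normals and points are illustrative), the included angle at a shared edge is derived from the outward normals of the two faces, plus one extra point that distinguishes a concave edge from a convex one:

```python
import math

def included_angle_deg(n1, n2, edge_point, q2):
    """Viewer-side included angle between two faces meeting along an edge.

    n1, n2:     outward unit normals of face 1 and face 2
    edge_point: any point on the shared edge
    q2:         a point on face 2 that is not on the edge
    Angles below 180 deg are concave edges (best visual intersecting planes,
    suitable for pattern correction / 3D mapping); above 180 deg, convex edges.
    """
    dot = sum(a * b for a, b in zip(n1, n2))
    between = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    # Which side of face 1's plane does face 2 lie on? Positive side -> concave.
    side = sum(a * (b - c) for a, b, c in zip(n1, q2, edge_point))
    return 180.0 - between if side > 0 else 180.0 + between

# Table top (normal +z) meeting the box's front face (normal -y) at the bottom
# edge: concave, like angle 414. Top face meeting the front face: convex, like 416.
print(round(included_angle_deg((0, 0, 1), (0, -1, 0), (0, 0, 0), (0, 0, 0.5))))  # 90
print(round(included_angle_deg((0, 0, 1), (0, -1, 0), (0, 0, 1), (0, 0, 0.5))))  # 270
```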
  • FIG. 4B shows a cylindrical object 420 placed on the projection table, producing two sets of pairwise intersecting surfaces: the cylindrical surface 422 (the front half of the cylinder, visible to the human eye) intersects the table top 408, and the upper surface 421 intersects the cylindrical surface 422. Angle 425 is less than 180° and is a best visual intersecting surface; angle 426 is greater than 180° and is not.
  • FIG. 4C shows a hemispherical object 430 placed on the projection table, with only one set of intersecting surfaces: the spherical surface 431 intersects the table top 408. Angle 432 in the figure is less than 180°, so this is a best visual intersecting curved surface and can be subjected to graphic correction processing or 3D image mapping.
  • Fig. 5A shows a wall projection embodiment; intersecting wall surfaces are a common spatial environment in daily life.
  • the projector projects an image onto the intersecting walls, forming a projected image 510 on the left and right walls 506 and 508.
  • the spatial three-dimensional position information of the two wall surfaces 506 and 508 relative to the projector can be calculated; the image processing unit performs graphic correction on the image information for walls 506 and 508 and projects a half-fan window onto each of the two faces, forming a half-open fan window 512, as shown in Figure 5A. Alternatively, an augmented reality image such as a stereoscopic image, for example the virtual triangular block 516 of Fig. 5B, can be projected across the corner between the two walls.
  • Figure 5C is a corner projection embodiment in which the projector projects an image into a corner of the room so that the image falls on the top surface 504 and the left and right wall surfaces 506, 508.
  • without graphic correction processing, the projection is drawn as a broken line across the image on the three walls, and the image appears as an irregular polygon.
  • the spatial three-dimensional position information of the three faces can be calculated, and the image processing unit performs graphic correction on the image information for the three faces; augmented reality projection mapping is then performed at the corner (i.e., the intersection of the three faces A, B, and C) to form a virtual bird's nest image 518.
  • the projection system of the invention has a preset optimal viewing angle, and the viewer must be at this angle to see the best effect; when the preset projection application screen is processed, graphic correction is performed based on the optimal viewing angle, the individual surfaces, and the interrelationships between them, and the result is then mapped by the projection lens unit.
  • the current viewing angle of the viewer can also be adjusted as needed, either manually or automatically.
  • the method further includes the step of identifying the viewer's current viewing angle; specifically, a sensor is used to locate the viewer, for example by having the viewer wear glasses with a position sensor that locates the approximate spatial position of the viewer's eyes.
  • the preset projection application screen is processed according to the spatial position of the viewer's eyes to form a graphically corrected screen.
  • a rectangular parallelepiped 604 is placed on the desktop 602.
  • the projected image of the projector 600 is a rectangle 606.
  • after the image processing unit performs graphic correction processing on the preset projection image, the projection image seen by the viewer is still rectangular.
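One way to picture the viewing-angle handling in this embodiment: since the graphic correction is computed for a preset optimal viewing position, a system could compare the viewer's sensed eye position (e.g., from position-sensor glasses) against that preset position and keep or redo the correction accordingly. A minimal sketch under invented coordinates and an invented tolerance (the patent specifies neither):

```python
import math

MAX_DEV_DEG = 5.0   # hypothetical tolerance before re-correction is triggered

def viewing_deviation_deg(eye_pos, optimal_pos, target):
    """Angle between the viewer's actual line of sight to the projection target
    and the preset optimal line of sight (all arguments are 3D points)."""
    def unit(a, b):
        v = [b[i] - a[i] for i in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    d1, d2 = unit(eye_pos, target), unit(optimal_pos, target)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(d1, d2))))
    return math.degrees(math.acos(dot))

# Hypothetical scene: target at the origin, optimal seat 2 m back along -y,
# viewer shifted 10 cm sideways -> only a small angular deviation.
dev = viewing_deviation_deg((0.1, -2.0, 0.0), (0.0, -2.0, 0.0), (0.0, 0.0, 0.0))
print(dev < MAX_DEV_DEG)   # True: close enough, the preset correction still holds
```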

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Projection Apparatus (AREA)

Abstract

Disclosed are a projection method and system with an augmented reality effect. In the projection display region formed by the projection lens unit there are two or more surfaces onto which projected images can fall, of which at least two are intersecting surfaces whose intersection angle relative to the viewer is less than 180 degrees. After processing by an image processing unit, a preset projection application picture is graphically rectified for each of the surfaces and then mapped by the projection lens unit, forming in the projection display region a projected image that matches each of the surfaces and has the expected augmented reality effect. The system also comprises an interaction operation unit capable of performing interactive operations based on the projected image.

Description

Projection method and system with augmented reality effect
Technical field
The present invention relates to projection systems, and more particularly to a projection method and system having an augmented reality effect.
Background art
The usual projection imaging surface is a smooth plane or circular arc surface; that is, the projection display area of a projector is normally a plane or an arc, with no three-dimensional intersecting surfaces, let alone solid objects or spaces, within the projection display area.
As the applications of projection technology have expanded, products with stereoscopic scene projection functions have come onto the market; for example, some theater stage sets and light-show programs use such projectors to project virtual scenes shaped to the projected objects. However, such a virtual scene is usually still a roughly planar projection area, with no consideration of how, on a solid object or in a three-dimensional space, a picture-corrected image simulating a real scene could be projected onto intersecting surfaces, or a virtual object generated. Moreover, because the audience in theaters and light shows is far from the projected picture, augmented reality images are usually not designed around a preset narrow optimal viewing angle, and such systems usually lack interactive manipulation functions.
Summary of the invention
In view of the above drawbacks of the prior art, the present invention addresses the problem that existing projection systems can only project virtual images onto a roughly planar projection area and cannot map fully virtual images onto intersecting surfaces.
To solve this technical problem, the present invention provides a projection method with an augmented reality effect, in which the projection display area contains two or more surfaces onto which projected images can fall, at least two of which are intersecting surfaces whose intersection angle relative to the viewer is less than 180 degrees. The method includes the following steps: S1, presetting projection application pictures corresponding to the respective surfaces; S2, processing the preset projection application pictures to form pictures graphically corrected for the respective surfaces, which are then mapped by the projection lens unit to form, in the projection display area, projected images that match the respective surfaces and have the expected augmented reality effect.
In the projection method of the present invention, the projection application picture contains at least one overall image to be mapped onto at least two intersecting surfaces; the overall image either displays part of the picture on each surface simultaneously, or is mapped onto the surfaces in succession, displaying part of a continuous motion on each surface in turn.
The projection method of the present invention further includes the step of presetting an optimal viewing angle; the viewer must be at the optimal viewing angle to see the best effect. When the preset projection application picture is processed in step S2, graphic correction is performed based on the optimal viewing angle, the respective surfaces, and the interrelationships between them, and the result is then mapped by the projection lens unit.
The projection method of the present invention further includes a spatial recognition step: three-dimensional spatial information is collected for the projection space containing the projection display area, to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and the three-dimensional spatial information is reconstructed. When the preset projection application picture is processed in step S2, graphic correction is performed based on the optimal viewing angle, the three-dimensional spatial information, and the relationships between them; the result is then mapped by the projection lens unit to form, in the projection display area, a projected image with an augmented reality effect that matches the at least two intersecting surfaces and/or the at least one solid object.
In the projection method of the present invention, the spatial recognition step further includes recognizing the angles of incidence of the projected light on the two intersecting surfaces and/or the at least one solid object, and reconstructing the three-dimensional spatial information accordingly. When the preset projection application picture is processed in step S2, graphic correction is performed based on this angle-of-incidence information; the result is then mapped by the projection lens unit to form, in the projection display area, a projected image with an augmented reality effect that matches the at least two intersecting surfaces and/or the at least one solid object.
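To illustrate the angle-of-incidence recognition in the spatial recognition step — this is not the patent's actual algorithm, and the ray and normal vectors are invented — the incidence angle follows from the dot product of the projection ray direction and the surface normal:

```python
import math

def incidence_angle_deg(ray_dir, surface_normal):
    """Angle between an incoming projection ray and the surface normal, in degrees.

    0 deg means the ray hits the surface head-on; values near 90 deg mean a
    grazing hit that would need stronger keystone/brightness compensation.
    """
    dot = sum(r * n for r, n in zip(ray_dir, surface_normal))
    norm = math.dist((0, 0, 0), ray_dir) * math.dist((0, 0, 0), surface_normal)
    cos_theta = abs(dot) / norm   # abs(): the normal's orientation is irrelevant here
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# A ray pointing straight down onto a horizontal table top hits head-on;
# the same ray grazes a vertical wall (normal along +x).
print(round(incidence_angle_deg((0, 0, -1), (0, 0, 1)), 1))  # 0.0
print(round(incidence_angle_deg((0, 0, -1), (1, 0, 0)), 1))  # 90.0
```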
In the projection method of the present invention, in step S1, a projection application picture matching the at least two intersecting surfaces and/or the at least one solid object is selected, automatically or manually, from the preset projection application pictures.
In the projection method of the present invention, the surfaces include one base surface onto which the projected image can be mapped, plus one or more additional surfaces; each additional surface is a face of one of the following solid objects, or of a combination of one or more of them: a cube, cuboid, pyramid, cone, sphere, cylinder, or frustum.
The projection method of the present invention further includes the step of adjusting the viewer's current viewing angle, manually or automatically; when the preset projection application picture is processed in step S2 to form the graphically corrected picture, the processing is also based on the viewer's current viewing-angle information.
In the projection method of the present invention, when the viewer's current viewing angle is adjusted automatically, the method further includes the step of recognizing the viewer's current viewing angle; when the preset projection application picture is processed in step S2 to form the graphically corrected picture, the processing is also based on the recognized current viewing-angle information.
In the projection method of the present invention, the step of automatically recognizing the viewer's current viewing angle uses a sensor to locate the viewer, for example by having the viewer wear glasses with a position sensor, thereby locating the approximate spatial position of the viewer's eyes; in step S2 the preset projection application picture is processed according to the spatial position of the viewer's eyes to form the graphically corrected picture.
The present invention also provides a projection system with an augmented reality effect, comprising a projection lens unit and an image processing unit, in which the projection display area formed by the projection lens unit contains two or more surfaces onto which projected images can be mapped, at least two of which are intersecting surfaces whose intersection angle relative to the viewer is less than 180 degrees. The system further includes a picture storage unit for storing preset projection application pictures corresponding to the respective surfaces. The image processing unit processes the projection application pictures to form pictures graphically corrected for the respective surfaces, which are then mapped by the projection lens unit to form, in the projection display area, projected images that match the respective surfaces and have the expected augmented reality effect.
The projection system of the present invention further includes a spatial recognition unit for collecting three-dimensional spatial information about the projection space containing the projection display area, to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and to reconstruct the three-dimensional spatial information. The image processing unit processes the selected projection application picture based on the three-dimensional spatial information to form a picture graphically corrected for the at least two intersecting surfaces and/or the at least one solid object; this picture is then mapped by the projection lens unit to form, in the projection display area, a projected image with an augmented reality effect that matches them.
In the projection system of the present invention, the spatial recognition unit includes a monitoring module and an image analysis and recognition module. The monitoring module collects image signals from the projection space; the image analysis and recognition module analyzes the monitoring module's image signals to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and reconstructs the three-dimensional spatial information.
In the projection system of the present invention, the monitoring module is based on either visible light or infrared light. A visible-light monitoring module includes a single or dual monitoring lens; an infrared monitoring module includes a single monitoring lens, a dual monitoring lens, a TOF sensor, or a structured-light sensor.
The projection system of the present invention further includes a viewing-angle recognition unit for recognizing the viewer's current viewing angle; when the image processing unit processes the projection application picture to form the graphically corrected picture, the processing is also based on the viewer's current viewing angle as determined by the viewing-angle recognition unit.
The projection system of the present invention further includes glasses with a position sensor for the viewer to wear; the sensor locates the viewer's position and thereby the spatial position of the viewer's eyes, and when the image processing unit processes the projection application picture to form the graphically corrected picture, the processing is also based on the spatial position of the viewer's eyes.
The projection system of the present invention further includes an information processing unit and an interactive manipulation unit that supports interactive operations based on the projected image; the interactive manipulation unit works with the information processing unit to implement interactive manipulation based on the projected image.
In the projection system of the present invention, the interactive manipulation unit includes a remote controller based on wireless communication; the remote controller is one or a combination of a mobile phone, tablet computer, gamepad, or air mouse, and transmits interactive control signals over 2.4G, Bluetooth, or WiFi.
In the projection system of the present invention, the interactive manipulation unit includes an infrared-light-based external remote controller and an infrared monitoring lens. The external remote controller can form an infrared spot within the projection display area; the infrared monitoring lens captures the spot, the image processing unit displays a corresponding visible icon according to the spot's position information, and the external remote controller controls the visible icon to implement interactive manipulation.
In the projection system of the present invention, the interactive manipulation unit includes a direct-touch interactive manipulation unit, with the interaction mode selected from dual-monitoring-lens mode, TOF sensor mode, or structured-light sensor mode.
In the projection system of the present invention, the interactive manipulation unit may share some or all of the monitoring module's functions with the spatial recognition unit.
With the technical solution of the present invention, real objects within the projection display area can be combined with pictures preset in the projector: augmented reality (AR) techniques generate picture content that is mapped onto the objects, and combined with interactive projection techniques this can produce special projection games with increased realism, a distinctive category of game not yet available on the market.
Brief description of the drawings
The present invention will be further described below in conjunction with the accompanying drawings and embodiments, in which:
Figs. 1A, 1B and 1C show the projection system and its scene in preferred embodiment 1 of the present invention;
Figs. 2A, 2B and 2C show the projection system and its scene in preferred embodiment 2 of the present invention;
Figs. 3A and 3B show the projection system and its scene in preferred embodiment 3 of the present invention;
Fig. 4A shows the intersecting surfaces and intersection angles when a cuboid is placed;
Fig. 4B shows the intersecting surfaces and intersection angles when a cylinder is placed;
Fig. 4C shows the intersecting surfaces and intersection angles when a hemisphere is placed;
Fig. 5A shows a window projected onto two intersecting walls used as the projection area;
Fig. 5B shows a triangular block projected onto two intersecting walls used as the projection area;
Fig. 5C shows the three intersecting walls of a room corner used as the projection area;
Fig. 6A shows the viewing effect when the viewer is at the preset optimal viewing angle;
Fig. 6B shows the viewing effect when the viewer is not at the preset optimal viewing angle;
Fig. 6C shows the viewing effect after the viewer's angle has been adjusted from the situation of Fig. 6B;
Fig. 7 is a functional block diagram of the projection system in one embodiment of the present invention.
Detailed description
Fig. 7 is a functional block diagram of the projection system in a preferred embodiment of the present invention. The projection system includes a projection lens unit, an image processing unit, and a picture storage unit; the picture storage unit stores the preset projection application pictures.
In operation, the projection display area formed by the projection lens unit contains two or more surfaces onto which projected images can be mapped, at least two of which are intersecting surfaces whose intersection angle relative to the viewer is less than 180 degrees. The image processing unit processes the projection application picture to form a picture graphically corrected for the respective surfaces, which is then mapped by the projection lens unit to form, in the projection display area, a projected image that matches the respective surfaces and has the expected augmented reality effect.
The projection system also includes a spatial recognition unit for collecting three-dimensional spatial information about the projection space defined by the projection imaging unit and its projection display area, to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and to reconstruct the three-dimensional spatial information. In other words, the spatial recognition unit can automatically recognize solid objects or intersecting surfaces within the projection space; the image processing unit processes the projection application picture based on the three-dimensional spatial information output by the spatial recognition unit to form a picture graphically corrected for the at least two intersecting surfaces and/or the at least one solid object, which is then mapped by the projection lens unit to form, in the projection display area, a projected image matching them.
As can be seen from Fig. 7, the spatial recognition unit includes a monitoring module and an image analysis and recognition module. The monitoring module collects image signals from the projection space; the image analysis and recognition module analyzes these signals to identify whether there are at least two intersecting surfaces and/or at least one solid object in the projection space, and reconstructs the three-dimensional spatial information.
In a specific implementation, the monitoring module may be based on visible light or infrared light; a visible-light monitoring module may use a single or dual monitoring lens, and an infrared monitoring module may use a single monitoring lens, a dual monitoring lens, a TOF sensor, or a structured-light sensor.
As can also be seen from Fig. 7, the projection system includes an information processing unit and an interactive manipulation unit that supports interactive operations based on the projected image; the interactive manipulation unit works with the information processing unit to implement this interaction. The interactive manipulation unit may share some or all of the monitoring module's functions with the spatial recognition unit; for example, the two may share the monitoring lenses in the monitoring module, and when the monitoring module has dual lenses the interactive manipulation unit shares at least one of them.
Embodiment 1 of the present invention is shown in Figs. 1A, 1B and 1C. This is a game application example: the projector 100 is located at the top, and the projection lens unit 102 projects the preset projection application picture into the projection display area 118. Within the projection display area 118 there is a cuboid 108; from the perspective shown, the human eye can see the upper surface (face A) 110, the right surface (face C) 114, and the front surface (face B) 112 of the cuboid 108. The remainder of the projection display area 118 outside the cuboid 108 is an ordinary plane (face D) 116.
The projector 100 contains a picture storage unit for storing the preset projection application pictures. After being processed by the image processing unit in the projector 100, the projection application picture becomes a picture graphically corrected for the cuboid 108, which is then projected by the projection lens unit 102 to form, in the projection display area 118, a projected image matching the cuboid 108 and having the expected augmented reality effect.
As shown in Fig. 1B, a button graphic, a wall graphic, and a staircase graphic are projected onto the cuboid 108: a first button 122 is projected onto face A 110, a wall graphic onto face B 112, and a staircase graphic 120 between face D and face C.
Regarding the expected augmented reality effect, as shown in Fig. 1B, after projection, face B 112 of the cuboid 108 is turned into a virtual wall, and a virtual button is generated on face A 110. The staircase graphic 120 in Fig. 1B illustrates another kind of augmented reality picture: the staircase must be projected between face D and face C; that is, it is a single overall image mapped simultaneously onto two intersecting surfaces, with part of the picture displayed on face C and part on face D.
In the actual processing, the stair steps, the stretching angles, the shadows to be generated, and other elements must be matched to the position and orientation of the cuboid 108, so the graphics from the graphics library must be stretched (in whole or in part) and graphically corrected, in advance or in real time, to form a staircase with an augmented reality effect, with a second button 124 generated beside it. If there were no such cuboid in the projection area, the projected image would need no such stretching or correction, and the projected picture would look completely different.
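The stretching and graphic correction described here amount, for each planar face, to a projective warp: the quadrilateral that a region of the projector's frame buffer should occupy on the face is related to the source quadrilateral by a 3x3 homography. The sketch below is illustrative only — the patent does not disclose its algorithm, and the corner coordinates are invented — and solves the homography from four point correspondences, then applies it:

```python
def solve_homography(src, dst):
    """3x3 homography H (with h22 fixed to 1) mapping four src points to four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    # Gaussian elimination with partial pivoting on the 8x8 system A h = b.
    n = 8
    M = [row + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (M[r][n] - sum(M[r][c] * h[c] for c in range(r + 1, n))) / M[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def warp_point(H, p):
    """Apply the homography to a 2D point (homogeneous divide included)."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical correction: the unit square in the frame buffer must land on a
# skewed quadrilateral so that it appears undistorted from the viewing position.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (1.2, 0.1), (1.0, 1.0), (-0.1, 0.9)]
H = solve_homography(src, dst)
print(warp_point(H, (1, 0)))   # close to (1.2, 0.1)
```

Per-pixel image warping libraries do exactly this under the hood; with one such homography per visible face of the cuboid, the library graphics can be pre-warped for each face.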
In one implementation, a preset projection application picture that has already been geometrically corrected is projected onto the cuboid 108 placed at a designated location. Alternatively, the position of the cuboid 108 may be changed; the monitoring lenses 104, 106 then detect its new position, the corresponding projection application picture is selected and processed by the image processing unit into a picture geometrically corrected for the cuboid 108, and the projection lens unit projects the result.
The invention further includes an information processing unit and an interactive control unit that enables interaction based on the projected image; the two units work together to realize this interaction. This embodiment uses a direct-touch interactive control unit in a dual-monitoring-lens configuration: two monitoring lenses 104, 106 are arranged on either side of the projection lens unit 102, and an infrared light source is integrated into the projector 100, emitting infrared light that covers the entire projection display area 118. The infrared source may instead be mounted outside the projection lens, directly illuminating the entire projection display area 118. When a finger enters the projection scene and touches the projected image, the two infrared monitoring lenses pass their simultaneously captured images to the interaction algorithm module, which computes the spatial position of the finger and thereby realizes stereoscopic touch operation. The two infrared monitoring lenses must first be calibrated and rectified; an image disparity map is then acquired, spatial reconstruction is performed through image tracking, image segmentation, and image recognition, and the algorithm computes the three-dimensional position of the finger as well as the three-dimensional relief map of the projected picture.
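The disparity-to-position computation performed by the interaction algorithm module can be illustrated for a calibrated, rectified two-camera rig. A sketch under assumed parameters — the focal length, baseline, and pixel coordinates below are hypothetical values, not taken from the specification:

```python
def stereo_point(x_left, x_right, y, focal_px, baseline_m):
    """Back-project one matched feature (e.g. a fingertip) seen by two
    calibrated, rectified cameras into 3D, in the left-camera frame.
    x_left / x_right are the horizontal pixel coordinates of the same
    feature; after rectification both views share the image row y."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    z = focal_px * baseline_m / disparity   # depth along the optical axis
    x = x_left * z / focal_px               # lateral offset
    y3 = y * z / focal_px                   # vertical offset
    return x, y3, z

# A fingertip seen 14 px apart by cameras 6 cm apart, f = 700 px:
X, Y, Z = stereo_point(70.0, 56.0, 35.0, 700.0, 0.06)   # depth Z = 3.0 m
```

Running the same computation over every pixel of the disparity map yields the "three-dimensional relief map" of the projection scene mentioned above.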
In this embodiment, when a hand enters the projection scene, the two monitoring lenses 104, 106 track the spatial position of the finger, and the touch operation being performed is determined from the finger's trajectory and spatial coordinates — for example, taps and swipes at different heights in space. As shown in FIG. 1B, in this game a finger tap on the game character's head activates or pauses the character, and the swipe direction sets the character's direction of movement. When the character is about to climb the virtual staircase onto surface A 110, tapping the first button 122 replaces the graphic mapped onto surface A 110 with a swimming-pool graphic, and the character enters the virtual pool and swims happily, as shown in FIG. 1C. Tapping the second button 124 makes the pool on surface A disappear, restoring the original graphic, after which the finger can steer the character elsewhere.
The game character itself also carries an expected augmented reality effect: as it climbs from surface D up the virtual staircase onto surface A, the character is a single integral image that is mapped first onto surface D, then onto the virtual staircase, then onto surface A — that is, it is mapped continuously across the surfaces, each of which in turn displays part of a continuous motion.
The direct-touch interactive control unit of this embodiment may instead operate in a TOF (Time of Flight) sensor mode or a structured-light sensor mode.
The basic principle of a TOF sensor is to receive light returned from an object and derive the target distance from the measured round-trip flight time of the light pulse. A TOF sensor resembles an ordinary machine-vision imaging chain — both consist of a light source, optics, a sensor, control circuitry, and processing circuitry — but the TOF sensor obtains the target distance by detecting the emitted and reflected light. By reconstructing the three-dimensional information of the projection space with the TOF sensor, the position of the finger in three-dimensional space can be computed and a touch in space recognized.
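The time-of-flight relation is simply distance = speed of light × round-trip time / 2. A one-function sketch with a hypothetical measurement (the 20 ns figure is an illustrative value, not from the specification):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, exact by definition

def tof_distance_m(round_trip_s):
    """Target distance from the measured round-trip time of a light
    pulse: the pulse covers the distance twice, hence the divide by 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A 20 ns round trip corresponds to roughly 3 m of range:
d = tof_distance_m(20e-9)
```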
A structured-light sensor works by encoding the measurement space with continuous near-infrared light; a sensor reads the encoded light, which a chip then decodes into a depth image. Unlike the traditional structured-light method, however, the source does not emit a periodically varying two-dimensional image code but a "volume code" with three-dimensional depth. This source is laser speckle: the random diffraction pattern formed when laser light strikes a rough object or passes through frosted glass. The speckles are highly random and their pattern changes with distance, so any two locations in space carry different patterns — effectively labeling the entire space — and the position of any object entering or moving through the space can be recorded exactly. By reconstructing the three-dimensional information of the projection space with the structured-light sensor, the position of the finger in three-dimensional space can be computed and a touch in space recognized.
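Per image patch, the speckle decoding described above reduces to finding how far the observed pattern has shifted relative to a stored reference — the shift encodes depth. A toy one-dimensional sketch using exhaustive normalized cross-correlation (the function, signal sizes, and random-speckle stand-in are illustrative assumptions):

```python
import numpy as np

def speckle_shift(reference, observed):
    """Locate a reference speckle stripe inside an observed signal by
    exhaustive normalized cross-correlation; the best-matching offset
    is returned.  In a structured-light system this shift, per patch,
    is what gets converted into a depth value."""
    ref = (reference - reference.mean()) / reference.std()
    best_shift, best_score = 0, -np.inf
    for s in range(len(observed) - len(reference) + 1):
        win = observed[s:s + len(reference)]
        win = (win - win.mean()) / win.std()
        score = float(np.dot(ref, win))   # max score = len(ref) at a perfect match
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Embed a random 16-sample "speckle stripe" at offset 7 and recover it:
rng = np.random.default_rng(0)
ref = rng.normal(size=16)
obs = rng.normal(size=64)
obs[7:23] = ref
shift = speckle_shift(ref, obs)   # -> 7
```

A real decoder does this in two dimensions over many patches at once, but the matching principle is the same.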
In the above embodiment, a space recognition unit may additionally be provided to acquire three-dimensional information about the projection space containing the projection display area, identify whether the space contains at least two intersecting surfaces and/or at least one solid object, and reconstruct the three-dimensional spatial information. In other words, even if the cuboid of FIG. 1A is placed arbitrarily, it is recognized automatically and the subsequent steps are completed. The image processing unit then processes the selected projection application picture on the basis of this three-dimensional information to form a picture geometrically corrected for each surface of the cuboid; the projection lens unit maps the result, forming in the projection display area a projected image with an augmented reality effect matched to the cuboid.
The space recognition unit comprises a monitoring module and an image analysis and recognition module. The monitoring module collects image signals from the projection space; the image analysis and recognition module analyzes these signals to identify whether the projection space contains at least two intersecting surfaces and/or at least one solid object and to reconstruct the three-dimensional spatial information. In particular, the interactive control unit of the first embodiment can also serve as part of the space recognition unit. The monitoring module may be based on visible light or on infrared light: a visible-light module may use a single or dual monitoring lens, while an infrared module may use a single monitoring lens, dual monitoring lenses, a TOF sensor, or a structured-light sensor.
The second embodiment of the invention is shown in FIGS. 2A, 2B, and 2C. The projector 200 is mounted at the top, and the projection lens unit 202 projects a preset projection application picture into the projection display area 218. A conical object 208 stands inside the projection display area 218; from the viewpoint shown, the eye sees surfaces A and B of the conical object. The area of the projection display area 218 outside the conical object is an ordinary plane (surface D) 216.
In this embodiment the conical object 208 is mapped into a pyramid. The rendering texture is first matched from the graphics library; then a suitable stretch (whole or local) and geometric correction are computed from the position and attitude of the conical object, the height of the projector, and so on, generating a solid pyramid with an augmented reality effect and a virtual base 240 beneath surfaces A and B of the pyramid.
This embodiment uses a wireless remote controller for interaction, specifically an external remote controller 230. The remote controller may be a conventional gamepad with forward, back, left, and right movement keys and various action keys, its commands transmitted to the projector over 2.4 GHz, Bluetooth, or WiFi. The remote controller 230 shown is a two-player handle; the player selects single- or two-player mode and uses it to control the actions of the game characters in the projected picture, which can build a protective ring 242 around the pyramid, climb the pyramid, draw graffiti on it, and so on.
The third embodiment of the invention is shown in FIGS. 3A and 3B. The projector 300 is mounted at the top, and the projection lens unit 302 projects a preset projection application picture into the projection display area 318. A cube 308 stands inside the projection display area 318; from the viewpoint shown, the eye sees the top surface (surface A) 310, the right surface (surface C) 314, and the front surface (surface B) 312 of the cube 308. The area of the projection display area 318 outside the cube 308 is an ordinary plane (surface D) 316.
FIG. 3B shows an ancient-warfare game. In the projection display area 318, surface A of the cube 308 is mapped into a gate tower and surface C 314 into a city gate 372, flanked on both sides by augmented-reality city walls 370. The player is free to play the besieging side 376 or the defending side 374.
A player on the defending side can hold the city with various resources — bows and arrows, rolling logs, dropped stones, boiling oil, and so on — and when the besiegers weaken, the defenders can open the gate and charge out. To heighten the excitement, defensive features such as a moat and a gate drawbridge can be added as augmented reality around the walls. This game still requires pre-correction of the graphics for the object placed in the projection area so that the mapped image can generate the background and plot the game needs.
This embodiment uses an external infrared remote controller 360 for touch operation. The remote controller emits infrared light of a specific wavelength onto the projected picture; the projector's monitoring lens detects it, generates an indicator icon 362 at the corresponding position in the projection display area, and performs movement in all directions and other interactions according to the remote-control commands. Operating the game under a projector thus feels much like using a touch tablet, while the augmented-reality projection goes far beyond the user experience of ordinary tablet products. For the working principle of the infrared remote control, see Chinese invention patent applications CN104834394A and CN104020897A; the interaction schemes of both applications can be used in this embodiment.
Chinese invention patent application CN104834394A discloses an interactive display system comprising an infrared light source and a monitoring device. The monitoring device includes an interaction module and an interaction control unit; the interaction module includes a monitoring lens, a beam-splitting element, a visible-light monitoring element, and an infrared monitoring element. The visible-light projection picture captured by the monitoring lens is imaged onto the visible-light monitoring element to form a first image, and the infrared spot captured by the monitoring lens is imaged onto the infrared monitoring element to form a second image. The interaction control unit is connected to both monitoring elements and outputs the position of the infrared spot to the display unit, which displays a visible icon at that position; the icon supports click, drag, zoom, and similar operations, realizing human-computer interaction. In the third embodiment, the monitoring lens 304, besides serving interaction, can also act as a space recognition unit, using visible light to detect the position of the cube 308 and outputting spatial information for real-time geometric correction and similar purposes.
Chinese invention patent application CN104020897A discloses an interactive display system comprising a main control unit for information processing, a display unit that generates a display picture from received display information, one or more remote-control units, and a monitoring module. The main control unit communicates with the display unit and the monitoring module. Each remote-control unit includes an infrared emission module that emits infrared light of two different wavelengths onto the display picture to produce infrared spots. The monitoring module includes a monitoring lens that captures the infrared spots in the display picture, a first beam-splitting element, and first and second infrared monitoring elements that respectively receive the first and second infrared light of different wavelengths. With this scheme, several points can interact within one display picture at once, each supporting click, drag, and zoom of a visible icon, improving the human-computer interaction experience.
In the embodiment of FIG. 4A, a square object 410 is placed on the projection tabletop. Within the visual range shown, four pairs of intersecting planes are visible: the front surface 412 intersects the tabletop 408, the right surface 413 intersects the tabletop 408, the top surface 411 intersects the front surface 412, and the top surface 411 intersects the right surface 413. A best-visual intersecting pair is one whose included angle, as seen by the viewer, is less than 180°. In FIG. 4A, angles 414 and 415 are less than 180°, so those pairs are best-visual intersecting planes, between which geometric correction or 3D image mapping can be carried out; angles 416 and 417 exceed 180° (reaching 270°), so those pairs are not best-visual intersecting planes and cannot be used for geometric correction or 3D image mapping.
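Whether two faces form a best-visual pair can be decided numerically: project the geometry into the plane perpendicular to the shared edge and measure the wedge angle on the side where the viewer stands. A sketch — the function name, the point-based interface, and the sample coordinates are assumptions for illustration, not part of the specification:

```python
import numpy as np

def viewer_side_angle_deg(edge_a, edge_b, p_face1, p_face2, viewer):
    """Angle (degrees) of the wedge between two faces sharing the edge
    a-b, measured on the side of the edge where the viewer stands.
    p_face1 / p_face2 are points lying on the two faces.  The pair is
    a 'best visual' intersection when the result is below 180."""
    a = np.asarray(edge_a, float)
    e = np.asarray(edge_b, float) - a
    e /= np.linalg.norm(e)

    def perp_dir(p):          # direction of p from the edge, normal to it
        v = np.asarray(p, float) - a
        v -= np.dot(v, e) * e
        return v / np.linalg.norm(v)

    v1, v2, w = perp_dir(p_face1), perp_dir(p_face2), perp_dir(viewer)
    x, y = v1, np.cross(e, v1)   # 2D basis in the plane normal to the edge
    ang = lambda v: np.degrees(np.arctan2(np.dot(v, y), np.dot(v, x))) % 360.0
    t2, tw = ang(v2), ang(w)
    # viewer inside the v1->v2 wedge gets that wedge; otherwise the other one
    return t2 if tw <= t2 else 360.0 - t2

# Floor meets wall, viewer inside the corner: 90 deg (best visual).
inner = viewer_side_angle_deg((0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0), (1, 1, 0))
# Top face meets front face of a box, viewer outside: 270 deg (not best visual).
outer = viewer_side_angle_deg((0, 0, 1), (1, 0, 1), (0, 1, 1), (0, 0, 0), (0, -1, 2))
```

For the curved intersections of FIGS. 4B and 4C the same test can be applied to the local tangent planes along the intersection curve.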
In FIG. 4B a cylindrical object 420 is placed on the projection tabletop, producing two pairs of intersecting surfaces: the cylindrical surface 422 (the front half visible to the eye) intersects the tabletop 408, and the top surface 421 intersects the cylindrical surface 422. Angle 425 is less than 180°, making that pair a best-visual intersecting pair; angle 426 exceeds 180°, so that pair is not.
In FIG. 4C a hemispherical object 430 is placed on the projection tabletop, giving a single intersecting pair: the spherical surface 431 intersects the tabletop 408. Angle 432 is less than 180°, so this is a best-visual intersecting pair on which geometric correction or 3D image mapping can be performed.
FIG. 5A shows a wall-projection embodiment; intersecting walls are a common spatial environment in daily life. The projector projects an image onto two intersecting walls, forming a projected image 510 on the left and right walls 506, 508. In this embodiment, the three-dimensional positions of the two walls 506, 508 relative to the projector can be computed, and the image processing unit geometrically corrects the image information on the walls 506, 508, projecting half a window onto each wall so that together they form a half-open window 512, as shown in FIG. 5A. Alternatively, augmented-reality images such as stereoscopic images can be projected into the angle between the two walls, for example the virtual triangular-block image 516 of FIG. 5B.
FIG. 5C shows a corner-projection embodiment: the projector projects an image into a corner of the room so that it falls on the top surface 504 and the left and right walls 506, 508. Without geometric correction, the image on the three surfaces would appear as the dashed outline in the figure — an irregular polygon. In this embodiment, the three-dimensional positions of the three surfaces can be computed, the image processing unit geometrically corrects the image information for the three surfaces, and augmented-reality projection mapping is performed at the corner (where surfaces A, B, and C meet) to form a virtual bird's-nest image 518.
The projection system of the invention has a preset optimal viewing angle from which the viewer must watch to see the best effect. When the preset projection application picture is processed, geometric correction is performed on the basis of the optimal viewing angle, the individual surfaces, and the relationships between them, after which the projection lens unit maps the result.
The viewer's current viewing angle can also be adjusted as needed, either manually or automatically. Automatic adjustment includes a step of identifying the viewer's current viewing angle, for example by locating the viewer with sensors — such as having the viewer wear glasses fitted with a position sensor — so that the approximate spatial position of the viewer's eyes is known and the preset projection application picture can be processed according to that eye position to form the geometrically corrected picture.
As shown in FIG. 6A, a cuboid 604 stands on the tabletop 602, and the projector 600's preset projection image is a rectangle 606. When the viewer watches from the preset optimal viewing angle (the wide side of the table), after the image processing unit geometrically corrects the preset projection image, the projected image the viewer sees is still a rectangle.
If the viewer instead chooses to watch from the short side of the table, the image seen is no longer a rectangle but the shape shown in FIG. 6B. This is why the invention must emphasize the best viewing position.
If, when the viewer's viewing angle is adjusted manually or automatically, the image is re-corrected for the viewer's move to the short side of the table, as shown in FIG. 6C, the viewer again sees a rectangle.

Claims (21)

  1. A projection method with an augmented reality effect, wherein the projection display area contains two or more surfaces onto which the projected image can fall, at least two of which are intersecting surfaces whose included angle relative to the viewer is less than 180 degrees; the method comprises the following steps:
    S1: presetting projection application pictures corresponding to the respective surfaces;
    S2: processing the preset projection application pictures to form pictures geometrically corrected for the respective surfaces, mapping them with a projection lens unit, and forming in the projection display area a projected image matched to the respective surfaces and having the expected augmented reality effect.
  2. The projection method with an augmented reality effect according to claim 1, wherein the projection application picture contains at least one integral image to be mapped onto at least two intersecting surfaces; the integral image either displays part of the picture on each surface simultaneously, or is mapped continuously across the surfaces, each of which in turn displays part of a continuous motion.
  3. The projection method with an augmented reality effect according to claim 2, further comprising a step of presetting an optimal viewing angle, from which the viewer must watch to see the best effect; when the preset projection application picture is processed in step S2, geometric correction is performed on the basis of the optimal viewing angle, the respective surfaces, and the relationships between them, after which the projection lens unit performs the mapping.
  4. The projection method with an augmented reality effect according to claim 3, further comprising a space recognition step of acquiring three-dimensional information about the projection space containing the projection display area, so as to identify whether the projection space contains at least two intersecting surfaces and/or at least one solid object, and reconstructing the three-dimensional spatial information;
    when the preset projection application picture is processed in step S2, geometric correction is performed on the basis of the optimal viewing angle, the three-dimensional spatial information, and the relationships between them; the projection lens unit then performs the mapping, forming in the projection display area a projected image with an augmented reality effect matched to the at least two intersecting surfaces and/or at least one solid object.
  5. The projection method with an augmented reality effect according to claim 4, wherein the space recognition step further comprises identifying the angles of incidence of the projection light onto the two intersecting surfaces and/or the at least one solid object, and reconstructing the three-dimensional spatial information;
    when the preset projection application picture is processed in step S2, geometric correction is performed on the basis of the angles of incidence of the projection light onto the two intersecting surfaces and/or the at least one solid object; the projection lens unit then performs the mapping, forming in the projection display area a projected image with an augmented reality effect matched to the at least two intersecting surfaces and/or at least one solid object.
  6. The projection method with an augmented reality effect according to claim 5, wherein in step S1 a projection application picture matched to the at least two intersecting surfaces and/or at least one solid object is selected from the preset projection application pictures, automatically or manually.
  7. The projection method with an augmented reality effect according to any one of claims 1-6, wherein the surfaces include a base surface onto which the projected image can be mapped and one or more additional surfaces, each additional surface being a surface of one of the following solid objects, or of a combination of one or more of them: a cube, a cuboid, a pyramid, a cone, a sphere, a cylinder, or a frustum.
  8. The projection method with an augmented reality effect according to any one of claims 3-6, further comprising a step of adjusting the viewer's current viewing angle, manually or automatically; when the preset projection application picture is processed in step S2 to form the geometrically corrected picture, the processing is also based on the information about the viewer's current viewing angle.
  9. The projection method with an augmented reality effect according to claim 8, wherein automatically adjusting the viewer's current viewing angle further comprises a step of identifying the viewer's current viewing angle; when the preset projection application picture is processed in step S2 to form the geometrically corrected picture, the processing is also based on the identified current viewing angle of the viewer.
  10. The projection method with an augmented reality effect according to claim 8, wherein in the step of automatically identifying the viewer's current viewing angle, sensors are used to locate the viewer, including having the viewer wear glasses fitted with a position sensor so as to locate the approximate spatial position of the viewer's eyes; in step S2 the preset projection application picture is processed according to the spatial position of the viewer's eyes to form the geometrically corrected picture.
  11. A projection system with an augmented reality effect, comprising a projection lens unit and an image processing unit, wherein:
    所述投影镜头单元所形成的投影显示区域内有两个或多个可供投影图像映射于其上的面,其中至少两个面是相对于观看者的相交角度小于180度的相交面;The projection display area formed by the projection lens unit has two or more faces on which the projected image is mapped, wherein at least two faces are intersecting faces with an angle of intersection of less than 180 degrees with respect to the viewer;
    该系统中还包括用于存储与所述各个面对应的预设投影应用画面的画面存储单元;The system further includes a picture storage unit for storing preset projection application screens corresponding to the respective faces;
    所述图像处理单元对所述投影应用画面进行处理,以形成基于所述各个面进行图形校正的画面,再由所述投影镜头单元进行映射,并在所述投影显示区域内形成与所述各个面匹配的具有预期增强现实效果的投影图像。The image processing unit processes the projection application screen to form a screen for performing pattern correction based on the respective faces, and then mapping by the projection lens unit, and forming and each of the projection display regions Face-matched projected image with the desired augmented reality effect.
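For a single planar face, the per-face "graphics correction" of claim 11 is conventionally modelled as a 3x3 homography H that maps desired on-surface image coordinates to projector frame-buffer coordinates. The claim does not specify this representation; the sketch below assumes it, and the matrix used is a made-up calibration result (a pure shear plus shift), not data from the patent.

```python
def apply_homography(H, x, y):
    """Map the point (x, y) through the homography H (3x3, row-major lists)."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w  # perspective divide

# Hypothetical calibration result for one oblique face:
H = [[1.0, 0.2, 10.0],
     [0.0, 1.0,  5.0],
     [0.0, 0.0,  1.0]]

# Pre-warp one corner (100, 50) of the application picture:
print(apply_homography(H, 100.0, 50.0))  # (120.0, 55.0)
```

In a multi-face setup, the image processing unit would hold one such H per face and warp each region of the application picture with the homography of the face it lands on.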
  12. The projection system with an augmented reality effect according to claim 11, further comprising:
    a space recognition unit for collecting three-dimensional spatial information about the projection space in which the projection display area is located, so as to identify whether the projection space contains at least two intersecting faces and/or at least one solid object, and for reconstructing the three-dimensional spatial information;
    wherein the image processing unit processes the selected projection application picture on the basis of the three-dimensional spatial information to form a picture that is graphics-corrected for the at least two intersecting faces and/or the at least one solid object, which is then mapped by the projection lens unit to form, within the projection display area, a projected image with an augmented reality effect that matches the at least two intersecting faces and/or the at least one solid object.
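One step implied by claim 12's space recognition unit is deciding, after the depth data has been segmented into candidate faces, whether two faces are "intersecting faces" rather than one continuous plane. A common test (not specified in the patent) compares the faces' normals; the points below are hand-picked examples standing in for real sensor output.

```python
import math

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear 3D points."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = [u[1] * v[2] - u[2] * v[1],   # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

def faces_intersect(n1, n2, tol=1e-6):
    """True if two faces are non-parallel, i.e. meet at an angle other than 180 deg."""
    cos_a = abs(sum(a * b for a, b in zip(n1, n2)))
    return cos_a < 1.0 - tol

# A floor (z = 0) and a wall (y = 0) meeting in a corner:
floor_n = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
wall_n = plane_normal((0, 0, 0), (1, 0, 0), (0, 0, 1))
print(faces_intersect(floor_n, wall_n))  # True: the faces form a corner
```

Once the pair is confirmed as intersecting, the reconstructed plane equations feed the per-face graphics correction of the image processing unit.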
  13. The projection system with an augmented reality effect according to claim 12, wherein the space recognition unit comprises a monitoring module and an image analysis and recognition module; the monitoring module collects image signals from the projection space; and the image analysis and recognition module analyzes the monitoring image signals from the monitoring module to identify whether the projection space contains at least two intersecting faces and/or at least one solid object, and reconstructs the three-dimensional spatial information.
  14. The projection system with an augmented reality effect according to claim 13, wherein the monitoring module is a visible-light-based monitoring module or an infrared-light-based monitoring module; the visible-light-based monitoring module comprises a single monitoring lens or dual monitoring lenses; and the infrared-light-based monitoring module comprises a single monitoring lens, dual monitoring lenses, a TOF sensor, or a structured-light sensor.
  15. The projection system with an augmented reality effect according to claim 11, further comprising a viewing-angle recognition unit for recognizing the viewer's current viewing angle; when the image processing unit processes the projection application picture to form the graphics-corrected picture, the processing is further based on the viewer's current viewing angle obtained by the viewing-angle recognition unit.
  16. The projection system with an augmented reality effect according to claim 15, further comprising glasses fitted with a position sensor for the viewer to wear, the sensor being used to locate the viewer and thereby the spatial position of the viewer's eyes; when the image processing unit processes the projection application picture to form the graphics-corrected picture, the processing is further based on the spatial position of the viewer's eyes.
  17. The projection system with an augmented reality effect according to claim 11, further comprising an information processing unit and an interactive control unit capable of interactive control based on the projected image, the interactive control unit working together with the information processing unit to implement interactive control based on the projected image.
  18. The projection system with an augmented reality effect according to claim 17, wherein the interactive control unit comprises a wireless remote controller, the remote controller comprising one of, or a combination of, a mobile phone, a tablet computer, a game controller, or an air mouse; and the remote controller transmits interactive control signals via 2.4 GHz, Bluetooth, or Wi-Fi communication.
  19. The projection system with an augmented reality effect according to claim 17, wherein the interactive control unit comprises an infrared-light-based external remote controller and an infrared monitoring lens; the external remote controller can form an infrared spot within the projection display area, the infrared monitoring lens captures the infrared spot, and the image processing unit displays a corresponding visible icon according to the position information of the infrared spot, the visible icon being controlled via the external remote controller to implement the interactive control function.
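The spot-tracking loop of claim 19 comes down to two steps per frame: find the centroid of the bright region in the infrared camera image, then map it into frame-buffer coordinates so the visible cursor icon can be drawn there. The sketch below is illustrative only: the 4x4 frame is a toy example, and the camera-to-projector mapping is simplified to a scale plus offset where a real system would use a calibrated homography.

```python
def spot_centroid(frame, threshold=200):
    """Centroid (x, y) of pixels brighter than `threshold`, or None if no spot."""
    sx = sy = count = 0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:
                sx += x
                sy += y
                count += 1
    if count == 0:
        return None  # no infrared spot in this frame
    return sx / count, sy / count

def to_projector(pt, scale=(4.0, 4.0), offset=(0.0, 0.0)):
    """Map a camera-space point into projector frame-buffer coordinates."""
    return pt[0] * scale[0] + offset[0], pt[1] * scale[1] + offset[1]

# Toy 4x4 IR frame with a 2x2 bright spot in the lower-right corner:
frame = [[0,   0,   0,   0],
         [0,   0,   0,   0],
         [0,   0, 255, 255],
         [0,   0, 255, 255]]
c = spot_centroid(frame)
print(c)                # (2.5, 2.5)
print(to_projector(c))  # (10.0, 10.0): where to draw the visible icon
```

Running this per captured frame and redrawing the icon at the mapped position yields the cursor behaviour the claim describes.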
  20. The projection system with an augmented reality effect according to claim 17, wherein the interactive control unit comprises a direct-touch interactive control unit, and one interactive control mode is selected from among a dual-monitoring-lens mode, a TOF sensor mode, and a structured-light sensor mode.
  21. The projection system with an augmented reality effect according to claim 17, wherein the interactive control unit and the space recognition unit share some or all of the functions of the monitoring module.
PCT/CN2015/090974 2015-09-28 2015-09-28 Projection method and system with augmented reality effect WO2017054115A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/090974 WO2017054115A1 (en) 2015-09-28 2015-09-28 Projection method and system with augmented reality effect


Publications (1)

Publication Number Publication Date
WO2017054115A1 2017-04-06

Family

ID=58422550




Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102316297A (en) * 2010-07-08 2012-01-11 株式会社泛泰 Image output device and method for outputting an image using the same
CN103246351A (en) * 2013-05-23 2013-08-14 刘广松 User interaction system and method
CN103294886A (en) * 2012-02-22 2013-09-11 方铭 System for reproducing virtual objects
US20140226167A1 (en) * 2013-02-08 2014-08-14 Keio University Method and Apparatus for Calibration of Multiple Projector Systems
CN104427282A (en) * 2013-09-10 2015-03-18 索尼公司 Information processing apparatus, information processing method, and program


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111629190A (en) * 2019-02-28 2020-09-04 中强光电股份有限公司 Projection system and projection method
CN113495406A (en) * 2020-04-01 2021-10-12 中强光电股份有限公司 Interactive projection system and interactive display method of projection system
US11538373B2 (en) 2020-04-01 2022-12-27 Coretronic Corporation Interactive projection system and interactive display method of projection system

Similar Documents

Publication Publication Date Title
CN105182662B (en) Projecting method and system with augmented reality effect
US11734867B2 (en) Detecting physical boundaries
CN109791442B (en) Surface modeling system and method
JP6009502B2 (en) Information processing apparatus and information processing method
US10831278B2 (en) Display with built in 3D sensing capability and gesture control of tv
US8558873B2 (en) Use of wavefront coding to create a depth image
JP5865910B2 (en) Depth camera based on structured light and stereoscopic vision
CN113711109A (en) Head mounted display with through imaging
JP5430572B2 (en) Gesture-based user interaction processing
US20150042640A1 (en) Floating 3d image in midair
JP6697986B2 (en) Information processing apparatus and image area dividing method
US20160343166A1 (en) Image-capturing system for combining subject and three-dimensional virtual space in real time
WO2019123729A1 (en) Image processing device, image processing method, and program
KR20160147495A (en) Apparatus for controlling interactive contents and method thereof
WO2012126103A1 (en) Apparatus and system for interfacing with computers and other electronic devices through gestures by using depth sensing and methods of use
US20130285919A1 (en) Interactive video system
CN105282535B (en) 3D stereo projection systems and its projecting method under three-dimensional space environment
WO2019003383A1 (en) Information processing device and method for specifying quality of material
JP2022546053A (en) Virtual mirror system and method
WO2017054115A1 (en) Projection method and system with augmented reality effect
JP6682624B2 (en) Image processing device
Piérard et al. I-see-3d! an interactive and immersive system that dynamically adapts 2d projections to the location of a user's eyes
KR101002071B1 (en) Apparatus for touching a projection of 3d images on an infrared screen using multi-infrared camera
Luo Study on three dimensions body reconstruction and measurement by using kinect
WO2017054114A1 (en) Display system and display method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15905027

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15905027

Country of ref document: EP

Kind code of ref document: A1