WO2021035891A1 - Projection method and projection device based on augmented reality technology - Google Patents

Projection method and projection device based on augmented reality technology

Info

Publication number
WO2021035891A1
WO2021035891A1 (PCT/CN2019/110873)
Authority
WO
WIPO (PCT)
Prior art keywords
projection
area
information
image information
projectable
Prior art date
Application number
PCT/CN2019/110873
Other languages
English (en)
French (fr)
Inventor
杨伟樑
高志强
李祥
李文祥
丁明内
Original Assignee
广景视睿科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广景视睿科技(深圳)有限公司
Publication of WO2021035891A1
Priority to US17/530,860 (published as US20220078385A1)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191Testing thereof
    • H04N9/3194Testing thereof including sensor feedback
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • H04N9/317Convergence or focusing systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor
    • H04N9/3185Geometric adjustment, e.g. keystone or convergence
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/14Details
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/54Accessories
    • G03B21/56Projection screens
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B2206/00Systems for exchange of information between different pieces of apparatus, e.g. for exchanging trimming information, for photo finishing
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B37/00Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Definitions

  • the embodiments of the present application relate to the technical field of projection equipment, and in particular to a projection method and projection equipment based on augmented reality technology.
  • Augmented reality technology is a new technology that "seamlessly" integrates real-world information and virtual-world information. It takes physical information that is otherwise difficult to experience within a given time and space of the real world (visual information, sound, taste, touch, etc.), simulates it through computers and other technology, and superimposes it, applying the virtual information to the real world where it is perceived by the human senses, thereby achieving a sensory experience beyond reality.
  • the real environment and virtual objects are superimposed onto the same picture or space in real time and coexist there.
  • Augmented reality technology not only presents real-world information but also displays virtual information at the same time; the two kinds of information complement and overlay each other.
  • in visual augmented reality, users can use a helmet-mounted display to composite the real world with computer graphics and then see the real world surrounding them.
  • Augmented reality technology involves new technologies and methods such as multimedia, three-dimensional modeling, real-time video display and control, multi-sensor fusion, real-time tracking, and scene fusion. Augmented reality provides information that differs from what humans can ordinarily perceive.
  • in the course of implementing the present application, the applicant found that the related art has at least the following problems: current augmented reality requires cumbersome body-worn equipment that is inconvenient to wear, limits agility, and cannot provide the user with a good experience.
  • the embodiments of the present application provide a projection method and projection device based on augmented reality technology that do not require existing body-worn equipment and that improve the user experience.
  • a projection method based on augmented reality technology, applied to a projection device that can project a projection object, the projection method based on augmented reality technology including: collecting image information of a real space; constructing a three-dimensional virtual space model according to the image information; determining an optimal projection area according to the three-dimensional virtual space model; and projecting the projection object to the optimal projection area.
  • optionally, constructing a three-dimensional virtual space model according to the image information includes: performing stitching processing on the image information to obtain panoramic image information; parsing three-dimensional size data of the real space from the panoramic image information; and constructing the three-dimensional virtual space model according to the panoramic image information and the three-dimensional size data.
  • optionally, performing stitching processing on the image information to obtain panoramic image information includes: extracting the collection time points corresponding to the image information; arranging the image information in sequence according to the collection time points; and splicing the overlapping parts of two adjacent pieces of image information to obtain the panoramic image information.
  • optionally, determining the optimal projection area according to the three-dimensional virtual space model includes: determining an imaging area according to the three-dimensional virtual space model; and detecting the imaging area to determine the optimal projection area.
  • optionally, detecting the imaging area to determine the optimal projection area includes: detecting the imaging area to determine a projectable area; classifying the projectable area into levels to obtain projectable areas of different levels; and determining the optimal projection area according to the projection object and the projectable areas of different levels.
  • optionally, classifying the projectable area into levels to obtain projectable areas of different levels includes: detecting size information of the projectable area; and classifying the projectable area according to the size information to obtain projectable areas of different levels.
  • optionally, detecting the size information of the projectable area includes: detecting the projectable area by using a size detection area, where the size detection area corresponds to a detection radius and the corresponding size detection area is formed from that detection radius; and, when the area of the size detection area is smaller than the area of the projectable area, increasing the detection radius by a preset length and continuing to detect the projectable area with the enlarged size detection area.
  • optionally, determining the optimal projection area according to the projection object and the projectable areas of different levels includes: acquiring size information and/or motion information of the projection object; and determining the optimal projection area according to the size information and/or motion information and the projectable areas of different levels.
  • optionally, after projecting the projection object to the optimal projection area, the method further includes: performing image correction on the projection object.
  • optionally, performing image correction on the projection object includes: acquiring preset rotation information corresponding to the projection object; generating correction rotation information according to the preset rotation information; and performing image correction on the projection object according to the correction rotation information.
  • optionally, the preset rotation information includes a preset rotation angle and a preset rotation direction, and generating the correction rotation information according to the preset rotation information includes: generating a correction rotation angle equal to the preset rotation angle; and generating a correction rotation direction opposite to the preset rotation direction, the correction rotation angle and the correction rotation direction constituting the correction rotation information.
  • optionally, performing image correction on the projection object includes: acquiring preset turning information of the projection device; generating picture deformation information of the projection object according to the preset turning information; and performing image correction on the projection object according to the picture deformation information.
  • optionally, after projecting the projection object to the optimal projection area, the method further includes: performing automatic focusing on the projection device.
  • optionally, performing automatic focusing on the projection device includes: obtaining, according to the three-dimensional virtual space model, distance information between the projection device and its projection center point in the three-dimensional virtual space model; acquiring preset motion information of the projection device, the preset motion information including a preset movement direction and a preset movement distance; and automatically focusing the projection device according to the distance information and the preset motion information.
  • the projection device includes: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the projection method based on augmented reality technology described above.
  • compared with the prior art, the projection method based on augmented reality technology provided by the embodiments of the present application first collects image information of the real space, constructs a three-dimensional virtual space model according to the image information, then determines the optimal projection area according to the three-dimensional virtual space model, and projects the projection object to the optimal projection area, achieving "seamless" integration of real-world information and virtual-world information.
  • the above method does not require the user to wear complex body-worn equipment, which improves the user experience.
  • FIG. 1 is a schematic diagram of an application environment of an embodiment of the application;
  • FIG. 2 is a schematic flowchart of a projection method based on augmented reality technology provided by an embodiment of the application;
  • FIG. 3 is a schematic flowchart of S20 in FIG. 2;
  • FIG. 4 is a schematic flowchart of S211 in FIG. 3;
  • FIG. 5 is a schematic flowchart of S30 in FIG. 2;
  • FIG. 6 is a schematic flowchart of S32 in FIG. 5;
  • FIG. 7 is a schematic flowchart of S322 in FIG. 6;
  • FIG. 8 is a schematic flowchart of one embodiment of S50 in FIG. 2;
  • FIG. 9 is a schematic flowchart of another embodiment of S50 in FIG. 2;
  • FIG. 10 is a structural block diagram of a projection apparatus based on augmented reality technology provided by an embodiment of the application;
  • FIG. 11 is a structural block diagram of a projection device provided by an embodiment of the application.
  • the embodiment of the application provides a projection method based on augmented reality technology, which is applied to a projection device that can project a projection object.
  • the method first collects image information of the real space and constructs a three-dimensional virtual space model according to that image information.
  • the optimal projection area is then determined according to the three-dimensional virtual space model, and the projection object is projected to the optimal projection area, achieving "seamless" integration of real-world information and virtual-world information.
  • the method does not require the user to wear complex body-worn equipment, which improves the user experience.
  • the following examples illustrate the application environment of the projection method based on augmented reality technology.
  • FIG. 1 is a schematic diagram of an application environment of a projection method based on augmented reality technology provided by an embodiment of the present application; as shown in FIG. 1, the application scene includes a projection device 10, a real space 20, a projection object 30 and a user 40.
  • the projection device 10 is located in the real space 20 and can project the projection object 30 into the real space 20, thereby applying the virtual projection object 30 to the real world, where it is perceived by the senses of the user 40, achieving a sensory experience that transcends reality.
  • the projection device 10 has a built-in memory, and the projection information of the projection object 30 is stored in the memory.
  • the projection information includes the size, movement direction, rotation angle, etc. of the projection object 30.
  • the projection device 10 can project the projection information corresponding to the projection object 30 to the display space.
  • the projection device 10 can also collect image information of the real space 20; construct a three-dimensional virtual space model according to the image information; determine the optimal projection area according to the three-dimensional virtual space model; project the projection object 30 To the best projection area.
  • the projection device 10 includes a processor, a memory, a projection unit, a short-range wireless communication unit, and a network communication unit.
  • the processor is a processing device that controls the corresponding unit of the projection device 10.
  • the processing device may also be used to collect image information in the real space 20; construct a three-dimensional virtual space model according to the image information; determine the optimal projection area according to the three-dimensional virtual space model; project the projection object 30 to The best projection area.
  • the memory is a memory that stores data and the like required for the operation of the processor, and the projection information of the projection object 30 is stored in the memory, and the projection information includes the size, movement direction, rotation angle, etc. of the projection object 30.
  • the projection device 10 can project the projection information corresponding to the projection object 30 to the display space.
  • the projection unit projects the projection information of the projection object 30 stored in the memory onto the display space.
  • the projection unit uses a light source (such as a lamp or a laser) to project an image onto the projection surface of the display space.
  • specifically, when a laser source is used, drawing is performed point by point by scanning across the projection surface of the display space, so the image can be focused at every position of the projection surface without brightening the black portions.
  • the projection device 10 further includes a gyroscope sensor and an acceleration sensor. By combining the detection results of the gyroscope sensor and the acceleration sensor, the projection device 10 can obtain its preset motion information; the preset motion information includes a preset movement direction and a preset movement distance.
  • the projection device 10 further includes an image capturing device, such as a digital single-lens reflex camera, and the image capturing device is used to collect image information of the real space 20.
  • the real space 20 refers to an objectively existing physical space, and the physical space is a three-dimensional space with three dimensions of length, width, and height.
  • the real space 20 has a projection area, such as a wall, a floor, etc., and the projection device 10 can project the projection object 30 onto the projection area.
  • FIG. 2 shows an embodiment of the projection method based on augmented reality technology provided by an embodiment of the application. As shown in FIG. 2, the projection method based on augmented reality technology includes the following steps:
  • S10: Collect image information of a real space.
  • specifically, the image information of the real space may be collected by an image capturing device, which may be a digital single-lens reflex camera.
  • the real space refers to an objectively existing physical space
  • the physical space is a three-dimensional space with three dimensions of length, width, and height.
  • the real space contains projectable areas, such as walls and floors, and the projection device can project the projection object onto such a projectable area.
  • the image information is not necessarily the image itself captured by the image capturing device, but may also be a corrected image obtained by applying correction based on lens characteristic information so as to suppress distortion of the image itself.
  • the lens characteristic refers to information indicating the lens distortion characteristic of the lens attached to the camera that captures the image information.
  • the lens characteristic information may be a known distortion characteristic of the corresponding lens, a distortion characteristic obtained by calibration, or a distortion characteristic obtained by performing image processing on the image information. It should be noted that the aforementioned lens distortion characteristics may include not only barrel distortion and pincushion distortion, but also distortion caused by special lenses such as fisheye lenses.
  • S20: Construct a three-dimensional virtual space model according to the image information. Specifically, the image information is first stitched to obtain panoramic image information; the three-dimensional size data of the real space is then parsed from the panoramic image information; and the three-dimensional virtual space model is constructed from the image information and the three-dimensional size data.
  • S30: Determine an optimal projection area according to the three-dimensional virtual space model. Specifically, the imaging area obtained from the three-dimensional virtual space model is first detected to determine the projectable area; the projectable area is then classified into levels to obtain projectable areas of different levels; and finally the optimal projection area is determined according to the projection object and the projectable areas of different levels.
  • S40: Project the projection object to the optimal projection area. Specifically, the projection device has a built-in memory storing the projection information of the projection object, the projection information including the size, movement direction, rotation angle, and so on of the projection object.
  • the projection device may project projection information corresponding to the projection object to the display space.
  • the projection device includes a processor, a memory, a projection unit, a short-range wireless communication unit, and a network communication unit.
  • the processor is a processing device that controls the corresponding unit of the projection device.
  • the processing device may also be used to collect image information in the real space; construct a three-dimensional virtual space model according to the image information; determine the optimal projection area according to the three-dimensional virtual space model; project the projection object to the The best projection area.
  • the memory is a memory that stores data and the like required for the operation of the processor, and the projection information of the projection object is stored in the memory, and the projection information includes the size, movement direction, rotation angle, etc. of the projection object.
  • the projection device may project projection information corresponding to the projection object to the display space.
  • the projection unit projects the projection information of the projection object stored in the memory onto the display space.
  • the projection unit uses a light source (such as a lamp or a laser) to project an image onto the projection surface of the display space.
  • specifically, when a laser source is used, drawing is performed point by point by scanning across the projection surface of the display space, so the image can be focused at every position of the projection surface without brightening the black portions.
  • the projection device further includes a gyroscope sensor and an acceleration sensor; by combining their detection results, the projection device can obtain its preset motion information, which includes a preset movement direction and a preset movement distance.
  • the projection device further includes an image capturing device, such as a digital SLR camera, and the image capturing device is used to collect image information in a real space.
  • the embodiment of the application provides a projection method based on augmented reality technology. The method first collects image information of the real space, constructs a three-dimensional virtual space model according to the image information, then determines the optimal projection area according to the three-dimensional virtual space model, and projects the projection object to the optimal projection area, achieving "seamless" integration of real-world information and virtual-world information. The above method does not require users to wear complex body-worn equipment, which improves the user experience.
  • in some embodiments, S20 includes the following steps:
  • S21: Perform stitching processing on the image information to obtain panoramic image information.
  • specifically, the image capturing device may capture multiple pieces of image information, so the multiple pieces of image information need to be processed to obtain the panoramic image information.
  • specifically, each piece of image information corresponds to a collection time point (the shooting time), so the pieces of image information can be arranged in sequence according to their collection time points, in chronological order or by viewing angle, and the overlapping parts of adjacent pieces of image information can then be spliced to obtain the panoramic image information.
  • the splicing uses image stitching technology, which splices several images with overlapping parts (possibly taken at different times, from different viewing angles, or by different sensors) into one seamless panoramic or high-resolution image.
  • Image alignment and image fusion are two key technologies for image stitching.
  • Image registration is the basis of image fusion, and the amount of calculation of image registration algorithms is generally very large, so the development of image stitching technology largely depends on the innovation of image registration technology.
  • Early image registration techniques mainly used point matching methods, which were slow and low-precision, and often required manual selection of initial matching points, which could not adapt to the fusion of images with large amounts of data.
  • image stitching mainly includes the following five steps: 1. Image information preprocessing, which includes the basic operations of digital image processing (such as denoising, edge extraction, and histogram processing), establishing an image matching template, and applying certain transforms to the image (such as the Fourier transform or wavelet transform). 2. Image information registration, which uses a matching strategy to find the positions in the reference image that correspond to the template or feature points of the image to be stitched, and thereby determines the transformation relationship between the two images. 3. Establishing a transformation model: the parameter values of the mathematical model are computed from the correspondence between templates or image features, establishing a mathematical transformation model between the two images. 4. Unified coordinate transformation: according to the established mathematical transformation model, the image to be stitched is transformed into the coordinate system of the reference image. 5. Fusion reconstruction: the overlapping areas of the images to be stitched are fused to obtain smooth, seamless panoramic image information.
  • S22 Analyze the three-dimensional size data of the real space according to the panoramic image information.
  • the panoramic image information records the continuous parallax of the real space in its unique imaging manner, with the scene of the real space embedded in it. Depth extraction calculation and error analysis can therefore be performed on the panoramic image information to obtain the three-dimensional size data corresponding to the real space.
  • S23 Construct the three-dimensional virtual space model according to the panoramic image information and the three-dimensional size data.
  • the panoramic image information includes multiple pieces of physical-object image information, obtained by photographing the physical objects (walls, floors, tables and chairs, etc.) in the real space. The three-dimensional virtual space model is then constructed from the physical-object image information and the corresponding three-dimensional size data.
  • in some embodiments, S21 includes the following steps:
  • S211: Extract the collection time points corresponding to the image information.
  • specifically, each piece of image information corresponds to a collection time point, which is the shooting time at which that image information was generated. For example, image information 1 corresponds to collection time point t1, image information 2 to t2, image information 3 to t3, and image information 4 to t4.
  • S212: Arrange the image information in sequence according to the collection time points.
  • specifically, the collection time points are arranged in chronological order, and the image information corresponding to each collection time point can then be arranged in the same order. For example, if the chronological order of collection time points t1, t2, t3, and t4 is t4, t3, t2, t1, the corresponding image information is arranged in the order image information 4, image information 3, image information 2, image information 1.
  • S213: Splice the overlapping parts of two adjacent pieces of image information to obtain the panoramic image information.
  • specifically, since every two adjacent pieces of image information share an overlapping part, the overlapping parts of adjacent pieces can be spliced to obtain the panoramic image information. For example, adjacent image information 4 and image information 3 are spliced, adjacent image information 3 and image information 2 are spliced, and adjacent image information 2 and image information 1 are spliced, finally yielding the panoramic image information, which includes image information 1, image information 2, image information 3, and image information 4.
  • S30 includes the following steps:
  • S31 Determine an imaging area according to the three-dimensional virtual space model.
  • specifically, the three-dimensional virtual space model includes multiple virtual physical-object models, which are three-dimensional models constructed from the physical-object image information and the corresponding three-dimensional size data.
  • each three-dimensional physical-object model has corresponding size information (length, width, and height), from which its projected area can be determined; the imaging area can then be determined according to the size of the projected area.
  • S32: Detect the imaging area and determine the optimal projection area. Specifically, the imaging area is first detected to determine the projectable area; the projectable area is classified into levels to obtain projectable areas of different levels; and the optimal projection area is determined according to the projection object and the projectable areas of different levels.
  • S32 includes the following steps:
  • S321 Detect the imaging area, and determine a projectable area.
  • specifically, the imaging area has corresponding length information, from which the area of the imaging area can be obtained. Whether the imaging area can serve as a projectable area is then determined by whether its area meets a preset projection area. For example, if the area of the imaging area is smaller than the preset projection area, the imaging area cannot be used as a projectable area; if the area of the imaging area is greater than or equal to the preset projection area, it can be used as a projectable area.
  • S322 Classify the projectable area to obtain projectable areas of different levels.
  • specifically, the area of the projectable area is obtained from its size information, and the projectable area is then classified by area size to obtain projectable areas of different levels. It can be understood that the higher the level, the larger the area of the projectable area.
  • S323: Determine the optimal projection area according to the projection object and the projectable areas of different levels.
  • specifically, size information and/or motion information of the projection object is acquired, and the optimal projection area is determined from that information and the projectable areas of different levels. For example, if the length and width of the projection object are 30 cm and 20 cm and the movement distance in its motion information is 10 cm, the minimum required projectable area is (30 + 10) × 20 = 800 cm². Accordingly, a projectable area whose area exceeds this minimum is the optimal projection area. For example, suppose the first-level projectable area is 300-400 cm², the second-level 500-600 cm², the third-level 700-800 cm², and the fourth-level 900-1000 cm². The areas of the first-, second-, and third-level projectable areas do not exceed the required minimum of 800 cm², so none of them is the optimal projection area; the fourth-level projectable area, at 900 cm², is greater than the required minimum of 800 cm², so the fourth-level projectable area is the optimal projection area.
  • in some embodiments, S322 includes the following steps:
  • S3221: Detect the size information of the projectable area.
  • S3222: According to the size information, classify the projectable area to obtain projectable areas of different levels.
  • specifically, the area of the projectable area is obtained from its size information, and the level of the projectable area is determined by the size of that area. For example, if the preset first-level projectable area is 300-400 cm², the second-level 500-600 cm², the third-level 700-800 cm², and the fourth-level 900-1000 cm², and the detected area of the projectable area is 600 cm², the projectable area is classified as a second-level projectable area.
  • in some embodiments, to detect the size information of the projectable area more accurately, S3221 includes the following steps:
  • detecting the projectable area by using a size detection area, where the size detection area corresponds to a detection radius and the corresponding size detection area is formed from that detection radius; and, when the area of the size detection area is smaller than the area of the projectable area, increasing the detection radius by a preset length and continuing to detect the projectable area with the enlarged size detection area.
  • in some embodiments, after the projection object is projected to the optimal projection area, the method includes the following step: S50, performing image correction on the projection object.
  • specifically, the preset rotation information corresponding to the projection object is acquired, correction rotation information is generated according to the preset rotation information, and image correction is performed on the projection object according to the correction rotation information.
  • in some embodiments, S50 includes the following steps:
  • S51: Acquire the preset rotation information corresponding to the projection object.
  • the preset rotation information includes a preset rotation angle and a preset rotation direction, and the preset rotation information of the projection object is pre-stored in the memory of the projection device.
  • S53: Generate correction rotation information according to the preset rotation information.
  • specifically, the correction rotation information is generated according to the preset rotation angle and the preset rotation direction. The correction rotation information includes a correction rotation angle and a correction rotation direction: the correction rotation angle is equal in magnitude to the preset rotation angle, and the correction rotation direction is opposite to the preset rotation direction. That is, a correction rotation angle equal to the preset rotation angle and a correction rotation direction opposite to the preset rotation direction are generated, the two together constituting the correction rotation information.
  • S55: Perform image correction on the projection object according to the correction rotation information.
  • specifically, the rotation angle and rotation direction of the projection object are corrected according to the correction rotation angle and the correction rotation direction.
  • in other embodiments, S50 includes the following steps:
  • S52: Acquire the preset turning information of the projection device.
  • the preset turning information includes a preset turning angle and a preset turning direction, and the preset turning information of the projection device is pre-stored in the memory of the projection device.
  • S54: Generate picture deformation information of the projection object according to the preset turning information.
  • specifically, the picture deformation information of the projection object is generated according to the preset turning angle and the preset turning direction. The picture deformation information includes a picture deformation angle and a picture deformation direction: the picture deformation angle is equal in magnitude to the preset turning angle, and the picture deformation direction is opposite to the preset turning direction.
  • S56: Perform image correction on the projection object according to the picture deformation information.
  • specifically, the rotation angle and rotation direction of the projection object are corrected according to the picture deformation angle and the picture deformation direction.
  • in some embodiments, after the projection object is projected to the optimal projection area, the method includes the following step: S60, performing automatic focusing on the projection device.
  • specifically, according to the three-dimensional virtual space model, the projection device obtains the distance information between the projection device and its projection center point in the three-dimensional virtual space model; the preset motion information of the projection device is acquired, the preset motion information including a preset movement direction and a preset movement distance; and the projection device is automatically focused according to the distance information and the preset motion information.
  • as another aspect of the embodiments of the present application, the embodiments provide a projection apparatus 50 based on augmented reality technology.
  • referring to FIG. 10, the projection apparatus 50 based on augmented reality technology includes: an image information collection module 51, a three-dimensional virtual space model construction module 52, an optimal projection area determination module 53, and a projection module 54.
  • the image information collection module 51 is used to collect image information in the real space.
  • the three-dimensional virtual space model construction module 52 is used to construct a three-dimensional virtual space model according to the image information.
  • the optimal projection area determination module 53 is configured to determine the optimal projection area according to the three-dimensional virtual space model.
  • the projection module 54 is used to project the projection object to the optimal projection area.
  • the image information of the real space can be collected in the early stage, a three-dimensional virtual space model can be constructed according to the image information, and then the optimal projection area can be determined according to the three-dimensional virtual space model, and then the projection The object is projected to the optimal projection area to achieve "seamless" integration of real world information and virtual world information.
  • the above method does not require the user to wear complex body-wearing facilities, which improves user experience.
  • the above projection apparatus based on augmented reality technology can execute the projection method based on augmented reality technology provided in the embodiments of the present application, and has the functional modules and beneficial effects corresponding to the method.
  • for technical details not exhaustively described in the embodiment of the projection apparatus, reference may be made to the projection method based on augmented reality technology provided in the embodiments of the present application.
  • FIG. 11 is a structural block diagram of a projection device 100 provided by an embodiment of the application.
  • the projection device 100 can be used to realize the functions of all or part of the functional modules in the main control chip.
  • the projection device 100 may include: a processor 110, a memory 120, and a communication module 130.
  • the processor 110, the memory 120, and the communication module 130 establish communication connections among one another through a bus.
  • the processor 110 may be of any type, and has one or more processing cores. It can perform single-threaded or multi-threaded operations, and is used to parse instructions to perform operations such as obtaining data, performing logical operation functions, and issuing operation processing results.
  • the memory 120, as a non-transitory computer-readable storage medium, can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the projection method based on augmented reality technology in the embodiments of the present application (for example, the image information collection module 51, the three-dimensional virtual space model construction module 52, the optimal projection area determination module 53, and the projection module 54 shown in FIG. 10).
  • the processor 110 executes the various functional applications and data processing of the projection apparatus 50 based on augmented reality technology by running the non-transitory software programs, instructions, and modules stored in the memory 120, that is, it implements the projection method based on augmented reality technology of any of the foregoing method embodiments.
  • the memory 120 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the projection apparatus 50 based on augmented reality technology, and the like.
  • the memory 120 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or other non-transitory solid-state storage devices.
  • the memory 120 may optionally include a memory remotely provided with respect to the processor 110, and these remote memories may be connected to the projection device 10 via a network. Examples of the aforementioned networks include, but are not limited to, the Internet, corporate intranets, local area networks, mobile communication networks, and combinations thereof.
  • the memory 120 stores instructions executable by the at least one processor 110; the at least one processor 110 is configured to execute the instructions to implement the projection method based on augmented reality technology in any of the foregoing method embodiments, for example, to execute method steps S10, S20, S30, and S40 described above and realize the functions of modules 51-54 in FIG. 10.
  • the communication module 130 is a functional module used to establish a communication connection and provide a physical channel.
  • the communication module 130 may be any type of wireless or wired communication module, including but not limited to a WiFi module or a Bluetooth module.
  • the embodiments of the present application also provide a non-transitory computer-readable storage medium storing computer-executable instructions, which, when executed by one or more processors 110 (for example, by one processor 110 in FIG. 11), cause the one or more processors 110 to perform the projection method based on augmented reality technology in any of the above method embodiments, for example, to execute method steps S10, S20, S30, and S40 described above and realize the functions of modules 51-54 in FIG. 10.
  • the device embodiments described above are merely illustrative, where the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in One place, or it can be distributed to multiple network units. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each implementation manner can be implemented by means of software plus a general hardware platform, and of course, it can also be implemented by hardware.
  • a person of ordinary skill in the art can understand that all or part of the processes in the methods of the foregoing embodiments can be completed by a computer program in a computer program product instructing the relevant hardware. The computer program can be stored in a non-transitory computer-readable storage medium and includes program instructions which, when executed by a related device, cause the related device to execute the flows of the foregoing method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • the above product can execute the projection method based on augmented reality technology provided by the embodiments of the application, and has the corresponding functional modules and beneficial effects for executing the method. For technical details not exhaustively described in this embodiment, reference may be made to the projection method based on augmented reality technology provided in the embodiments of the application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application relate to a projection method and projection device (10) based on augmented reality technology. The method, applied to the projection device (10), includes: first collecting image information of a real space (20); constructing a three-dimensional virtual space model according to the image information; then determining an optimal projection area according to the three-dimensional virtual space model; and projecting a projection object (30) to the optimal projection area, achieving "seamless" integration of real-world information and virtual-world information. The method does not require the user to wear complex body-worn equipment, which improves the user experience.

Description

Projection method and projection device based on augmented reality technology
TECHNICAL FIELD
The embodiments of the present application relate to the technical field of projection devices, and in particular to a projection method and projection device based on augmented reality technology.
BACKGROUND
Augmented reality technology is a new technology that "seamlessly" integrates real-world information and virtual-world information. It takes physical information that is otherwise difficult to experience within a given time and space of the real world (visual information, sound, taste, touch, etc.), simulates it through computers and other technology, and superimposes it, applying the virtual information to the real world where it is perceived by the human senses, thereby achieving a sensory experience beyond reality. The real environment and virtual objects are superimposed onto the same picture or space in real time and coexist there.
Augmented reality technology not only presents real-world information but also displays virtual information at the same time; the two kinds of information complement and overlay each other. In visual augmented reality, a user can use a helmet-mounted display to composite the real world with computer graphics and then see the real world surrounding them. Augmented reality technology involves new technologies and methods such as multimedia, three-dimensional modeling, real-time video display and control, multi-sensor fusion, real-time tracking, and scene fusion. Augmented reality provides information that differs from what humans can ordinarily perceive.
In the course of implementing the present application, the applicant found that the related art has at least the following problems: current augmented reality requires cumbersome body-worn equipment that is inconvenient to wear, limits agility, and cannot provide the user with a good experience.
SUMMARY
To solve the above technical problem, the embodiments of the present application provide a projection method and projection device based on augmented reality technology that do not require existing body-worn equipment and that improve the user experience.
To solve the above technical problem, the embodiments of the present application provide the following technical solution: a projection method based on augmented reality technology, applied to a projection device, where the projection device can project a projection object, the projection method based on augmented reality technology including:
collecting image information of a real space;
constructing a three-dimensional virtual space model according to the image information;
determining an optimal projection area according to the three-dimensional virtual space model; and
projecting the projection object to the optimal projection area.
Optionally, constructing a three-dimensional virtual space model according to the image information includes:
performing stitching processing on the image information to obtain panoramic image information;
parsing three-dimensional size data of the real space from the panoramic image information; and
constructing the three-dimensional virtual space model according to the panoramic image information and the three-dimensional size data.
Optionally, performing stitching processing on the image information to obtain panoramic image information includes:
extracting the collection time points corresponding to the image information;
arranging the image information in sequence according to the collection time points; and
splicing the overlapping parts of two adjacent pieces of image information to obtain the panoramic image information.
Optionally, determining an optimal projection area according to the three-dimensional virtual space model includes:
determining an imaging area according to the three-dimensional virtual space model; and
detecting the imaging area to determine the optimal projection area.
Optionally, detecting the imaging area to determine the optimal projection area includes:
detecting the imaging area to determine a projectable area;
classifying the projectable area into levels to obtain projectable areas of different levels; and
determining the optimal projection area according to the projection object and the projectable areas of different levels.
Optionally, classifying the projectable area into levels to obtain projectable areas of different levels includes:
detecting size information of the projectable area; and
classifying the projectable area into levels according to the size information to obtain projectable areas of different levels.
Optionally, detecting the size information of the projectable area includes:
detecting the projectable area by using a size detection area, where the size detection area corresponds to a detection radius and the corresponding size detection area is formed from the detection radius; and
when the area of the size detection area is smaller than the area of the projectable area, increasing the detection radius corresponding to the size detection area by a preset length and continuing to detect the projectable area with the enlarged size detection area.
Optionally, determining the optimal projection area according to the projection object and the projectable areas of different levels includes:
acquiring size information and/or motion information of the projection object; and
determining the optimal projection area according to the size information and/or motion information and the projectable areas of different levels.
Optionally, after projecting the projection object to the optimal projection area, the method further includes:
performing image correction on the projection object.
Optionally, performing image correction on the projection object includes:
acquiring preset rotation information corresponding to the projection object;
generating correction rotation information according to the preset rotation information; and
performing image correction on the projection object according to the correction rotation information.
Optionally, the preset rotation information includes a preset rotation angle and a preset rotation direction; and
generating correction rotation information according to the preset rotation information includes:
generating a correction rotation angle equal to the preset rotation angle; and
generating a correction rotation direction opposite to the preset rotation direction, the correction rotation angle and the correction rotation direction constituting the correction rotation information.
Optionally, performing image correction on the projection object includes:
acquiring preset turning information of the projection device;
generating picture deformation information of the projection object according to the preset turning information; and
performing image correction on the projection object according to the picture deformation information.
Optionally, after projecting the projection object to the optimal projection area, the method further includes:
performing automatic focusing on the projection device.
Optionally, performing automatic focusing on the projection device includes:
obtaining, according to the three-dimensional virtual space model, distance information between the projection device and its projection center point in the three-dimensional virtual space model;
acquiring preset motion information of the projection device, the preset motion information including a preset movement direction and a preset movement distance; and
automatically focusing the projection device according to the distance information and the preset motion information.
To solve the above technical problem, the embodiments of the present application further provide the following technical solution: a projection device. The projection device includes: at least one processor; and
a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the projection method based on augmented reality technology described above.
Compared with the prior art, the projection method based on augmented reality technology provided by the embodiments of the present application first collects image information of the real space, constructs a three-dimensional virtual space model according to the image information, then determines the optimal projection area according to the three-dimensional virtual space model, and projects the projection object to the optimal projection area, achieving "seamless" integration of real-world information and virtual-world information. The method does not require the user to wear complex body-worn equipment, which improves the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from the structures shown in these drawings without creative effort.
FIG. 1 is a schematic diagram of an application environment of an embodiment of the application;
FIG. 2 is a schematic flowchart of a projection method based on augmented reality technology provided by an embodiment of the application;
FIG. 3 is a schematic flowchart of S20 in FIG. 2;
FIG. 4 is a schematic flowchart of S211 in FIG. 3;
FIG. 5 is a schematic flowchart of S30 in FIG. 2;
FIG. 6 is a schematic flowchart of S32 in FIG. 5;
FIG. 7 is a schematic flowchart of S322 in FIG. 6;
FIG. 8 is a schematic flowchart of one embodiment of S50 in FIG. 2;
FIG. 9 is a schematic flowchart of another embodiment of S50 in FIG. 2;
FIG. 10 is a structural block diagram of a projection apparatus based on augmented reality technology provided by an embodiment of the application;
FIG. 11 is a structural block diagram of a projection device provided by an embodiment of the application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. Where descriptions such as "first" and "second" appear in the embodiments of the present application, they are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features; thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments can be combined with one another, but only on the basis that they can be realized by those of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, such a combination should be considered not to exist and not to fall within the protection scope of the present application.
To facilitate understanding of the present application, the present application is described in more detail below with reference to the drawings and specific embodiments. It should be noted that when an element is described as being "fixed to" another element, it can be directly on the other element, or one or more intervening elements may be present between them. When an element is described as being "connected to" another element, it can be directly connected to the other element, or one or more intervening elements may be present between them. The orientations or positional relationships indicated by the terms "upper", "lower", "inner", "outer", "bottom", and the like as used in this specification are based on the orientations or positional relationships shown in the drawings, are only for convenience of describing the present application and simplifying the description, and do not indicate or imply that the referred devices or elements must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the present application. In addition, the terms "first", "second", and "third" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
In addition, the technical features involved in the different embodiments of the present application described below can be combined with one another as long as they do not conflict.
The embodiments of the present application provide a projection method based on augmented reality technology, applied to a projection device that can project a projection object. The method first collects image information of the real space, constructs a three-dimensional virtual space model according to the image information, then determines the optimal projection area according to the three-dimensional virtual space model, and projects the projection object to the optimal projection area, achieving "seamless" integration of real-world information and virtual-world information. The method does not require the user to wear complex body-worn equipment, which improves the user experience.
The following examples illustrate the application environment of the projection method based on augmented reality technology.
FIG. 1 is a schematic diagram of the application environment of the projection method based on augmented reality technology provided by an embodiment of the present application. As shown in FIG. 1, the application scene includes a projection device 10, a real space 20, a projection object 30, and a user 40. The projection device 10 is located in the real space 20 and can project the projection object 30 into the real space 20, thereby applying the virtual projection object 30 to the real world, where it is perceived by the senses of the user 40, achieving a sensory experience that transcends reality.
The projection device 10 has a built-in memory storing projection information of the projection object 30, the projection information including the size, movement direction, rotation angle, and so on of the projection object 30. The projection device 10 can project the projection information corresponding to the projection object 30 into the display space. Meanwhile, the projection device 10 can also collect image information of the real space 20; construct a three-dimensional virtual space model according to the image information; determine the optimal projection area according to the three-dimensional virtual space model; and project the projection object 30 to the optimal projection area.
Specifically, the projection device 10 includes a processor, a memory, a projection unit, a short-range wireless communication unit, and a network communication unit. The processor is a processing device that controls the corresponding units of the projection device 10. The processing device can also be used to collect image information of the real space 20; construct a three-dimensional virtual space model according to the image information; determine the optimal projection area according to the three-dimensional virtual space model; and project the projection object 30 to the optimal projection area. The memory stores the data and the like required for the operation of the processor, and stores the projection information of the projection object 30, the projection information including the size, movement direction, rotation angle, and so on of the projection object 30. The projection device 10 can project the projection information corresponding to the projection object 30 into the display space. The projection unit projects the projection information of the projection object 30 stored in the memory onto the display space; it uses a light source (such as a lamp or a laser) to project an image onto the projection surface of the display space. Specifically, when a laser source is used, drawing is performed point by point by scanning across the projection surface of the display space, so the image can be focused at every position of the projection surface without brightening the black portions.
In some embodiments, the projection device 10 further includes a gyroscope sensor and an acceleration sensor; by combining the detection results of the gyroscope sensor and the acceleration sensor, the projection device 10 can obtain its preset motion information, the preset motion information including a preset movement direction and a preset movement distance. In some embodiments, the projection device 10 further includes an image capturing device, such as a digital single-lens reflex camera, used to collect image information of the real space 20.
The real space 20 refers to an objectively existing physical space, which is a three-dimensional space with the three measures of length, width, and height. The real space 20 contains projectable areas, such as walls and floors, and the projection device 10 can project the projection object 30 onto such a projectable area.
FIG. 2 shows an embodiment of the projection method based on augmented reality technology provided by an embodiment of the present application. As shown in FIG. 2, the projection method based on augmented reality technology includes the following steps:
S10: Collect image information of a real space.
Specifically, the image information of the real space may be collected by an image capturing device, which may be a digital single-lens reflex camera.
The real space refers to an objectively existing physical space, which is a three-dimensional space with the three measures of length, width, and height. The real space contains projectable areas, such as walls and floors, and the projection device can project the projection object onto such a projectable area.
The image information is not necessarily the image itself captured by the image capturing device; it may also be a corrected image obtained by applying a correction based on lens characteristic information so as to suppress distortion of the image itself. Here, the lens characteristic refers to information indicating the lens distortion characteristics of the lens mounted on the camera that captures the image information. The lens characteristic information may be a known distortion characteristic of the corresponding lens, a distortion characteristic obtained through calibration, or a distortion characteristic obtained by performing image processing on the image information. It should be noted that the above lens distortion characteristics may include not only barrel distortion and pincushion distortion but also distortion caused by special lenses such as fisheye lenses.
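As an illustration of the lens-characteristic correction described above, the following is a minimal sketch that undistorts a captured frame with OpenCV. The intrinsic matrix, the distortion coefficients, and the file names are placeholder assumptions that would normally come from calibrating the actual camera; the patent does not prescribe any particular library or parameter values.

```python
# Hedged sketch: correcting a captured image using lens characteristic
# information (here, calibrated intrinsics and distortion coefficients).
# The numeric values below are illustrative placeholders, not calibration
# data for any specific camera.
import numpy as np
import cv2

camera_matrix = np.array([[1000.0,    0.0, 640.0],
                          [   0.0, 1000.0, 360.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.array([-0.28, 0.07, 0.0, 0.0, 0.0])  # e.g. barrel distortion

raw = cv2.imread("frame.jpg")                     # image as captured
corrected = cv2.undistort(raw, camera_matrix, dist_coeffs)
cv2.imwrite("frame_corrected.jpg", corrected)     # distortion-suppressed image
```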
S20: Construct a three-dimensional virtual space model according to the image information.
Specifically, the image information is first stitched to obtain panoramic image information; the three-dimensional size data of the real space is then parsed from the panoramic image information; and the three-dimensional virtual space model is constructed from the image information and the three-dimensional size data.
S30: Determine an optimal projection area according to the three-dimensional virtual space model.
Specifically, the imaging area obtained from the three-dimensional virtual space model is first detected to determine the projectable area; the projectable area is then classified into levels to obtain projectable areas of different levels; and finally the optimal projection area is determined according to the projection object and the projectable areas of different levels.
S40: Project the projection object to the optimal projection area.
Specifically, the projection device has a built-in memory storing projection information of the projection object, the projection information including the size, movement direction, rotation angle, and so on of the projection object. The projection device can project the projection information corresponding to the projection object into the display space.
Specifically, the projection device includes a processor, a memory, a projection unit, a short-range wireless communication unit, and a network communication unit. The processor is a processing device that controls the corresponding units of the projection device. The processing device can also be used to collect image information of the real space; construct a three-dimensional virtual space model according to the image information; determine the optimal projection area according to the three-dimensional virtual space model; and project the projection object to the optimal projection area. The memory stores the data and the like required for the operation of the processor, and stores the projection information of the projection object, the projection information including the size, movement direction, rotation angle, and so on of the projection object. The projection device can project the projection information corresponding to the projection object into the display space. The projection unit projects the projection information of the projection object stored in the memory onto the display space; it uses a light source (such as a lamp or a laser) to project an image onto the projection surface of the display space. Specifically, when a laser source is used, drawing is performed point by point by scanning across the projection surface of the display space, so the image can be focused at every position of the projection surface without brightening the black portions.
In some embodiments, the projection device further includes a gyroscope sensor and an acceleration sensor; by combining the detection results of the gyroscope sensor and the acceleration sensor, the projection device can obtain its preset motion information, the preset motion information including a preset movement direction and a preset movement distance. In some embodiments, the projection device further includes an image capturing device, such as a digital single-lens reflex camera, used to collect image information of the real space.
The embodiments of the present application provide a projection method based on augmented reality technology. The method first collects image information of the real space, constructs a three-dimensional virtual space model according to the image information, then determines the optimal projection area according to the three-dimensional virtual space model, and projects the projection object to the optimal projection area, achieving "seamless" integration of real-world information and virtual-world information. The method does not require the user to wear complex body-worn equipment, which improves the user experience.
To better construct the three-dimensional virtual space model from the image information, in some embodiments, referring to FIG. 3, S20 includes the following steps:
S21: Perform stitching processing on the image information to obtain panoramic image information.
Specifically, the image capturing device may capture multiple pieces of image information, so the multiple pieces of image information need to be processed to obtain the panoramic image information.
Specifically, each piece of image information corresponds to a collection time point (the shooting time), so the pieces of image information can be arranged in sequence according to their collection time points, in chronological order or by viewing angle, and the overlapping parts of adjacent pieces of image information can then be spliced to obtain the panoramic image information.
The splicing uses image stitching technology, which splices several images with overlapping parts (possibly taken at different times, from different viewing angles, or by different sensors) into one seamless panoramic or high-resolution image. Image registration (image alignment) and image fusion are the two key technologies of image stitching. Image registration is the basis of image fusion, and the computation required by image registration algorithms is generally very large, so the development of image stitching technology depends to a large extent on innovations in image registration technology. Early image registration techniques mainly used point-matching methods, which were slow and of low precision and often required manual selection of initial matching points, making them unable to handle the fusion of images with large amounts of data. There are many image stitching methods, and the steps differ somewhat between algorithms, but the general process is the same. Generally speaking, image stitching mainly includes the following five steps: 1. Image information preprocessing, which includes the basic operations of digital image processing (such as denoising, edge extraction, and histogram processing), establishing an image matching template, and applying certain transforms to the image (such as the Fourier transform or wavelet transform). 2. Image information registration, which uses a matching strategy to find the positions in the reference image that correspond to the template or feature points of the image to be stitched, and thereby determines the transformation relationship between the two images. 3. Establishing a transformation model: the parameter values of the mathematical model are computed from the correspondence between templates or image features, establishing a mathematical transformation model between the two images. 4. Unified coordinate transformation: according to the established mathematical transformation model, the image to be stitched is transformed into the coordinate system of the reference image. 5. Fusion reconstruction: the overlapping areas of the images to be stitched are fused to obtain smooth, seamless stitched and reconstructed panoramic image information.
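The five-step pipeline above (preprocessing, registration, transformation model, unified coordinates, fusion reconstruction) is what general-purpose stitchers implement internally. The following minimal sketch uses OpenCV's high-level Stitcher; the file names are illustrative, and relying on cv2.Stitcher rather than a hand-written registration/fusion pipeline is an assumption, since the patent does not name an implementation.

```python
# Minimal stitching sketch: cv2.Stitcher performs feature registration,
# coordinate transformation, and fusion of the overlapping parts internally.
import cv2

def stitch_panorama(paths):
    """Splice overlapping photos of the real space into panoramic image info."""
    images = [cv2.imread(p) for p in paths]
    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

panorama = stitch_panorama(["room_1.jpg", "room_2.jpg", "room_3.jpg"])
cv2.imwrite("panorama.jpg", panorama)
```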
S22: Parse the three-dimensional size data of the real space from the panoramic image information.
Specifically, the panoramic image information records the continuous parallax of the real space in its unique imaging manner, with the scene of the real space embedded in it. Depth extraction calculation and error analysis can therefore be performed on the panoramic image information to obtain the three-dimensional size data corresponding to the real space.
S23: Construct the three-dimensional virtual space model according to the panoramic image information and the three-dimensional size data.
The panoramic image information includes multiple pieces of physical-object image information, obtained by photographing the physical objects (walls, floors, tables and chairs, etc.) in the real space. The three-dimensional virtual space model is then constructed from the physical-object image information and the corresponding three-dimensional size data.
To better perform stitching processing on the image information and obtain the panoramic image information, in some embodiments, referring to FIG. 4, S21 includes the following steps:
S211: Extract the collection time points corresponding to the image information.
Specifically, each piece of image information corresponds to a collection time point, which is the shooting time at which that image information was generated. For example, image information 1 corresponds to collection time point t1, image information 2 to t2, image information 3 to t3, and image information 4 to t4.
S212: Arrange the image information in sequence according to the collection time points.
Specifically, the collection time points are arranged in chronological order, and the image information corresponding to each collection time point can then be arranged in the same order. For example, if the chronological order of collection time points t1, t2, t3, and t4 is t4, t3, t2, t1, the image information corresponding to these collection time points is arranged in the order image information 4, image information 3, image information 2, image information 1.
S213: Splice the overlapping parts of two adjacent pieces of image information to obtain the panoramic image information.
Specifically, since every two adjacent pieces of image information share an overlapping part, the overlapping parts of adjacent pieces can be spliced to obtain the panoramic image information. For example, adjacent image information 4 and image information 3 are spliced, adjacent image information 3 and image information 2 are spliced, and adjacent image information 2 and image information 1 are spliced, finally yielding the panoramic image information, which includes image information 1, image information 2, image information 3, and image information 4.
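The S211-S213 flow reduces to: sort by collection time point, then splice each adjacent pair in turn. In the sketch below, the Frame dataclass and the splice_overlap callback are hypothetical stand-ins for the device's actual image records and overlap-splicing routine.

```python
# Hedged sketch of S211-S213: order frames by collection time point and
# splice adjacent overlaps into one panorama. splice_overlap is a
# placeholder for the real overlap-stitching routine.
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Frame:
    capture_time: float  # collection time point (shooting time), S211
    pixels: Any          # image data

def build_panorama(frames: List[Frame],
                   splice_overlap: Callable[[Any, Any], Any]) -> Any:
    ordered = sorted(frames, key=lambda f: f.capture_time)   # S212
    panorama = ordered[0].pixels
    for nxt in ordered[1:]:                                  # S213
        panorama = splice_overlap(panorama, nxt.pixels)
    return panorama
```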
To determine the optimal projection area according to the three-dimensional virtual space model, in some embodiments, referring to FIG. 5, S30 includes the following steps:
S31: Determine an imaging area according to the three-dimensional virtual space model.
Specifically, the three-dimensional virtual space model includes multiple virtual physical-object models, which are three-dimensional models constructed from the physical-object image information and the corresponding three-dimensional size data. Each three-dimensional physical-object model has corresponding size information (length, width, and height), from which its projected area can be determined; the imaging area can then be determined according to the size of the projected area.
S32: Detect the imaging area and determine the optimal projection area.
Specifically, the imaging area is first detected to determine the projectable area; the projectable area is classified into levels to obtain projectable areas of different levels; and the optimal projection area is determined according to the projection object and the projectable areas of different levels.
To better determine the optimal projection area by detecting the imaging area, in some embodiments, referring to FIG. 6, S32 includes the following steps:
S321: Detect the imaging area and determine a projectable area.
Specifically, the imaging area has corresponding length information, from which the area of the imaging area can be obtained. Whether the imaging area can serve as a projectable area is then determined by whether its area meets a preset projection area. For example, if the area of the imaging area is smaller than the preset projection area, the imaging area cannot be used as a projectable area; if the area of the imaging area is greater than or equal to the preset projection area, it can be used as a projectable area.
S322: Classify the projectable area into levels to obtain projectable areas of different levels.
Specifically, the area of the projectable area is obtained from its size information, and the projectable area is then classified by area size to obtain projectable areas of different levels. It can be understood that the higher the level, the larger the area of the projectable area.
S323: Determine the optimal projection area according to the projection object and the projectable areas of different levels.
Specifically, size information and/or motion information of the projection object is acquired, and the optimal projection area is determined from that information and the projectable areas of different levels. For example, if the length and width of the projection object are 30 cm and 20 cm and the movement distance in its motion information is 10 cm, the minimum required projectable area is (30 + 10) × 20 = 800 cm²; accordingly, a projectable area among the different levels whose area exceeds this minimum is the optimal projection area. For example, among the different levels of projectable areas, the first-level projectable area is 300-400 cm², the second-level 500-600 cm², the third-level 700-800 cm², and the fourth-level 900-1000 cm². The areas of the first-, second-, and third-level projectable areas do not exceed the required minimum of 800 cm², so none of them is the optimal projection area; the fourth-level projectable area, at 900 cm², is greater than the required minimum of 800 cm², so the fourth-level projectable area is the optimal projection area.
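The numeric example above can be written out as a short routine. The (length + movement distance) × width formula and the level boundaries come directly from the example; comparing each level by the lower end of its area range is an assumption made for this sketch.

```python
# Worked example from the text: minimum required projectable area and
# selection of the optimal level. Level boundaries are in cm^2.
LEVELS = {1: (300, 400), 2: (500, 600), 3: (700, 800), 4: (900, 1000)}

def min_required_area(length_cm: float, width_cm: float,
                      travel_cm: float) -> float:
    """(30 + 10) * 20 = 800 cm^2 for the example projection object."""
    return (length_cm + travel_cm) * width_cm

def optimal_level(length_cm: float, width_cm: float, travel_cm: float):
    required = min_required_area(length_cm, width_cm, travel_cm)
    for level in sorted(LEVELS):        # lowest adequate level wins
        low, _high = LEVELS[level]
        if low >= required:             # level area meets the minimum
            return level
    return None                         # no level is large enough

assert min_required_area(30, 20, 10) == 800
assert optimal_level(30, 20, 10) == 4   # the fourth level, 900 cm^2 and up
```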
To better classify the projectable area into levels and obtain projectable areas of different levels, in some embodiments, referring to FIG. 7, S322 includes the following steps:
S3221: Detect the size information of the projectable area.
S3222: Classify the projectable area into levels according to the size information to obtain projectable areas of different levels.
Specifically, the area of the projectable area is obtained from its size information, and the level of the projectable area is determined by the size of that area. For example, if the preset first-level projectable area is 300-400 cm², the second-level 500-600 cm², the third-level 700-800 cm², and the fourth-level 900-1000 cm², and the detected area of the projectable area is 600 cm², the projectable area is classified as a second-level projectable area.
To detect the size information of the projectable area more accurately, in some embodiments, S3221 includes the following steps:
detecting the projectable area by using a size detection area, where the size detection area corresponds to a detection radius and the corresponding size detection area is formed from the detection radius; and
when the area of the size detection area is smaller than the area of the projectable area, increasing the detection radius corresponding to the size detection area by a preset length and continuing to detect the projectable area with the enlarged size detection area.
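A compact way to read this size-detection loop: a circular detection window grows by a preset length until its area is no longer smaller than the area of the projectable area. In the sketch below, the concrete starting radius and step length are assumed values; the patent specifies only the grow-and-retry behavior.

```python
# Hedged sketch of S3221: grow the detection radius by a preset length until
# the size detection area covers the projectable area.
import math

def detect_size(projectable_area_cm2: float,
                start_radius_cm: float = 10.0,
                preset_step_cm: float = 5.0) -> float:
    radius = start_radius_cm
    while math.pi * radius ** 2 < projectable_area_cm2:
        radius += preset_step_cm    # increase the detection radius
    return math.pi * radius ** 2    # area of the final size detection area
```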
In some embodiments, after the projection object is projected to the optimal projection area, the method includes the following step:
S50: Perform image correction on the projection object.
Specifically, the preset rotation information corresponding to the projection object is acquired, correction rotation information is generated according to the preset rotation information, and image correction is performed on the projection object according to the correction rotation information.
To better perform image correction on the projection object, in some embodiments, referring to FIG. 8, S50 includes the following steps:
S51: Acquire the preset rotation information corresponding to the projection object.
The preset rotation information includes a preset rotation angle and a preset rotation direction. The preset rotation information of the projection object is pre-stored in the memory of the projection device.
S53: Generate correction rotation information according to the preset rotation information.
Specifically, the correction rotation information is generated according to the preset rotation angle and the preset rotation direction. The correction rotation information includes a correction rotation angle and a correction rotation direction. It can be understood that the correction rotation angle is equal in magnitude to the preset rotation angle, and the correction rotation direction is opposite to the preset rotation direction. Generating the correction rotation information according to the preset rotation information includes: generating a correction rotation angle equal to the preset rotation angle; and generating a correction rotation direction opposite to the preset rotation direction, the correction rotation angle and the correction rotation direction constituting the correction rotation information.
S55: Perform image correction on the projection object according to the correction rotation information.
Specifically, the rotation angle and rotation direction of the projection object are corrected according to the correction rotation angle and the correction rotation direction.
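S51-S55 reduce to a simple rule: keep the preset angle, reverse the preset direction. The sketch below encodes a direction as +1/-1, which is an assumed representation; the patent describes the rotation information abstractly.

```python
# Hedged sketch of S53/S55: the correction rotation has the same angle as the
# preset rotation and the opposite direction. The +1/-1 direction encoding
# (+1 clockwise, -1 counter-clockwise) is an assumption of this sketch.
from dataclasses import dataclass

@dataclass
class RotationInfo:
    angle_deg: float
    direction: int  # +1 = clockwise, -1 = counter-clockwise

def correction_rotation(preset: RotationInfo) -> RotationInfo:
    """S53: generate equal angle, opposite direction."""
    return RotationInfo(angle_deg=preset.angle_deg,
                        direction=-preset.direction)

def apply_correction(object_angle_deg: float, preset: RotationInfo) -> float:
    """S55: correct the projection object's rotation angle and direction."""
    corr = correction_rotation(preset)
    return object_angle_deg + corr.direction * corr.angle_deg
```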
To better perform image correction on the projection object, in other embodiments, referring to FIG. 9, S50 includes the following steps:
S52: Acquire the preset turning information of the projection device.
The preset turning information includes a preset turning angle and a preset turning direction. The preset turning information of the projection device is pre-stored in the memory of the projection device.
S54: Generate picture deformation information of the projection object according to the preset turning information.
Specifically, the picture deformation information of the projection object is generated according to the preset turning angle and the preset turning direction. The picture deformation information includes a picture deformation angle and a picture deformation direction. It can be understood that the picture deformation angle is equal in magnitude to the preset turning angle, and the picture deformation direction is opposite to the preset turning direction.
S56: Perform image correction on the projection object according to the picture deformation information.
Specifically, the rotation angle and rotation direction of the projection object are corrected according to the picture deformation angle and the picture deformation direction.
In some embodiments, after the projection object is projected to the optimal projection area, the method includes the following step:
S60: Perform automatic focusing on the projection device.
Specifically, according to the three-dimensional virtual space model, the distance information between the projection device and its projection center point in the three-dimensional virtual space model is obtained; the preset motion information of the projection device is acquired, the preset motion information including a preset movement direction and a preset movement distance; and the projection device is automatically focused according to the distance information and the preset motion information.
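A sketch of S60 follows. The model supplies the distance from the device to the projection center point; the gyroscope and accelerometer supply the preset movement. Treating the movement as a signed displacement along the projection axis, and the focus update as simple addition, are assumptions of this sketch; the patent states only that the two inputs are combined for automatic focusing.

```python
# Hedged sketch of S60: combine model-derived distance information with the
# device's preset motion information to re-estimate the focus distance.
def refocus_distance(model_distance_cm: float,
                     preset_move_cm: float,
                     moving_toward_area: bool) -> float:
    delta = -preset_move_cm if moving_toward_area else preset_move_cm
    return model_distance_cm + delta   # new focus distance estimate

# e.g. the device moved 20 cm toward the optimal projection area:
new_focus = refocus_distance(250.0, 20.0, moving_toward_area=True)
assert new_focus == 230.0
```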
需要说明的是,在上述各个实施例中,上述各步骤之间并不必然存在一定的先后顺序,本领域普通技术人员,根据本申请实施例的描述可以理解,不同实施例中,上述各步骤可以有不同的执行顺序,亦即,可以并行执行,亦可以交换执行等等。
作为本申请实施例的另一方面,本申请实施例提供一种基于增强现实技术的投影装置50。请参阅图10,该基于增强现实技术的投影装置50包括:图像信息采集模块51、三维虚拟空间模型构建模块52、最佳投影区域确定模块53以及投影模块54。
图像信息采集模块51用于采集现实空间的图像信息。
三维虚拟空间模型构建模块52用于根据所述图像信息,构建三维虚拟空间模型。
最佳投影区域确定模块53用于根据所述三维虚拟空间模型,确定最佳投影区域。
投影模块54用于将所述投影对象投影到所述最佳投影区域。
因此,在本实施例中,可以通过前期采集现实空间的图像信息,根据所述图像信息,构建三维虚拟空间模型,然后根据所述三维虚拟空间模型,确定最佳投影区域,进而将所述投影对象投影到所述最佳投影区域,实现真实世界信息和虚拟世界信息“无缝”集成,上述方法不需要用户佩戴复杂的体戴设施,提高了用户体验。
需要说明的是,上述基于增强现实技术的投影装置可执行本申请实施例所提供的基于增强现实技术的投影方法,具备执行方法相应的功能模块和有益效果。未在基于增强现实技术的投影装置实施例中详尽描述的技术细节,可参见本申请实施例所提供的基于增强现实技术的投影方法。
FIG. 11 is a structural block diagram of a projection device 100 provided by an embodiment of the present application. The projection device 100 can be used to implement the functions of all or part of the functional modules in the main control chip. As shown in FIG. 11, the projection device 100 may include: a processor 110, a memory 120, and a communication module 130.
Communication connections between any two of the processor 110, the memory 120, and the communication module 130 are established by means of a bus.
The processor 110 may be any type of processor having one or more processing cores. It can perform single-threaded or multi-threaded operations, and is configured to parse instructions so as to perform operations such as fetching data, executing logical operation functions, and issuing operation processing results.
As a non-transitory computer-readable storage medium, the memory 120 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the projection method based on augmented reality technology in the embodiments of the present application (for example, the image information collection module 51, the three-dimensional virtual space model construction module 52, the optimal projection region determination module 53, and the projection module 54 shown in FIG. 10). By running the non-transitory software programs, instructions, and modules stored in the memory 120, the processor 110 executes various functional applications and data processing of the projection apparatus 50 based on augmented reality technology, that is, implements the projection method based on augmented reality technology in any of the above method embodiments.
The memory 120 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the projection apparatus 50 based on augmented reality technology, and the like. In addition, the memory 120 may include a high-speed random access memory, and may also include a non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 120 optionally includes memories arranged remotely relative to the processor 110, and these remote memories may be connected to the projection device 100 via a network. Examples of the network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The memory 120 stores instructions executable by the at least one processor 110; the at least one processor 110 is configured to execute the instructions to implement the projection method based on augmented reality technology in any of the above method embodiments, for example, to execute the method steps 10, 20, 30, 40 and so on described above, and to implement the functions of the modules 51-54 in FIG. 10.
The communication module 130 is a functional module configured to establish a communication connection and provide a physical channel. The communication module 130 may be any type of wireless or wired communication module, including but not limited to a WiFi module or a Bluetooth module.
Further, an embodiment of the present application also provides a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors 110, for example, by one processor 110 in FIG. 11, so that the above one or more processors 110 can execute the projection method based on augmented reality technology in any of the above method embodiments, for example, execute the method steps 10, 20, 30, 40 and so on described above, and implement the functions of the modules 51-54 in FIG. 10.
The device embodiments described above are merely illustrative, where the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Through the description of the above implementations, those of ordinary skill in the art can clearly understand that each implementation can be realized by means of software plus a general-purpose hardware platform, and certainly also by hardware. Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program in a computer program product. The computer program can be stored in a non-transitory computer-readable storage medium, and the computer program includes program instructions. When the program instructions are executed by a relevant device, the relevant device can be caused to execute the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above product can execute the projection method based on augmented reality technology provided in the embodiments of the present application, and has the corresponding functional modules and beneficial effects for executing the projection method based on augmented reality technology. For technical details not described in detail in this embodiment, reference may be made to the projection method based on augmented reality technology provided in the embodiments of the present application.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application rather than to limit them. Under the concept of the present application, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there exist many other variations of the different aspects of the present application as described above, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements for some of the technical features therein, and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (15)

  1. A projection method based on augmented reality technology, applied to a projection device, the projection device being capable of projecting a projection object, characterized by comprising:
    collecting image information of a real space;
    constructing a three-dimensional virtual space model according to the image information;
    determining an optimal projection region according to the three-dimensional virtual space model;
    projecting the projection object onto the optimal projection region.
  2. The method according to claim 1, characterized in that the constructing a three-dimensional virtual space model according to the image information comprises:
    stitching the image information to obtain panoramic image information;
    parsing out three-dimensional size data of the real space according to the panoramic image information;
    constructing the three-dimensional virtual space model according to the panoramic image information and the three-dimensional size data.
  3. The method according to claim 2, characterized in that the stitching the image information to obtain panoramic image information comprises:
    extracting collection time points corresponding to the image information;
    arranging the image information in sequence according to the collection time points;
    stitching the overlapping portions of adjacent pieces of image information to obtain the panoramic image information.
  4. The method according to claim 1, characterized in that the determining an optimal projection region according to the three-dimensional virtual space model comprises:
    determining an imaging region according to the three-dimensional virtual space model;
    detecting the imaging region to determine the optimal projection region.
  5. The method according to claim 4, characterized in that the detecting the imaging region to determine the optimal projection region comprises:
    detecting the imaging region to determine a projectable region;
    classifying the projectable region into tiers to obtain projectable regions of different tiers;
    determining the optimal projection region according to the projection object and the projectable regions of different tiers.
  6. The method according to claim 5, characterized in that the classifying the projectable region into tiers to obtain projectable regions of different tiers comprises:
    detecting size information of the projectable region;
    classifying the projectable region into tiers according to the size information, to obtain projectable regions of different tiers.
  7. The method according to claim 6, characterized in that the detecting size information of the projectable region comprises:
    detecting the projectable region with a size detection region, the size detection region corresponding to a detection radius, the corresponding size detection region being formed from the detection radius;
    when the area of the size detection region is smaller than the area of the projectable region, increasing the detection radius corresponding to the size detection region by a preset length, and continuing to detect the projectable region with the enlarged size detection region.
  8. The method according to claim 7, characterized in that the determining the optimal projection region according to the projection object and the projectable regions of different tiers comprises:
    acquiring size information and/or motion information of the projection object;
    determining the optimal projection region according to the size information and/or motion information and the projectable regions of different tiers.
  9. The method according to any one of claims 1-8, characterized in that, after the projecting the projection object onto the optimal projection region, the method further comprises:
    performing image correction on the projection object.
  10. The method according to claim 9, characterized in that the performing image correction on the projection object comprises:
    acquiring preset rotation information corresponding to the projection object;
    generating correction rotation information according to the preset rotation information;
    performing image correction on the projection object according to the correction rotation information.
  11. The method according to claim 10, characterized in that the preset rotation information comprises a preset rotation angle and a preset rotation direction;
    the generating correction rotation information according to the preset rotation information comprises:
    generating a correction rotation angle equal to the preset rotation angle;
    generating a correction rotation direction opposite to the preset rotation direction, the correction rotation angle and the correction rotation direction constituting the correction rotation information.
  12. The method according to claim 9, characterized in that the performing image correction on the projection object comprises:
    acquiring preset turning information of the projection device;
    generating picture deformation information of the projection object according to the preset turning information;
    performing image correction on the projection object according to the picture deformation information.
  13. The method according to any one of claims 1-8, characterized in that, after the projecting the projection object onto the optimal projection region, the method further comprises:
    performing autofocus on the projection device.
  14. The method according to claim 13, characterized in that the performing autofocus on the projection device comprises:
    obtaining, according to the three-dimensional virtual space model, distance information between the projection device and a projection center point of the projection device in the three-dimensional virtual space model;
    acquiring preset motion information of the projection device, the preset motion information comprising a preset movement direction and a preset movement distance;
    performing autofocus on the projection device according to the distance information and the preset motion information.
  15. A projection device, characterized by comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the projection method based on augmented reality technology according to any one of claims 1-14.
PCT/CN2019/110873 2019-08-29 2019-10-12 Projection method based on augmented reality technology and projection equipment WO2021035891A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/530,860 US20220078385A1 (en) 2019-08-29 2021-11-19 Projection method based on augmented reality technology and projection equipment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910807392.5A CN110930518A (zh) Projection method based on augmented reality technology and projection equipment
CN201910807392.5 2019-08-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/530,860 Continuation US20220078385A1 (en) 2019-08-29 2021-11-19 Projection method based on augmented reality technology and projection equipment

Publications (1)

Publication Number Publication Date
WO2021035891A1 true WO2021035891A1 (zh) 2021-03-04

Family

ID=69848656

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110873 WO2021035891A1 (zh) Projection method based on augmented reality technology and projection equipment

Country Status (3)

Country Link
US (1) US20220078385A1 (zh)
CN (1) CN110930518A (zh)
WO (1) WO2021035891A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210123059A (ko) * 2020-04-02 2021-10-13 Samsung Electronics Co., Ltd. Image projection apparatus and control method of the image projection apparatus
CN111491146B (zh) * 2020-04-08 2021-11-26 Shanghai Squirrel Classroom Artificial Intelligence Technology Co., Ltd. Interactive projection system for intelligent teaching
JP7163947B2 (ja) * 2020-10-22 2022-11-01 Seiko Epson Corporation Projection area setting support method, setting support system, and program
CN112702587A (zh) * 2020-12-29 2021-04-23 Iview Displays (Shenzhen) Company Ltd. Smart tracking projection method and system
US11942008B2 (en) 2020-12-29 2024-03-26 Iview Displays (Shenzhen) Company Ltd. Smart tracking-based projection method and system
CN113259653A (zh) * 2021-04-14 2021-08-13 Iview Displays (Shenzhen) Company Ltd. Method, apparatus, device and system for customized dynamic projection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300020A1 (en) * 2011-05-27 2012-11-29 Qualcomm Incorporated Real-time self-localization from panoramic images
JP6096634B2 (ja) * 2013-10-17 2017-03-15 Geo Technical Laboratory Co., Ltd. Three-dimensional map display system using virtual reality
CN108427498A (zh) * 2017-02-14 2018-08-21 Shenzhen Mengjing Vision Intelligent Technology Co., Ltd. Augmented reality-based interaction method and apparatus
CN109242958A (zh) * 2018-08-29 2019-01-18 Iview Displays (Shenzhen) Company Ltd. Three-dimensional modeling method and apparatus
CN109615703A (zh) * 2018-09-28 2019-04-12 Alibaba Group Holding Limited Augmented reality image display method, apparatus and device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4702072B2 (ja) * 2006-01-20 2011-06-15 Casio Computer Co., Ltd. Projection apparatus, distance measurement and elevation angle control method for projection apparatus, and program
JP4838746B2 (ja) * 2007-03-19 2011-12-14 FUJIFILM Corporation Content display method, program, apparatus, and recording medium
US20110106439A1 (en) * 2009-11-04 2011-05-05 In-Tai Huang Method of displaying multiple points of interest on a personal navigation device
US9723293B1 (en) * 2011-06-21 2017-08-01 Amazon Technologies, Inc. Identifying projection surfaces in augmented reality environments
US9336607B1 (en) * 2012-11-28 2016-05-10 Amazon Technologies, Inc. Automatic identification of projection surfaces
US9036943B1 (en) * 2013-03-14 2015-05-19 Amazon Technologies, Inc. Cloud-based image improvement
KR102077105B1 (ko) * 2013-09-03 2020-02-13 Electronics and Telecommunications Research Institute Apparatus and method for designing a display for user interaction
US9965030B2 (en) * 2014-07-31 2018-05-08 Samsung Electronics Co., Ltd. Wearable glasses and method of displaying image via the wearable glasses
US10462421B2 (en) * 2015-07-20 2019-10-29 Microsoft Technology Licensing, Llc Projection unit
CN105182662B (zh) * 2015-09-28 2017-06-06 Shenhua Technology (Shenzhen) Co., Ltd. Projection method and system with augmented reality effect
CN106445169A (zh) * 2016-10-24 2017-02-22 Fujian Beijiguang Virtual Vision Display Technology Co., Ltd. Augmented reality interaction system based on a dynamic trigger source
CN106993174B (zh) * 2017-05-24 2019-04-05 Qingdao Hisense Broadband Multimedia Technology Co., Ltd. Electric focusing method and apparatus for a projection device
CN107222732A (zh) * 2017-07-11 2017-09-29 BOE Technology Group Co., Ltd. Automatic projection method and projection robot
CN109005394B (zh) * 2018-09-19 2019-11-29 Qingdao Hisense Laser Display Co., Ltd. Projection image correction method and projector
US10841544B2 (en) * 2018-09-27 2020-11-17 Rovi Guides, Inc. Systems and methods for media projection surface selection
US11245883B2 (en) * 2018-12-17 2022-02-08 Lightform, Inc. Method for augmenting surfaces in a space with visual content


Also Published As

Publication number Publication date
CN110930518A (zh) 2020-03-27
US20220078385A1 (en) 2022-03-10

Similar Documents

Publication Publication Date Title
WO2021035891A1 (zh) Projection method based on augmented reality technology and projection equipment
TWI712918B (zh) Augmented reality image display method, apparatus and device
US10872467B2 (en) Method for data collection and model generation of house
KR101566543B1 (ko) Method and system for mutual interaction using spatial information augmentation
EP2583449B1 (en) Mobile and server-side computational photography
TWI554976B (zh) Surveillance system and image processing method thereof
JP5538617B2 (ja) Method and arrangement for multi-camera calibration
US20170076430A1 (en) Image Processing Method and Image Processing Apparatus
JP6220486B1 (ja) Three-dimensional model generation system, three-dimensional model generation method, and program
TW201915944A (zh) Image processing method, apparatus, system and storage medium
US9516214B2 (en) Information processing device and information processing method
US20130335535A1 (en) Digital 3d camera using periodic illumination
WO2020042970A1 (zh) Three-dimensional modeling method and apparatus
WO2019128109A1 (zh) Face tracking-based dynamic projection method, apparatus and electronic device
US20140267427A1 (en) Projector, method of controlling projector, and program thereof
JP6352208B2 (ja) Three-dimensional model processing apparatus and camera calibration system
JPWO2018179040A1 (ja) Camera parameter estimation apparatus, method, and program
KR20170027266A (ko) Image capturing apparatus and operating method thereof
JP2015119277A (ja) Display device, display method, and display program
CN105791663A (zh) Distance estimation system and distance estimation method
CN112073640B (zh) Method, apparatus and system for acquiring pose for panoramic information collection
CN110191284B (zh) Method, apparatus, electronic device and storage medium for house data collection
WO2020153264A1 (ja) Calibration method and calibration device
JP6483661B2 (ja) Imaging control apparatus, imaging control method, and program
CN111064947A (zh) Panorama-based video fusion method, system, apparatus and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19942745

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.07.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19942745

Country of ref document: EP

Kind code of ref document: A1