WO2022141826A1 - Intelligent tracking projection method and system - Google Patents

Intelligent tracking projection method and system

Info

Publication number
WO2022141826A1
WO2022141826A1 (PCT/CN2021/082412, CN2021082412W)
Authority
WO
WIPO (PCT)
Prior art keywords
projection
unit
target
information
area
Prior art date
Application number
PCT/CN2021/082412
Other languages
English (en)
French (fr)
Inventor
李祥
李文祥
丁明内
杨伟樑
高志强
Original Assignee
广景视睿科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广景视睿科技(深圳)有限公司
Priority to US17/543,943 (US11942008B2)
Publication of WO2022141826A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 Video signal processing therefor
    • H04N9/3141 Constructional details thereof

Definitions

  • the embodiments of the present application relate to the technical field of digital projection display, and in particular, to an intelligent tracking projection method and system.
  • the main technical problem solved by the embodiments of the present application is to provide an intelligent tracking projection method and system, which can make the projection image move with the target.
  • A technical solution adopted by the embodiments of the present application is to provide an intelligent tracking projection method applied to an intelligent tracking projection system, where the system includes a spatial information acquisition unit, a target tracking unit, a projection unit, and a driving unit. The method includes: acquiring information about the real space from the spatial information acquisition unit, constructing a 3D model of the real space, and obtaining first position information of the projection picture projected by the projection unit; acquiring target image information from the target tracking unit, and obtaining second position information of the tracking target according to the target image information; obtaining a target projection area according to the 3D model and the second position information; obtaining the rotation information required by the projection unit according to the target projection area; and controlling the driving unit according to the rotation information so that the projection picture reaches the target projection area.
  • Obtaining the target projection area according to the 3D model and the second position information includes: determining at least one projectable area in the 3D model according to the second position information; grading the at least one projectable area according to its area and the projection area required by the projection picture, to obtain projectable areas of different grades; obtaining the optimal projectable area from the projectable areas of different grades; and determining the target projection area to be the optimal projectable area.
  • Obtaining the rotation information required by the projection unit according to the target projection area includes: determining at least one rotation path of the projection picture according to the target projection area and the first position information; grading the rotation paths according to their length and the number of non-projectable areas along them, to obtain rotation paths of different grades; obtaining the optimal rotation path from the rotation paths of different grades; and obtaining the rotation information corresponding to the optimal rotation path.
  • the rotation information is a rotation angle of the projection unit.
  • The method further includes: obtaining the angle change of the projection picture according to the rotation information; obtaining the correction angle of the projection picture according to the angle change; and correcting the projection picture according to the correction angle.
  • The method further includes: obtaining the stretching change of the projection picture according to the rotation information; obtaining the corrected stretching coefficient of the projection picture according to the stretching change; and correcting the projection picture according to the corrected stretching coefficient.
  • the method further includes: automatically focusing the projection unit.
  • the embodiment of the present application also provides an intelligent tracking projection system
  • The intelligent tracking projection system includes: a spatial information collection unit for collecting information about the real space; a target tracking unit for acquiring target image information of the tracking target; a projection unit for projecting a projection picture; a driving unit connected to the projection unit and used to drive the projection unit to rotate; and a control unit connected to the spatial information collection unit, the target tracking unit, the projection unit, and the driving unit, respectively. The control unit includes: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the intelligent tracking projection method of any one of the first aspects.
  • the drive unit includes at least two stepper motors.
  • Embodiments of the present application further provide a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to execute the intelligent tracking projection method according to any one of the first aspects.
  • the embodiments of the present application further provide a computer program product
  • the computer program product includes a computer program stored on a computer-readable storage medium
  • The computer program includes program instructions which, when executed by a computer, cause the computer to execute the intelligent tracking projection method described in any one of the first aspects above.
  • The present application provides an intelligent tracking projection method applied to an intelligent tracking projection system. The method includes: acquiring information about the real space from the spatial information collection unit, constructing a 3D model of the real space, and obtaining first position information of the projection picture; acquiring target image information from the target tracking unit, and obtaining second position information of the tracking target according to the target image information; obtaining the target projection area according to the 3D model and the second position information; obtaining the rotation information required by the projection unit according to the target projection area; and controlling the driving unit according to the rotation information so that the projection picture reaches the target projection area.
  • The intelligent tracking projection method can track the target so that the projection picture reaches the target projection area, thereby realizing dynamic projection.
  • FIG. 1 is a schematic structural block diagram of an intelligent tracking projection system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a hardware structure of a control unit provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an intelligent tracking projection method provided by an embodiment of the present application.
  • FIG. 4 is a detailed schematic flowchart of step S3 in FIG. 3.
  • FIG. 5 is a detailed schematic flowchart of step S4 in FIG. 3.
  • FIG. 6 is a schematic flowchart of calibrating a projection image provided by an embodiment of the present application.
  • FIG. 7 is another schematic flowchart of calibrating a projection image provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of another intelligent tracking projection method provided by an embodiment of the present application.
  • FIG. 1 is a schematic structural block diagram of an intelligent tracking projection system provided by an embodiment of the present application.
  • The intelligent tracking projection system 100 includes a spatial information collection unit 10, a target tracking unit 20, a projection unit 30, a driving unit 40, and a control unit 50; the control unit 50 is connected to the spatial information collection unit 10, the target tracking unit 20, the projection unit 30, and the driving unit 40, respectively.
  • the spatial information collecting unit 10 is used for collecting information of real space.
  • The spatial information acquisition unit 10 includes an image acquisition module and a ranging module. The image acquisition module collects image data of the real space, and the ranging module collects distance and orientation data; together they constitute the information of the real space, which is sent to the control unit 50 so that the control unit 50 can construct a panoramic image carrying distance and orientation information and thereby build a 3D model of the real space.
  • The spatial information collection unit 10 may be any other suitable information collection module and is not limited to the examples in the embodiments of the present application.
  • The target tracking unit 20 is used to obtain target image information of the tracked target. In some embodiments, the target tracking unit 20 can be a 3D camera, a microwave radar, or the like; it has a large detection range, with a detection angle exceeding 90 degrees, even approaching 180 degrees, in both the horizontal and vertical directions. It can therefore obtain the target image information of the tracking target and send it to the control unit 50, so that the control unit 50 can detect the presence of the tracking target from the target image information and obtain its position information.
  • the target tracking unit 20 can be any other suitable sensor with depth perception capability.
  • the projection unit 30 is used for projecting to obtain a projection image.
  • The projection unit 30 may be a long-focus projector, which can project the picture over a relatively long distance while keeping the picture size moderate and the brightness suitable.
  • the projection unit 30 can be any other suitable device with a projection function.
  • the driving unit 40 is connected to the projection unit 30 , and the driving unit 40 is used to drive the projection unit 30 to rotate.
  • The driving unit 40 includes at least two motors, which can rotate the projection unit 30 in place through 360 degrees, so that the light emitted by the projection unit 30 can sweep 360 degrees around the projection unit 30.
  • To record the rotation angle more accurately, the driving unit 40 further includes at least two rotating shafts and at least two encoders. Each motor is connected to a rotating shaft and an encoder; the shaft transmits the motor's rotation, and the encoder records the motor's rotational position.
  • the motor may be a stepper motor or a servo motor.
  • the drive unit 40 can be any type of device that can rotate in both horizontal and vertical directions.
  • the drive unit 40 can also be a pan/tilt or a multi-dimensional motion table.
  • the control unit 50 is used to control the spatial information acquisition unit 10 , the target tracking unit 20 , the projection unit 30 and the driving unit 40 to work, and to process data to obtain results.
  • the control unit 50 includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor At least one processor executes to enable the at least one processor to perform the intelligent tracking projection method as described in any one of the following.
  • The control unit 50 includes at least one processor 51 and a memory 52 communicatively connected to the at least one processor 51; one processor 51 is taken as an example in FIG. 2.
  • the processor 51 and the memory 52 may be connected through a bus or in other ways, and the connection through a bus is taken as an example in FIG. 2 .
  • the memory 52 as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules.
  • the processor 51 executes various functional applications and data processing of the control device by running the non-volatile software programs, instructions and modules stored in the memory 52, that is, to implement the intelligent tracking projection method in any of the following method embodiments .
  • the memory 52 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the program distribution apparatus, and the like. Additionally, memory 52 may include high speed random access memory, and may also include nonvolatile memory, such as at least one magnetic disk storage device, flash memory device, or other nonvolatile solid state storage device. In some embodiments, memory 52 may optionally include memory located remotely from processor 51, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the intelligent tracking projection system 100 further includes a correction unit, and the correction unit may be any suitable device with a correction function, and the correction unit is respectively connected with the projection unit 30 and the control unit 50 .
  • the correction unit is used to correct the projection image, such as angle correction and stretch correction, so as to keep the projection image clear.
  • the intelligent tracking projection system 100 further includes a projection lens and a focusing unit, the projection lens and the focusing unit are connected, the focusing unit is connected with the control unit 50, and the control unit 50 controls the focusing unit to adjust the projection lens Move to the focus position to achieve automatic focus.
  • the intelligent tracking projection system provided by this application has a wide range of application scenarios, for example, it can be applied to various scenarios such as security, business, and entertainment.
  • FIG. 3 is a schematic flowchart of an intelligent tracking projection method provided by an embodiment of the present application, wherein the intelligent tracking projection method The method can be performed by the control unit in FIG. 1, and the intelligent tracking projection method includes:
  • Step S1 acquiring the information of the real space of the spatial information collection unit, constructing a 3D model of the real space, and obtaining the first position information of the projection screen projected by the projection unit;
  • the information of the real space includes the image information of the projection screen projected by the projection unit, so in the process of constructing the 3D model, the first position information of the projection screen can be obtained according to the image information.
  • a three-dimensional coordinate system can be preset in the 3D model, and when the first position information is determined, the center point of the projected image can be selected to calculate the first three-dimensional coordinate of the projected image in the three-dimensional coordinate system.
  • The 3D model of the real space is a one-to-one mapping of the real space: the 3D model contains all objects present in the real space together with their characteristic parameters, such as length, width, height, three-dimensional coordinates, and object spacing.
  • The size of the real space is not limited.
  • the information of the real space can also be collected according to the real space area selected by the user.
  • the real space area selected by the user should include the projection image of the projection unit and the area where the tracking target is located. In order to perform intelligent tracking projection more accurately, in some other embodiments, the real space area should also include the area where the projection unit is located.
  • Step S2 acquiring target image information of the target tracking unit, and obtaining second position information of the tracking target according to the target image information;
  • A tracking target matching the target image information is found in the 3D model, and in the three-dimensional coordinate system preset in the 3D model, the second three-dimensional coordinates of the tracking target are obtained, thereby yielding the second position information of the tracking target.
  • The tracking target can be a person, an animal, or another movable object.
  • the projection screen can realize interactive projection with the tracking target, or realize augmented reality projection in combination with the tracking target.
  • Step S3 obtaining a target projection area according to the 3D model and the second position information
  • the 3D model and the second position information of the tracking target are combined to determine the area around the tracking target that can be projected, so as to obtain the target projection area.
  • step S3 further includes:
  • Step S31 Determine at least one projectable area in the 3D model according to the second position information
  • Step S32 According to the area of the at least one projectable area and the required projection area of the projection screen, classify the at least one projectable area into grades to obtain different grades of projectable areas;
  • Step S33 obtaining an optimal projectable area according to the different levels of projectable areas
  • Step S34 Determine the target projection area as the optimal projectable area.
  • The area around the tracking target that can be projected onto is detected, and the area of each projectable region is calculated; the projectable areas are then graded according to their area and the projection area required by the preset projection picture. The closer a projectable area's size is to the required projection area, the better its grade, and the projectable area with the optimal grade is selected as the target projection area. In this way, the target projection area best matches the projection area of the preset projection picture, which is conducive to presenting the projection picture completely.
  • When grading the projectable areas, the grading may also be performed according to the volume of the projectable area and the projected volume of the preset projection picture, so as to obtain the target projection area.
  • When acquiring the projectable areas, they may also be determined according to the visible range of the user or the tracking target: only a projection area within that visible range is treated as projectable, and the projectable areas are then graded to obtain the target projection area. The projectable areas may likewise be graded by visibility; for example, the more of a projectable area's area/volume lies within the visible range of the user or the tracking target, the better its grade. In practical applications, the process of selecting the target projection area may be set according to actual needs and is not limited to the examples in the embodiments of the present application.
  • Step S4 obtaining the rotation information required by the projection unit according to the target projection area
  • The center point of the target projection area can also be selected to calculate the three-dimensional coordinates of the target projection area. Then, according to the first three-dimensional coordinates and the target's three-dimensional coordinates, at least one rotation path of the projection picture can be determined, so as to obtain the rotation information required by the projection unit.
  • the step S4 further includes:
  • Step S41 Determine at least one rotation path of the projection screen according to the target projection area and the first position information
  • Step S42 According to the length of the rotation path and the number of non-projectable areas on the rotation path, the rotation path is graded to obtain rotation paths of different grades;
  • Step S43 obtaining the optimal rotation path according to the rotation paths of different grades
  • Step S44 Obtain rotation information corresponding to the optimal rotation path according to the optimal rotation path.
  • The shorter the rotation path and the fewer non-projectable areas along it, the better the path's grade. In this way, the rotation information corresponding to the optimal rotation path is obtained, ensuring that the projection picture remains fully presented to the user or the tracking target throughout the rotation.
  • Step S5 Control the driving unit to work according to the rotation information, so that the projection image reaches the target projection area.
  • the rotation information is the rotation angle of the projection unit
  • The driving unit is controlled according to the rotation angle, driving the projection unit to rotate accordingly, so that the projection picture rotates along the optimal rotation path and finally reaches the target projection area, completing the intelligent tracking projection.
  • the intelligent tracking projection method provided by the embodiment of the present application can control the movement of the projection screen to reach the target projection area according to the movement of the tracking target, thereby realizing dynamic projection.
  • the method further includes:
  • Step S61 obtaining the angle change of the projection screen according to the rotation information
  • Step S62 obtaining the correction angle of the projection screen according to the angle change
  • Step S63 Correct the projection image according to the correction angle.
  • the rotation angle of the projection screen is obtained, and the correction angle of the projection screen is generated, wherein the rotation angle includes the size of the rotation angle and the direction of the rotation angle, and the correction angle includes the size of the correction angle and the direction of the correction angle.
  • The magnitude of the correction angle equals the magnitude of the rotation angle, and its direction is opposite to the direction of the rotation angle, so that the correction unit can correct the projection picture accordingly.
  • the method further includes:
  • Step S71 obtaining the stretching change of the projection screen according to the rotation information
  • Step S72 obtaining a correction stretching coefficient of the projection image according to the stretching change
  • Step S73 Correct the projection image according to the correction stretching coefficient.
  • the deformation information of the projection picture is generated, and the correction stretching coefficient of each part of the projection picture is obtained, so that the correction unit can perform stretching correction on the projection picture, so that the projection picture can be accurately Projection.
  • the method further includes:
  • Step S8 Autofocus on the projection unit.
  • a correspondence table between the projection distance and the focus position of the projection lens may be established in advance.
  • In the correspondence table, each projection distance has exactly one optimal projection lens position, at which the projection picture is sharpest.
  • Embodiments of the present application also provide a non-volatile computer storage medium storing computer-executable instructions. When executed by one or more processors (for example, the processor 51 in FIG. 2), the instructions cause the one or more processors to execute the intelligent tracking projection method in any of the foregoing method embodiments, for example, to perform the steps shown in FIG. 3 to FIG. 8; the functions of the devices described in FIG. 1 can also be implemented.
  • Embodiments of the present application further provide a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium. The computer program includes program instructions which, when executed by a computer, cause the computer to execute the intelligent tracking projection method in any of the above method embodiments, for example, the method steps of FIG. 3 to FIG. 8, to realize the functions of the apparatuses in FIG. 1.
  • the present application provides an intelligent tracking projection method, which is applied to an intelligent tracking projection system.
  • The method includes: acquiring information about the real space from a spatial information collection unit, constructing a 3D model of the real space, and obtaining first position information of the projection picture; acquiring target image information from the target tracking unit and obtaining second position information of the tracking target according to it; obtaining the target projection area according to the 3D model and the second position information; obtaining the rotation information required by the projection unit according to the target projection area; and controlling the driving unit according to the rotation information so that the projection picture reaches the target projection area.
  • The intelligent tracking projection method can track the target so that the projection picture reaches the target projection area, thereby realizing dynamic projection.
  • The device embodiments described above are only illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each embodiment can be implemented by means of software plus a general hardware platform, and certainly can also be implemented by hardware.
  • In essence, or in the parts contributing over the related art, the above technical solutions can be embodied as software products stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or a CD-ROM, including several instructions that cause at least one computer device (a personal computer, a server, a network device, etc.) to perform the methods described in the various embodiments or parts thereof.


Abstract

An intelligent tracking projection method and system. The intelligent tracking projection method includes: acquiring information about the real space from a spatial information collection unit (10), constructing a 3D model of the real space, and obtaining first position information of the projection picture; acquiring target image information from a target tracking unit (20), and obtaining second position information of the tracking target according to the target image information; obtaining a target projection area according to the 3D model and the second position information; obtaining the rotation information required by a projection unit (30) according to the target projection area; and controlling a driving unit (40) according to the rotation information so that the projection picture reaches the target projection area. The intelligent tracking projection method can track the target so that the projection picture reaches the target projection area, thereby realizing dynamic projection.

Description

Intelligent tracking projection method and system
Cross-reference to related application:
This application claims priority to Chinese patent application No. 202011592014.9, filed with the Chinese Patent Office on December 29, 2020 and entitled "Intelligent tracking projection method and system", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the technical field of digital projection display, and in particular to an intelligent tracking projection method and system.
Background
With the development of science and technology and the continuous improvement of people's living standards, expectations for visual experience keep rising. On one hand, display devices for human-machine interfaces are moving toward multi-viewing-angle, large-screen, high-resolution designs; on the other hand, in terms of display effect, people pursue augmented-reality, immersive visual enjoyment, and in terms of operation, they pursue artificial intelligence that is convenient and fast.
In current augmented-reality projection, most projected pictures are still cast onto static objects such as tables and walls; dynamic scenes have not yet been handled, and there is insufficient connection and interaction with the environment and the target.
Summary
In view of the above defects of the prior art, the technical problem mainly solved by the embodiments of the present application is to provide an intelligent tracking projection method and system that enable the projection picture to move with the target.
To solve the above technical problem, in a first aspect, an embodiment of the present application adopts the following technical solution: an intelligent tracking projection method applied to an intelligent tracking projection system, where the intelligent tracking projection system includes a spatial information collection unit, a target tracking unit, a projection unit, and a driving unit. The method includes: acquiring information about the real space from the spatial information collection unit, constructing a 3D model of the real space, and obtaining first position information of the projection picture projected by the projection unit; acquiring target image information from the target tracking unit, and obtaining second position information of the tracking target according to the target image information; obtaining a target projection area according to the 3D model and the second position information; obtaining the rotation information required by the projection unit according to the target projection area; and controlling the driving unit according to the rotation information so that the projection picture reaches the target projection area.
In some embodiments, obtaining the target projection area according to the 3D model and the second position information includes: determining at least one projectable area in the 3D model according to the second position information; grading the at least one projectable area according to its area and the projection area required by the projection picture, to obtain projectable areas of different grades; obtaining the optimal projectable area from the projectable areas of different grades; and determining the target projection area to be the optimal projectable area.
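The grading step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the candidate regions, their areas, and the "smallest surplus over the required area wins" scoring rule are all assumptions for the sketch.

```python
def grade_projectable_areas(areas, required_area):
    """Rank candidate projectable areas: regions too small to fit the picture
    are discarded, and the remaining ones are graded so that the closer an
    area's size is to the required projection area, the better its grade."""
    graded = []
    for region_id, area in areas.items():
        if area < required_area:          # cannot fit the picture at all
            continue
        # smaller surplus over the required area -> better (lower) score
        graded.append((area - required_area, region_id))
    graded.sort()
    return [region_id for _, region_id in graded]  # best first

# Hypothetical candidate regions (areas in square metres):
candidates = {"wall_A": 2.0, "wall_B": 1.1, "floor": 0.8}
ranking = grade_projectable_areas(candidates, required_area=1.0)
# ranking[0] is the optimal projectable area, i.e. the target projection area
```

With these numbers the floor is rejected outright and wall_B outranks wall_A, since its area most closely matches the required projection area.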
In some embodiments, obtaining the rotation information required by the projection unit according to the target projection area includes: determining at least one rotation path of the projection picture according to the target projection area and the first position information; grading the rotation paths according to their length and the number of non-projectable areas along them, to obtain rotation paths of different grades; obtaining the optimal rotation path from the rotation paths of different grades; and obtaining, according to the optimal rotation path, the rotation information corresponding to it.
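The path-grading step can be sketched in the same spirit. The patent only says that shorter paths with fewer non-projectable areas grade better; the choice here of weighing crossings before length, and the example paths themselves, are assumptions for illustration.

```python
def grade_rotation_paths(paths):
    """Rank candidate rotation paths: fewer non-projectable crossings first,
    then shorter sweep length, and return the optimal path."""
    # Each path is (name, sweep_length_in_degrees, non_projectable_crossings).
    ranked = sorted(paths, key=lambda p: (p[2], p[1]))
    return ranked[0]

# Hypothetical candidates between the current picture and the target area:
paths = [
    ("direct", 40.0, 1),   # short, but sweeps across one non-projectable area
    ("detour", 65.0, 0),   # longer, but stays over projectable surfaces
]
best = grade_rotation_paths(paths)
# best[0] == "detour": zero crossings outranks the shorter sweep here
```

This matches the stated goal that the picture stays fully presented during the whole rotation: a path that never crosses a non-projectable area is preferred even when it is longer.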
In some embodiments, the rotation information is a rotation angle of the projection unit.
In some embodiments, the method further includes: obtaining the angle change of the projection picture according to the rotation information; obtaining the correction angle of the projection picture according to the angle change; and correcting the projection picture according to the correction angle.
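The angle correction described here (equal magnitude, opposite direction) amounts to counter-rotating the picture. A minimal sketch, assuming a simple 2D rotation of picture-plane points about the frame centre; the function names and coordinates are illustrative only:

```python
import math

def correction_angle(rotation_deg):
    # Same magnitude as the picture's angle change, opposite direction.
    return -rotation_deg

def rotate_point(x, y, angle_deg):
    """Rotate a picture-plane point about the origin; applying the correction
    angle counter-rotates the projected frame back upright."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A corner rolled by +90 degrees is restored by the correction angle:
x, y = rotate_point(1.0, 0.0, 90.0)              # rolled corner, near (0, 1)
x2, y2 = rotate_point(x, y, correction_angle(90.0))
# (x2, y2) is back at (1, 0) up to floating-point error
```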
In some embodiments, the method further includes: obtaining the stretching change of the projection picture according to the rotation information; obtaining the corrected stretching coefficient of the projection picture according to the stretching change; and correcting the projection picture according to the corrected stretching coefficient.
In some embodiments, the method further includes: automatically focusing the projection unit.
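The autofocus described later relies on a pre-built correspondence table from projection distance to optimal lens position. A sketch of such a lookup, assuming linear interpolation between calibrated distances; the calibration values and the choice of interpolation are assumptions, not taken from the patent:

```python
def focus_position(distance_m, table):
    """Look up the lens focus position for a projection distance using the
    pre-built distance -> focus-position correspondence table, interpolating
    linearly between the two nearest calibrated distances and clamping at
    the table's ends."""
    points = sorted(table.items())
    if distance_m <= points[0][0]:
        return points[0][1]
    if distance_m >= points[-1][0]:
        return points[-1][1]
    for (d0, f0), (d1, f1) in zip(points, points[1:]):
        if d0 <= distance_m <= d1:
            t = (distance_m - d0) / (d1 - d0)
            return f0 + t * (f1 - f0)

# Hypothetical calibration: metres -> lens position (arbitrary motor steps)
calib = {1.0: 120, 2.0: 180, 4.0: 220}
pos = focus_position(3.0, calib)   # midway between the 2 m and 4 m entries
```

Since each distance has exactly one sharpest lens position, the control unit only needs the current projection distance (from the ranging data) to drive the focusing unit directly to `pos`.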
To solve the above technical problem, in a second aspect, an embodiment of the present application further provides an intelligent tracking projection system, including: a spatial information collection unit for collecting information about the real space; a target tracking unit for acquiring target image information of the tracking target; a projection unit for projecting a projection picture; a driving unit connected to the projection unit and used to drive the projection unit to rotate; and a control unit connected to the spatial information collection unit, the target tracking unit, the projection unit, and the driving unit, respectively. The control unit includes: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the intelligent tracking projection method of any one of the first aspect.
In some embodiments, the driving unit includes at least two stepper motors.
To solve the above technical problem, in a third aspect, an embodiment of the present application further provides a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by a processor, cause the processor to execute the intelligent tracking projection method of any one of the first aspect.
To solve the above technical problem, in a fourth aspect, an embodiment of the present application further provides a computer program product comprising a computer program stored on a computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, cause the computer to execute the intelligent tracking projection method of any one of the first aspect.
The beneficial effects of the embodiments of the present application are as follows. In contrast to the prior art, the present application provides an intelligent tracking projection method applied to an intelligent tracking projection system. The method includes: acquiring information about the real space from a spatial information collection unit, constructing a 3D model of the real space, and obtaining first position information of the projection picture; acquiring target image information from a target tracking unit, and obtaining second position information of the tracking target according to the target image information; obtaining a target projection area according to the 3D model and the second position information; obtaining the rotation information required by the projection unit according to the target projection area; and controlling the driving unit according to the rotation information so that the projection picture reaches the target projection area. The intelligent tracking projection method can track the target so that the projection picture reaches the target projection area, thereby realizing dynamic projection.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more embodiments are illustrated by the figures in the corresponding drawings, which do not limit the embodiments. Elements/modules and steps with the same reference numerals in the drawings denote similar elements/modules and steps, and unless otherwise stated, the figures are not drawn to scale.
FIG. 1 is a structural block diagram of a smart tracking projection system provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of the hardware structure of a control unit provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of a smart tracking projection method provided by an embodiment of the present application;
FIG. 4 is a detailed flowchart of step S3 in FIG. 3;
FIG. 5 is a detailed flowchart of step S4 in FIG. 3;
FIG. 6 is a schematic flowchart of correcting a projection image provided by an embodiment of the present application;
FIG. 7 is a schematic flowchart of another way of correcting a projection image provided by an embodiment of the present application;
FIG. 8 is a schematic flowchart of another smart tracking projection method provided by an embodiment of the present application.
DETAILED DESCRIPTION
The present application is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art further understand the present application, but do not limit it in any form. It should be noted that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application.
To facilitate understanding, the present application is described in more detail below with reference to the drawings and specific embodiments. Unless otherwise defined, all technical and scientific terms used in this specification have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification are only for describing specific embodiments and are not intended to limit the application. The term "and/or" used in this specification includes any and all combinations of one or more of the associated listed items.
It should be noted that, where no conflict arises, the features in the embodiments of the present application may be combined with each other, all within the protection scope of the present application. In addition, although functional modules are divided in the device diagrams, in some cases the modules may be divided differently from those in the device. Furthermore, the words "first", "second" and the like used herein do not limit data or execution order, but merely distinguish identical or similar items with substantially the same functions and effects.
Referring to FIG. 1, FIG. 1 is a structural block diagram of a smart tracking projection system provided by an embodiment of the present application. As shown in FIG. 1, the smart tracking projection system 100 includes a spatial information collection unit 10, a target tracking unit 20, a projection unit 30, a drive unit 40 and a control unit 50. The control unit 50 is connected to the spatial information collection unit 10, the target tracking unit 20, the projection unit 30 and the drive unit 40, respectively.
The spatial information collection unit 10 is configured to collect information about the real space. In some embodiments, the spatial information collection unit 10 includes an image collection module and a ranging module. The image collection module collects image data of the real space, and the ranging module collects distance and orientation data. Together, the image data and the distance and orientation data constitute the information about the real space, which is sent to the control unit 50 so that the control unit 50 can construct a panoramic image carrying distance and orientation information and thereby build a 3D model of the real space. In practical applications, the spatial information collection unit 10 may be any other suitable information collection module and is not limited to the examples in the embodiments of the present application.
The target tracking unit 20 is configured to acquire target image information of the tracked target. In some embodiments, the target tracking unit 20 may be a 3D camera, a microwave radar, or the like, with a wide detection range whose detection angle exceeds 90 degrees, and even approaches 180 degrees, in both the horizontal and vertical directions. It can therefore acquire the target image information of the tracked target and send it to the control unit 50, so that the control unit 50 can detect the presence of the tracked target and obtain its position information from the target image information. In practical applications, the target tracking unit 20 may be any other suitable sensor with depth-sensing capability.
The projection unit 30 is configured to project a projection image. Specifically, the projection unit 30 may be a long-throw projection light engine, which can project the image over a relatively long distance while keeping the image size moderate and the brightness appropriate. In practical applications, the projection unit 30 may be any other suitable device with a projection function.
The drive unit 40 is connected to the projection unit 30 and is configured to rotate the projection unit 30. In some embodiments, the drive unit 40 includes at least two motors and can rotate the projection unit 30 through 360 degrees in place, so that the light emitted by the projection unit 30 can sweep through 360 degrees around the projection unit 30. Specifically, to obtain the rotation angle of the drive unit 40 more accurately, the drive unit 40 further includes at least two rotation shafts and at least two encoders. Each motor is connected to a rotation shaft and an encoder; the rotation shaft transmits the motor's rotation, and the encoder records the motor's rotational position. The motors may be stepper motors or servo motors. In practical applications, the drive unit 40 may be any type of device capable of rotating in both the horizontal and vertical directions; for example, it may also be a gimbal or a multi-dimensional motion stage.
The control unit 50 is configured to control the operation of the spatial information collection unit 10, the target tracking unit 20, the projection unit 30 and the drive unit 40, and to process data to obtain results. The control unit 50 includes at least one processor, and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform any of the smart tracking projection methods described below.
Referring to FIG. 2, the control unit 50 includes at least one processor 51 and a memory 52 communicatively connected to the at least one processor 51; FIG. 2 takes one processor 51 as an example.
The processor 51 and the memory 52 may be connected by a bus or in other ways; FIG. 2 takes a bus connection as an example.
As a non-volatile computer-readable storage medium, the memory 52 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules. By running the non-volatile software programs, instructions and modules stored in the memory 52, the processor 51 executes various functional applications and data processing of the control device, that is, implements the smart tracking projection method in any of the method embodiments below.
The memory 52 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application required by at least one function, and the data storage area may store data created according to the use of the device. In addition, the memory 52 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 52 optionally includes memories arranged remotely relative to the processor 51, and these remote memories may be connected to the processor via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In some other embodiments, the smart tracking projection system 100 further includes a correction unit, which may be any suitable device with a correction function. The correction unit is connected to the projection unit 30 and the control unit 50, respectively, and is configured to correct the projection image, for example by angle correction and stretch correction, so that the projection image remains clear.
In some other embodiments, the smart tracking projection system 100 further includes a projection lens and a focusing unit. The projection lens is connected to the focusing unit, and the focusing unit is connected to the control unit 50. The control unit 50 controls the focusing unit to move the projection lens to the in-focus position, thereby achieving autofocus.
The smart tracking projection system provided by the present application has a wide range of application scenarios, and can be applied, for example, to security, commercial and entertainment settings.
A smart tracking projection method provided by an embodiment of the present application is described in detail below with reference to the drawings. Referring to FIG. 3, FIG. 3 is a schematic flowchart of a smart tracking projection method provided by an embodiment of the present application. The method may be executed by the control unit in FIG. 1 and includes:
Step S1: acquiring information about the real space from the spatial information collection unit, constructing a 3D model of the real space, and obtaining first position information of the projection image projected by the projection unit.
Specifically, the information about the real space contains image information of the projection image cast by the projection unit, so in the course of constructing the 3D model, the first position information of the projection image can be obtained from that image information. Further, a three-dimensional coordinate system may be preset in the 3D model; when determining the first position information, the center point of the projection image may be selected to compute the first three-dimensional coordinate of the projection image in that coordinate system.
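By way of illustration (not part of the claimed method), computing the first position from the center point of the projection image can be sketched as taking the centroid of the image's corner points in the preset coordinate system; the corner coordinates below are hypothetical sample values.

```python
# Compute the first position of the projected image as the centroid
# of its corner points in the 3D model's preset coordinate system.

def center_point(corners):
    """corners: list of (x, y, z) tuples; returns the centroid (x, y, z)."""
    n = len(corners)
    return tuple(sum(c[i] for c in corners) / n for i in range(3))

# Hypothetical corners of a projection image lying on a wall at z = 2 m.
corners = [(0, 0, 2), (1, 0, 2), (1, 1, 2), (0, 1, 2)]
print(center_point(corners))  # (0.5, 0.5, 2.0)
```

The same centroid computation can later be reused for the center of the target projection area.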
It can be understood that the 3D model of the real space is a one-to-one mapping of that space: all objects existing in the real space, together with their characteristic parameters such as length, width, height, three-dimensional coordinates and inter-object distances, can be obtained from the 3D model. In theory, the real space can be arbitrarily large; in practical applications, the information may also be collected over a real-space region selected by the user. In general, the selected region should include the projection image of the projection unit and the region where the tracked target is located. For more accurate smart tracking projection, in some other embodiments the region should also include the region where the projection unit itself is located.
Step S2: acquiring target image information from the target tracking unit, and obtaining second position information of the tracked target according to the target image information.
Specifically, based on the target image information acquired by the target tracking unit, the tracked target matching the target image information is located in the 3D model, and its second three-dimensional coordinate in the preset coordinate system of the 3D model is obtained, yielding the second position information of the tracked target.
It can be understood that the tracked target may be a person, an animal, or any other movable object, and that the projection image may interact with the tracked target, or combine with it to achieve augmented-reality projection.
Step S3: obtaining a target projection area according to the 3D model and the second position information.
To allow the projection image to move with the motion of the tracked target, the target projection area must be determined. For example, the 3D model is combined with the second position information of the tracked target to identify the areas around the target where projection is possible, from which the target projection area is obtained.
Specifically, referring to FIG. 4, step S3 further includes:
Step S31: determining at least one projectable area in the 3D model according to the second position information;
Step S32: grading the at least one projectable area according to its area and the projection area required by the projection image, to obtain projectable areas of different grades;
Step S33: obtaining an optimal projectable area from the graded projectable areas;
Step S34: determining the target projection area to be the optimal projectable area.
Specifically, the areas around the tracked target where projection is possible are detected according to the second position information, and the size of each projectable area is computed. The projectable areas are then graded against the preset projection area of the projection image: the closer a projectable area's size is to the preset projection area, the better its grade. Finally, the best-graded area is selected as the target projection area. In this way, the target projection area best matches the preset projection area of the projection image, which helps the image to be presented in full.
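A minimal sketch of this grading rule follows; it is illustrative only, since the embodiment specifies the criterion (closeness to the preset projection area) but not a data representation. The `(region_id, area)` tuples and sample values are assumptions.

```python
# Rank candidate projectable areas by how closely their area matches
# the preset projection area of the image; the closest match wins.

def pick_target_region(regions, desired_area):
    """regions: list of (region_id, area); returns the best region_id."""
    if not regions:
        return None
    # Smaller |area - desired_area| means a better grade.
    best_id, _ = min(regions, key=lambda r: abs(r[1] - desired_area))
    return best_id

# Hypothetical candidates detected around the tracked target.
candidates = [("wall", 2.0), ("door", 0.8), ("floor", 3.5)]
print(pick_target_region(candidates, desired_area=1.0))  # door
```

The same structure extends to the volume-based grading mentioned below for three-dimensional projection images.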
In some other embodiments, if the projection image of the projection unit is three-dimensional, the projectable areas may be graded according to their volume and the preset projection volume of the image, from which the target projection area is obtained.
To improve the user experience, in some other embodiments the projectable areas are also determined according to the visual range of the user or the tracked target: only areas within that visual range qualify as projectable areas, which are then graded to obtain the target projection area. It can be understood that the grading itself may also use the visual range; for example, the larger the portion of a projectable area's surface or volume that lies within the visual range of the user or the tracked target, the better its grade. In practical applications, the process of selecting the target projection area may be configured according to actual needs and is not limited to the examples in the embodiments of the present application.
Step S4: obtaining rotation information required by the projection unit according to the target projection area.
By analogy with obtaining the first three-dimensional coordinate for the first position information of the projection image, the center point of the target projection area may be selected to compute its target three-dimensional coordinate in the preset coordinate system of the 3D model. From the first three-dimensional coordinate and the target three-dimensional coordinate, at least one rotation path of the projection image can be determined, from which the rotation information required by the projection unit is obtained.
To ensure that, while rotating along with the motion of the tracked target, the projection image remains clearly visible along its rotation path and is not disturbed by obstacles on the path (an obstacle being, for example, a non-projectable area), the rotation path should be optimized once obtained; that is, the rotation information of the projection unit is selected optimally so that the projection image can be presented in full to the user or the tracked target throughout the rotation of the projection unit. Specifically, referring to FIG. 5, step S4 further includes:
Step S41: determining at least one rotation path of the projection image according to the target projection area and the first position information;
Step S42: grading the rotation paths according to their length and the number of non-projectable areas along them, to obtain rotation paths of different grades;
Step S43: obtaining an optimal rotation path from the graded rotation paths;
Step S44: obtaining, from the optimal rotation path, the rotation information corresponding to it.
In grading the rotation paths, the shorter the path and the fewer the non-projectable areas along it, the better its grade. The rotation information corresponding to the optimal rotation path is thereby obtained, ensuring that the projection image is presented in full to the user or the tracked target throughout the rotation.
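The path grading can be sketched as a weighted cost over the two stated criteria. The linear weighting and the sample paths are assumptions for illustration; the embodiment only states that shorter paths with fewer non-projectable areas rank higher.

```python
# Grade rotation paths: shorter paths with fewer non-projectable
# obstacle areas along them get a lower cost and therefore a better grade.

def pick_best_path(paths, obstacle_weight=10.0):
    """paths: list of (path_id, length, n_blocked_regions)."""
    if not paths:
        return None
    # Cost = path length + penalty per non-projectable area crossed.
    best = min(paths, key=lambda p: p[1] + obstacle_weight * p[2])
    return best[0]

# Hypothetical candidates: a short path crossing two obstacles versus a
# slightly longer obstacle-free detour.
paths = [("direct", 1.2, 2), ("detour", 2.0, 0)]
print(pick_best_path(paths))  # detour
```

With a large enough obstacle penalty, an obstacle-free detour beats a shorter but blocked path, matching the intent of keeping the image fully visible during rotation.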
Step S5: controlling the drive unit to operate according to the rotation information so that the projection image reaches the target projection area.
Specifically, the rotation information is the rotation angle of the projection unit. The drive unit is controlled according to this rotation angle, driving the projection unit to rotate accordingly, so that the projection image sweeps along the optimal rotation path and finally reaches the target projection area, completing the smart tracking projection.
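Since one embodiment uses at least two stepper motors for the drive unit, the rotation angle can be converted into per-axis step counts. The step angle and microstepping factor below are assumed hardware values, not figures given in the embodiment.

```python
# Convert the required pan/tilt rotation angles into stepper-motor step
# counts for the drive unit; the sign of the angle encodes the direction.

def angle_to_steps(angle_deg, step_angle_deg=1.8, microsteps=16):
    """Assumed 1.8-degree stepper with 16x microstepping -> 0.1125 deg/step."""
    return round(angle_deg / (step_angle_deg / microsteps))

pan_steps = angle_to_steps(45.0)    # horizontal-axis motor
tilt_steps = angle_to_steps(-10.0)  # vertical-axis motor
print(pan_steps, tilt_steps)  # 400 -89
```

The encoders mentioned in the embodiment would then confirm that the commanded steps produced the intended rotational position.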
The smart tracking projection method provided by the embodiments of the present application can control the projection image to move to the target projection area according to the motion of the tracked target, thereby achieving moving projection.
To correct the angular tilt incurred by the projection image during rotation, in some embodiments, referring to FIG. 6, after the projection image reaches the target projection area the method further includes:
Step S61: obtaining the angle change of the projection image according to the rotation information;
Step S62: obtaining a correction angle of the projection image according to the angle change;
Step S63: correcting the projection image according to the correction angle.
Specifically, the rotation angle of the projection image is obtained from the rotation information and a correction angle is generated, where the rotation angle comprises a magnitude and a direction, and so does the correction angle. It can be understood that the correction angle has the same magnitude as the rotation angle and the opposite direction. Finally, the correction unit corrects the projection image according to the magnitude and direction of the correction angle.
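The equal-magnitude, opposite-direction rule reduces to a sign flip when the angle is represented as a signed value; the sample angle below is hypothetical.

```python
# The correction angle equals the rotation-induced angle change in
# magnitude and is opposite in direction, so applying both cancels out.

def correction_angle(rotation_deg):
    """rotation_deg: signed in-plane rotation of the projection image."""
    return -rotation_deg

observed = 12.5                    # image rotated 12.5 degrees one way
corr = correction_angle(observed)  # rotate 12.5 degrees the other way
assert abs(observed + corr) < 1e-9
print(corr)  # -12.5
```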
To correct the image deformation incurred by the projection image during rotation, in some embodiments, referring to FIG. 7, after the projection image reaches the target projection area the method further includes:
Step S71: obtaining the stretch change of the projection image according to the rotation information;
Step S72: obtaining a correction stretch coefficient of the projection image according to the stretch change;
Step S73: correcting the projection image according to the correction stretch coefficient.
Specifically, deformation information of the projection image is generated from the rotation information and the target projection area, yielding a correction stretch coefficient for each part of the image, so that the correction unit can apply stretch correction and the image is projected accurately.
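As one possible geometric model (an assumption, since the embodiment does not give a formula), projecting onto a surface at an oblique incidence angle θ stretches the image along that axis by roughly 1/cos θ, so pre-scaling by cos θ compensates.

```python
import math

# Per-axis stretch correction coefficient under a simple oblique-incidence
# model: the image stretches by 1/cos(theta), so pre-scale by cos(theta).

def stretch_correction(theta_deg):
    """theta_deg: angle between the optical axis and the surface normal."""
    return math.cos(math.radians(theta_deg))

coeff = stretch_correction(30.0)
print(round(coeff, 3))  # 0.866
```

In practice a full keystone correction would use a per-corner homography rather than a single per-axis coefficient; the sketch only illustrates the idea of a correction stretch coefficient.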
To ensure that the projection image is accurately focused after reaching the target projection area, in some embodiments, referring to FIG. 8, the method further includes:
Step S8: automatically focusing the projection unit.
Specifically, a correspondence table between projection distance and lens focus position may be established in advance. In the table, each projection distance has a unique optimal lens position at which the projection image is sharpest. The target projection area where the projection image lies is obtained, the projection distance is determined from it, the corresponding lens focus position is looked up in the table, and the focusing unit is then controlled to move the projection lens to that position, achieving autofocus and keeping the projection image clear.
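The table lookup can be sketched as follows; the table values are hypothetical, and the nearest-entry rule is an assumption (an implementation could equally interpolate between entries).

```python
# Autofocus via a precomputed table mapping projection distance to the
# optimal lens focus position; pick the entry nearest the measured distance.

FOCUS_TABLE = {1.0: 120, 1.5: 150, 2.0: 170, 3.0: 190}  # distance (m) -> lens position

def focus_position(distance_m):
    """Return the tabulated lens position for the nearest known distance."""
    nearest = min(FOCUS_TABLE, key=lambda d: abs(d - distance_m))
    return FOCUS_TABLE[nearest]

print(focus_position(1.6))  # 150, since 1.5 m is the nearest table entry
```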
An embodiment of the present application further provides a non-volatile computer storage medium storing computer-executable instructions. When executed by one or more processors, for example the processor 51 in FIG. 2, the instructions cause the one or more processors to perform the smart tracking projection method in any of the above method embodiments, for example the steps shown in FIG. 3 to FIG. 8 described above, and to implement the functions of the devices described in FIG. 1.
An embodiment of the present application further provides a computer program product, including a computer program stored on a non-volatile computer-readable storage medium. The computer program includes program instructions which, when executed by a computer, cause the computer to perform the smart tracking projection method in any of the above method embodiments, for example the method steps of FIG. 3 to FIG. 8 described above, implementing the functions of the devices in FIG. 1.
The present application provides a smart tracking projection method applied to a smart tracking projection system. The method includes: acquiring information about the real space from the spatial information collection unit, constructing a 3D model of the real space, and obtaining first position information of the projection image; acquiring target image information from the target tracking unit, and obtaining second position information of the tracked target according to the target image information; obtaining a target projection area according to the 3D model and the second position information; obtaining rotation information required by the projection unit according to the target projection area; and controlling the drive unit to operate according to the rotation information so that the projection image reaches the target projection area. This method can track a target and move the projection image to the target projection area, thereby achieving moving projection.
It should be noted that the device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
From the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a general-purpose hardware platform, and of course also by hardware. Based on this understanding, the above technical solutions, or the parts thereof contributing to the related art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions to cause at least one computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or certain parts thereof.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Under the concept of the present application, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present application as described above exist, which are not provided in detail for the sake of brevity. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features; and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (11)

  1. A smart tracking projection method applied to a smart tracking projection system, the smart tracking projection system comprising a spatial information collection unit, a target tracking unit, a projection unit and a drive unit, wherein the method comprises:
    acquiring information about the real space from the spatial information collection unit, constructing a 3D model of the real space, and obtaining first position information of the projection image projected by the projection unit;
    acquiring target image information from the target tracking unit, and obtaining second position information of the tracked target according to the target image information;
    obtaining a target projection area according to the 3D model and the second position information;
    obtaining rotation information required by the projection unit according to the target projection area;
    controlling the drive unit to operate according to the rotation information so that the projection image reaches the target projection area.
  2. The smart tracking projection method according to claim 1, wherein obtaining the target projection area according to the 3D model and the second position information comprises:
    determining at least one projectable area in the 3D model according to the second position information;
    grading the at least one projectable area according to its area and the projection area required by the projection image, to obtain projectable areas of different grades;
    obtaining an optimal projectable area from the graded projectable areas;
    determining the target projection area to be the optimal projectable area.
  3. The smart tracking projection method according to claim 1, wherein obtaining the rotation information required by the projection unit according to the target projection area comprises:
    determining at least one rotation path of the projection image according to the target projection area and the first position information;
    grading the rotation paths according to their length and the number of non-projectable areas along them, to obtain rotation paths of different grades;
    obtaining an optimal rotation path from the graded rotation paths;
    obtaining, from the optimal rotation path, the rotation information corresponding to the optimal rotation path.
  4. The smart tracking projection method according to claim 3, wherein the rotation information is the rotation angle of the projection unit.
  5. The smart tracking projection method according to claim 1, wherein the method further comprises:
    obtaining the angle change of the projection image according to the rotation information;
    obtaining a correction angle of the projection image according to the angle change;
    correcting the projection image according to the correction angle.
  6. The smart tracking projection method according to claim 5, wherein the method further comprises:
    obtaining the stretch change of the projection image according to the rotation information;
    obtaining a correction stretch coefficient of the projection image according to the stretch change;
    correcting the projection image according to the correction stretch coefficient.
  7. The smart tracking projection method according to claim 6, wherein the method further comprises:
    automatically focusing the projection unit.
  8. A smart tracking projection system, wherein the smart tracking projection system comprises:
    a spatial information collection unit configured to collect information about the real space;
    a target tracking unit configured to acquire target image information of a tracked target;
    a projection unit configured to project a projection image;
    a drive unit connected to the projection unit and configured to rotate the projection unit;
    a control unit connected to the spatial information collection unit, the target tracking unit, the projection unit and the drive unit, respectively, the control unit comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the smart tracking projection method according to any one of claims 1-7.
  9. The smart tracking projection system according to claim 8, wherein the drive unit comprises at least two stepper motors.
  10. A non-volatile computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions which, when executed by a processor, cause the processor to perform the smart tracking projection method according to any one of claims 1-7.
  11. A computer program product, wherein the computer program product comprises a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions which, when executed by an electronic device, cause the electronic device to perform the smart tracking projection method according to any one of claims 1-7.
PCT/CN2021/082412 2020-12-29 2021-03-23 Smart tracking projection method and system WO2022141826A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/543,943 US11942008B2 (en) 2020-12-29 2021-12-07 Smart tracking-based projection method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011592014.9 2020-12-29
CN202011592014.9A CN112702587A (zh) 2020-12-29 Smart tracking projection method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/543,943 Continuation US11942008B2 (en) 2020-12-29 2021-12-07 Smart tracking-based projection method and system

Publications (1)

Publication Number Publication Date
WO2022141826A1 true WO2022141826A1 (zh) 2022-07-07

Family

ID=75511719

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082412 WO2022141826A1 (zh) 2020-12-29 2021-03-23 一种智能跟踪投影方法及系统

Country Status (2)

Country Link
CN (1) CN112702587A (zh)
WO (1) WO2022141826A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114422762B (zh) * 2021-12-25 2023-10-13 深圳市幕工坊科技有限公司 投影幕动作控制系统
CN114245091B (zh) * 2022-01-27 2023-02-17 美的集团(上海)有限公司 投影位置修正方法、投影定位方法及控制装置、机器人
CN114979596B (zh) * 2022-05-27 2024-05-07 峰米(重庆)创新科技有限公司 投影画面控制方法、投影设备、计算机设备及存储介质

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105589293A (zh) * 2016-03-18 2016-05-18 严俊涛 全息投影方法及全息投影系统
WO2016145079A1 (en) * 2015-03-09 2016-09-15 Laser Projection Technologies, Inc. 3d laser projection, scanning and object tracking
CN206575538U (zh) * 2017-03-23 2017-10-20 广景视睿科技(深圳)有限公司 一种动向智能投影显示系统
CN108513117A (zh) * 2018-05-09 2018-09-07 北京邦邦共赢网络科技有限公司 一种基于全息投影的影像投射方法及装置
CN109996051A (zh) * 2017-12-31 2019-07-09 广景视睿科技(深圳)有限公司 一种投影区域自适应的动向投影方法、装置及系统
US20190279373A1 (en) * 2016-06-13 2019-09-12 International Business Machines Corporation Object tracking with a holographic projection
CN110769214A (zh) * 2018-08-20 2020-02-07 成都极米科技股份有限公司 基于帧差值的自动跟踪投影方法及装置
CN110930518A (zh) * 2019-08-29 2020-03-27 广景视睿科技(深圳)有限公司 基于增强现实技术的投影方法及投影设备
CN111031298A (zh) * 2019-11-12 2020-04-17 广景视睿科技(深圳)有限公司 控制投影模块投影的方法、装置和投影系统
US20200143558A1 (en) * 2016-03-07 2020-05-07 Bao Tran Extended reality system
CN210491076U (zh) * 2019-11-04 2020-05-08 深圳市创凌智联科技有限公司 一种自驱动控制路径的wifi投影仪

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860729A (zh) * 2010-04-16 2010-10-13 天津理工大学 一种用于全方位视觉的目标跟踪方法
CN102221887B (zh) * 2011-06-23 2016-05-04 康佳集团股份有限公司 互动投影系统及方法
CN104182095B (zh) * 2014-08-13 2017-06-30 长春理工大学 移动式自定位激光3d投影系统


Also Published As

Publication number Publication date
CN112702587A (zh) 2021-04-23

Similar Documents

Publication Publication Date Title
WO2022141826A1 (zh) Smart tracking projection method and system
US10165179B2 (en) Method, system, and computer program product for gamifying the process of obtaining panoramic images
US11677920B2 (en) Capturing and aligning panoramic image and depth data
US9723203B1 (en) Method, system, and computer program product for providing a target user interface for capturing panoramic images
US10122997B1 (en) Automated matrix photo framing using range camera input
JP6235022B2 (ja) 複数デバイスを使用した周辺環境の多次元データキャプチャ
US20230186434A1 (en) Defocus operations for a virtual display with focus and defocus determined based on camera settings
EP1316211A2 (en) Image projection apparatus
WO2018140656A1 (en) Capturing and aligning panoramic image and depth data
GB2456802A (en) Image capture and motion picture generation using both motion camera and scene scanning imaging systems
US20140247263A1 (en) Steerable display system
CN114245091B (zh) Projection position correction method, projection positioning method, control device, and robot
US20030025649A1 (en) Image projection apparatus
CN115299031A (zh) 自动对焦方法及其相机系统
WO2022057043A1 (zh) Target-tracking dynamic projection method and dynamic projection device
US20160037148A1 (en) 3d-mapped video projection based on on-set camera positioning
US20230224576A1 (en) System for generating a three-dimensional scene of a physical environment
CN111064946A (zh) Video fusion method, system, device and storage medium based on indoor scenes
CN105807952A (zh) Information processing method and electronic device
Nicolescu et al. Segmentation, tracking and interpretation using panoramic video
Nicolescu et al. Electronic pan-tilt-zoom: a solution for intelligent room systems
Xu et al. Real-time keystone correction for hand-held projectors with an RGBD camera
US11942008B2 (en) Smart tracking-based projection method and system
US8054332B2 (en) Advanced input controller for multimedia processing
CN109309827A (zh) Multi-person real-time tracking device and method for a 360° floating light-field three-dimensional display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21912596

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21912596

Country of ref document: EP

Kind code of ref document: A1