WO2023088303A1 - Projection device and obstacle avoidance projection method - Google Patents

Projection device and obstacle avoidance projection method

Info

Publication number
WO2023088303A1
Authority
WO
WIPO (PCT)
Prior art keywords
projection
image
obstacle
projection device
controller
Prior art date
Application number
PCT/CN2022/132249
Other languages
English (en)
French (fr)
Inventor
卢平光
王昊
王英俊
岳国华
唐高明
何营昊
郑晴晴
甄凌云
孙超
李彩凤
卢善好
Original Assignee
海信视像科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202210590075.4A, published as CN114885141A
Priority claimed from CN202210600617.1A, published as CN115002432B
Application filed by 海信视像科技股份有限公司
Priority to CN202280063168.XA, published as CN118020288A
Publication of WO2023088303A1
Priority to US18/666,806, published as US20240305754A1

Classifications

    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM] (H04N: Pictorial communication, e.g. television; H04N 9/00: Details of colour television systems; H04N 9/12: Picture reproducers)
    • H04N 9/317 Convergence or focusing systems (under H04N 9/3141: Constructional details thereof)
    • H04N 9/3161 Modulator illumination systems using laser light sources (under H04N 9/315: Modulator illumination systems)
    • H04N 9/3185 Geometric adjustment, e.g. keystone or convergence (under H04N 9/3179: Video signal processing therefor)
    • H04N 9/3194 Testing thereof including sensor feedback (under H04N 9/3191: Testing thereof)
    • G03B 21/14 Details (under G03B 21/00: Projectors or projection-type viewers; Accessories therefor)

Definitions

  • the present application relates to the field of projection technology, in particular to a projection device and an obstacle avoidance projection method.
  • Projection equipment is based on imaging technology: it projects media data onto a projection medium such as a wall, curtain, or screen, so that the medium presents the media data. Users can place the projection device at a designated location, or move it to suit their requirements for projection position and orientation.
  • if an obstruction blocks the lens of the projection device, on the one hand it will affect the projection display of the media data.
  • on the other hand, the light projected by the lens carries high temperature, which can easily burn the obstruction; if the obstruction has a low ignition point it may even cause a fire hazard. In addition, if the obstruction blocks related components on the projection device, such as the camera or the distance sensor, it will interfere with the device's own focusing and correction, resulting in abnormal projection.
  • the projection device provided by the embodiments of the present application includes: a lens; an optical machine configured to project projection content onto a projection surface; a distance sensor configured to detect a distance detection value between the projection surface and the optical machine; and an image acquisition device configured to capture images of the projected content;
  • the controller is configured to: start the projection device in response to a power-on instruction; detect, according to the positional relationship of the lens, the distance sensor and the image acquisition device on the first plane together with the distance detection value, whether there is an obstruction between the projection device and the projection surface, where the first plane is the plane of the projection device that is parallel to the projection surface during projection; and, if an obstruction is detected between the projection device and the projection surface, control the device to send out prompt information for prompting removal of the obstruction.
  • An embodiment of the present application provides an obstacle avoidance projection method applied to a projection device
  • the projection device includes an optical machine, a lens, a distance sensor, an image acquisition device, and a controller
  • the method includes: starting the projection device in response to a power-on command; detecting, according to the positional relationship of the lens, the distance sensor and the image acquisition device on the first plane and the distance detection value of the distance sensor, whether there is an obstruction between the projection device and the projection surface, where the first plane is the plane of the projection device that is parallel to the projection surface during projection, and the projection surface is used to receive and display the projection content projected by the optical machine; and, if an obstruction is detected between the projection device and the projection surface, controlling the device to send out prompt information for prompting removal of the obstruction.
  • FIG. 1 is a schematic diagram of a projection scene of a projection device in an embodiment of the present application
  • FIG. 2 is a schematic diagram of the optical path of the projection device in the embodiment of the present application.
  • FIG. 3 is a schematic diagram of a circuit architecture of a projection device in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the optical path of the projection device in the embodiment of the present application.
  • FIG. 5 is a schematic diagram of the circuit structure of the projection device in the embodiment of the present application.
  • FIG. 6 is a schematic diagram of the lens structure of the projection device in the embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a distance sensor and a camera of a projection device in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a system framework for realizing display control by a projection device in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a first plane structure of a projection device in an embodiment of the present application.
  • FIG. 10 is a schematic diagram of occlusion scene one in the embodiment of the present application.
  • FIG. 11 is a schematic diagram of occlusion scene two in the embodiment of the present application.
  • FIG. 12 is a schematic diagram of occlusion scene three in the embodiment of the present application.
  • FIG. 13 is a schematic diagram of occlusion scene four in the embodiment of the present application.
  • FIG. 14 is a schematic diagram of the first plane structure of another projection device in the embodiment of the present application.
  • FIG. 15 is a schematic diagram of occlusion scene five in the embodiment of the present application.
  • FIG. 16 is a schematic diagram of occlusion scene six in the embodiment of the present application.
  • FIG. 17 is a schematic diagram of occlusion scene seven in the embodiment of the present application.
  • FIG. 18 is a schematic diagram of occlusion scene eight in the embodiment of the present application.
  • FIG. 19 is a flow chart of the first projection detection method in the embodiment of the present application.
  • FIG. 20 is a flow chart of the second projection detection method in the embodiment of the present application.
  • FIG. 21 is a flow chart of the third projection detection method in the embodiment of the present application.
  • FIG. 22 is a schematic diagram of the signaling interaction sequence by which the projection device according to the embodiment of the present application realizes the anti-eye (eye protection) function.
  • FIG. 23 is a schematic diagram of a signaling interaction sequence for realizing the display screen correction function of the projection device according to the embodiment of the present application.
  • FIG. 24 is a schematic flow diagram of the implementation of the autofocus algorithm by the projection device according to the embodiment of the present application.
  • FIG. 25 is a schematic flow diagram of the implementation of the keystone correction and obstacle avoidance algorithms by the projection device according to the embodiment of the present application.
  • FIG. 26 is a schematic flow diagram of the implementation of the screen-entry algorithm by the projection device according to the embodiment of the present application.
  • FIG. 27 is a schematic flow diagram of the projection device implementing the anti-eye algorithm according to the embodiment of the present application.
  • FIG. 28 is a schematic flow diagram of the projection device performing obstacle avoidance projection in the embodiment of the present application.
  • FIG. 29 is a schematic diagram of obstacle sets and outline levels in the embodiment of the present application.
  • FIG. 30 is a schematic flow chart of updating the obstacle contour coordinate set in the embodiment of the present application.
  • FIG. 31 is a schematic diagram of the change of the projection area in the embodiment of the present application.
  • FIG. 32 is a schematic flow chart of updating the obstacle contour coordinate set in the embodiment of the present application.
  • FIG. 33 is a schematic diagram of a rectangular grid and a non-obstacle area in the embodiment of the present application.
  • FIG. 34 is a schematic flow chart of recombining HSV images in the embodiment of the present application.
  • a projection device is a device that can project media data onto a projection medium.
  • the projection device can communicate with computers, radio and television networks, the Internet, VCD (Video Compact Disc) players, DVD (Digital Versatile Disc) players, game consoles, DV (Digital Video) camcorders and other devices to receive the media data to be projected.
  • the media data includes but is not limited to images, videos, texts, etc.
  • the projection medium includes but is not limited to physical forms such as walls, curtains, and screens.
  • FIG. 1 shows a schematic diagram of a projection scene of a projection device according to an embodiment of the present application
  • FIG. 2 shows a schematic diagram of an optical path of the projection device.
  • a projection system provided by the present application includes a projection medium 1 and a device 2 for projection.
  • the projection medium 1 is fixed at a first position, and the device 2 for projection is placed at a second position.
  • the device 2 for projection includes a projection assembly, which includes a laser light source 210 , an optical engine 220 and a lens 230 .
  • the laser light source 210 provides illumination for the optical machine 220 , the optical machine 220 modulates the light beam and outputs it to the lens 230 , the lens 230 performs imaging and projects it to the projection medium 1 , and the projection medium 1 presents a projection picture.
  • the laser light source 210 of the projection device 2 includes a laser component and an optical lens component.
  • the light beam emitted by the laser component can pass through the optical lens component to provide illumination for the optical machine 220 .
  • the optical lens components require a higher level of environmental cleanliness and hermetic sealing, while the chamber where the laser component is installed can be sealed to a lower dustproof level to reduce sealing costs.
  • the light engine 220 of the projection device 2 may include a blue light engine, a green light engine, and a red light engine, and may also include a cooling system, a circuit control system, and the like.
  • the light emitting component of the projection device can also be realized by an LED light source.
  • FIG. 3 shows a schematic diagram of a circuit architecture of a projection device.
  • the device 2 for projection may include a display control circuit, a laser light source, at least one laser driving component, and at least one brightness sensor, and the laser light source may include at least one laser corresponding to the at least one laser driving component .
  • the at least one refers to one or more, and a plurality refers to two or more.
  • the projection device can realize adaptive adjustment. For example, by setting a brightness sensor in the light output path of the laser light source 210, the brightness sensor 260 can detect a first brightness value of the laser light source and send the first brightness value to the display control circuit.
  • the display control circuit can obtain a second brightness value corresponding to the driving current of each laser, and determine that a laser has COD (catastrophic optical damage) when the difference between the laser's second brightness value and its first brightness value is greater than a difference threshold.
  • in that case, the display control circuit can adjust the current control signal of the laser driving component corresponding to the laser until the aforementioned difference is less than or equal to the threshold, thereby eliminating the COD failure of the laser; this reduces the damage rate of the laser and improves the image display effect of the projection equipment.
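  • to make the control loop above concrete, the following is a minimal sketch in Python of the detect-and-reduce cycle: compare the measured (first) brightness with the expected (second) brightness for the present driving current, and step the current control signal down until the difference falls back within the threshold. The hardware accessors (read_brightness_sensor, expected_brightness_for_current, set_current_control_signal) and the numeric constants are hypothetical placeholders, not an API from this application.

```python
# Sketch of the COD (catastrophic optical damage) recovery loop described above.
# All hardware accessors and constants are hypothetical placeholders.

DIFF_THRESHOLD = 0.15   # assumed brightness-difference threshold (normalized units)
CURRENT_STEP = 0.05     # assumed per-iteration current reduction (amperes)

def recover_from_cod(drive_current, read_brightness_sensor,
                     expected_brightness_for_current, set_current_control_signal):
    """Reduce the laser driving current until measured brightness matches expectation."""
    while drive_current > 0:
        measured = read_brightness_sensor()                        # first brightness value
        expected = expected_brightness_for_current(drive_current)  # second brightness value
        if abs(expected - measured) <= DIFF_THRESHOLD:
            return True        # difference back within threshold: COD condition cleared
        drive_current = max(0.0, drive_current - CURRENT_STEP)
        set_current_control_signal(drive_current)                  # step the PWM signal down
    return False               # current exhausted without recovery
```

Once the difference is back within the threshold, the current control signal can be restored to its initial normal-state PWM value, as the description below explains.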
  • the laser light source 210 in the projection device 2 may include an optical assembly 310 and independently arranged blue lasers 211, red lasers 212, and green lasers 213.
  • the projection device may also be referred to as a three-color projection device; the blue laser 211, the red laser 212, and the green laser 213 are all MCL packaged lasers, which are small in size and facilitate a compact arrangement of the optical paths.
  • FIG. 5 shows a schematic diagram of a circuit structure of a projection device according to an embodiment of the present application.
  • the laser driving component may include a driving circuit 301 , a switching circuit 302 and an amplifying circuit 303 .
  • the driving circuit 301 may be a driving chip.
  • the switch circuit 302 may be a metal-oxide-semiconductor (MOS) transistor.
  • the driving circuit 301 is respectively connected with the switch circuit 302 , the amplification circuit 303 and the corresponding laser included in the laser light source 210 .
  • the driving circuit 301 is used to output the driving current to the corresponding laser in the laser light source 210 through the VOUT terminal based on the current control signal sent by the display control circuit, and transmit the received enabling signal to the switch circuit 302 through the ENOUT terminal.
  • the display control circuit is also used to determine the laser's driving current from the amplified driving voltage, and to obtain the second brightness value corresponding to that driving current.
  • the amplifying circuit 303 may include: a first operational amplifier A1, a first resistor (also known as a sampling power resistor) R1, a second resistor R2, a third resistor R3 and a fourth resistor R4.
  • the display control circuit is further configured to restore the current control signal of the laser driving component corresponding to the laser to an initial value, where the initial value is the magnitude of the PWM current control signal supplied to the laser in the normal state. In this way, when a COD failure occurs in a laser it can be quickly identified, and measures to reduce the driving current can be taken in time, reducing continued damage to the laser and helping it recover by itself. The whole process requires no disassembly or human intervention, which improves the reliability of the laser light source and ensures the projection display quality of the laser projection equipment.
  • the device 2 for projection includes a controller, and the controller is connected with relevant hardware of the projection device, such as the display control circuit, the brightness sensor, the distance sensor and the image acquisition device, for controlling the realization of functions such as projection, focusing, calibration, occlusion detection, occlusion reminders, and screen on/off state adjustment.
  • the body of the projection device can be provided with several types of interfaces, such as a power interface, a USB interface, an HDMI (High Definition Multimedia Interface) interface, a network cable interface, a VGA (Video Graphics Array) interface, a DVI (Digital Visual Interface) interface, etc., to connect the signal source used to transmit the media data.
  • after the projection device is started, it can directly enter the display interface of the signal source selected last time, or a signal source selection interface, where the signal source may be, for example, a preset video-on-demand program, or one of the signal sources such as the HDMI interface, the USB interface, or a live TV interface.
  • the device 2 for projection can acquire media data from the target signal source, and project the media data on the projection medium 1 for display.
  • the device 2 for projection can be configured with an image acquisition device for cooperating with the projection device to realize adjustment and control of the projection process.
  • the projection device may be configured with a camera, such as a 3D camera, or a monocular or binocular camera, which may be used to capture the image displayed on the projection surface.
  • the camera can include a lens assembly in which a photosensitive element and lenses are arranged; the lenses refract light so that light from the image of the scene is irradiated onto the photosensitive element.
  • the photosensitive element can be based on a charge-coupled device or a complementary metal oxide semiconductor detection principle, chosen according to the specifications of the camera; it converts the optical signal into an electrical signal through the photosensitive material, and outputs the converted electrical signal as image data.
  • Fig. 6 shows a schematic diagram of the lens structure of the projection device 2 in some embodiments.
  • the lens 300 of the projection apparatus 2 may further include an optical assembly 310 and a driving motor 320 .
  • the optical component 310 is a lens group composed of one or more lenses, which can refract the light emitted by the optical machine 220, so that the light emitted by the optical machine 220 can be transmitted to the projection surface to form a transmitted content image.
  • the optical assembly 310 may include a lens barrel and a plurality of lenses disposed in the lens barrel. Depending on whether a lens can move, the lenses in the optical assembly 310 can be divided into a movable lens 311 and a fixed lens 312; by changing the position of the movable lens 311, the distance between the movable lens 311 and the fixed lens 312 is adjusted, changing the overall focal length of the optical assembly 310. Therefore, the driving motor 320, connected to the movable lens 311, can drive it to change position and thus realize the auto-focus function.
  • the focusing process described in some embodiments of the present application refers to changing the position of the movable lens 311 by means of the driving motor 320, thereby adjusting the distance between the movable lens 311 and the fixed lens 312, that is, adjusting the position of the image plane. Under the imaging principle of the lens combination in the optical assembly 310, the adjustment is really an adjustment of the image distance; but in terms of the overall structure of the optical assembly 310, adjusting the position of the movable lens 311 is equivalent to adjusting the overall focal length of the optical assembly 310.
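  • as a point of reference, the image-distance adjustment described above follows the standard thin-lens relation from general optics (not a formula specific to this application): with object distance $u$, image distance $v$ and focal length $f$,

\[ \frac{1}{f} = \frac{1}{u} + \frac{1}{v} \]

so for a fixed object distance, moving the movable lens 311 changes the image distance $v$ and thus shifts the image plane, which for the assembly as a whole is equivalent to a change of effective focal length.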
  • the lens of the projection device 2 needs to be adjusted to different focal lengths so as to transmit a clear image on the projection surface.
  • the distance between the projection device 2 and the projection surface will require different focal lengths depending on the placement position of the user. Therefore, in order to adapt to different usage scenarios, the device 2 for projection needs to adjust the focal length of the optical component 310 .
  • Fig. 7 shows a schematic structural diagram of a distance sensor and a camera in some embodiments.
  • the device 2 for projection may include an optical machine 220, a projection medium 1, and a distance sensor 600, and may also have a built-in or external camera 700; the camera 700 can capture the picture projected by the device 2 for projection to obtain the projected image.
  • the device 2 for projection determines whether the current lens focal length is appropriate by performing sharpness detection on the projected content image, and adjusts the focal length when it is not appropriate.
  • the projection device 2 can repeatedly adjust the position of the lens and take pictures, finding the focus position by comparing the sharpness of the pictures taken at successive positions, so that the movable lens 311 in the optical assembly 310 is adjusted to the proper position.
  • the controller 500 may first control the driving motor 320 to gradually move the movable lens 311 from the focus start position to the focus end position, continuously acquiring projected images through the camera 700 during this period. Then, by performing sharpness detection on the multiple projected images, the position with the highest sharpness is determined, and finally the driving motor 320 is controlled to adjust the movable lens 311 from the focus end position back to the position with the highest sharpness, completing automatic focusing.
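  • the coarse sweep just described (move the lens, capture, score sharpness, return to the sharpest position) is commonly implemented with a Laplacian-variance sharpness metric. The following is a minimal OpenCV sketch under that assumption; move_lens_to and capture_image are hypothetical callbacks standing in for the driving motor 320 and the camera 700.

```python
import cv2

def sharpness(image_bgr):
    """Laplacian variance: a standard focus metric, higher means sharper."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def autofocus_sweep(move_lens_to, capture_image, start_pos, end_pos, step):
    """Sweep the movable lens across its travel and settle on the sharpest position."""
    best_pos, best_score = start_pos, -1.0
    for pos in range(start_pos, end_pos + 1, step):
        move_lens_to(pos)                      # driving motor moves the movable lens
        score = sharpness(capture_image())     # capture a projected image and score it
        if score > best_score:
            best_pos, best_score = pos, score
    move_lens_to(best_pos)                     # adjust back to the sharpest position
    return best_pos
```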
  • when a binocular camera is installed on the projection device, it specifically includes a left camera (first camera) and a right camera (second camera), named according to their installation positions on the first plane of the device.
  • the two cameras can simultaneously collect the image of the projection screen presented on the projection medium 1; if at least one camera of the binocular pair is blocked, the image captured by the blocked camera will contain no projection content. For example, if the occluder is red, the blocked camera may capture a pure red image.
  • the first plane is a plane parallel to and opposite to the projection plane of the projection medium 1 in the shell plane of the projection device 2 .
  • FIG. 8 illustrates a schematic diagram of the system framework through which the projection device realizes display control, including an application service layer, a process communication framework, an operation layer, a framework layer, a correction service, a camera service, a time-of-flight service, and the hardware and its drivers.
  • the application service layer is used to realize interaction between the projection device and the user; based on the display of the user interface, the user can configure various parameters of the projection device and the display screen, and the controller coordinates and calls the algorithm services corresponding to the various functions, which makes it possible to automatically correct the display screen of the projection device when the display is abnormal.
  • the service layer can include the correction service, the camera service, the time-of-flight (TOF) service, etc.; these services converge on the application program service layer (APK Service) to realize the specific functions of the different service configurations of the projection device. Downward, this layer docks with the algorithm library, the camera, the time-of-flight sensor and other data acquisition services, encapsulating the complex logic of the bottom layer and transmitting the business data to the corresponding service layer.
  • the underlying algorithm library can provide correction services and control algorithms for various functions of the projection device.
  • the algorithm library can complete various mathematical operations based on OpenCV to provide basic capabilities for correction services.
  • OpenCV is a cross-platform computer vision and machine learning software library released based on BSD license (open source), which can run on operating systems such as Linux, Windows, Android and Mac OS.
  • the device 2 for projection has the characteristics of telephoto micro-projection; the controller controls the overall system architecture and realizes projection control of the projection equipment based on the underlying program logic, including but not limited to automatic keystone correction of the projection screen, automatic screen entry, automatic obstacle avoidance, auto-focus, anti-eye (eye protection), occlusion detection, occlusion reminders, screen on/off control, and other functions.
  • the device 2 for projection is equipped with a gyro sensor.
  • the gyro sensor can sense displacement of the projection device and actively collect position data; the collected position data is then sent through the framework layer to the application service layer, to support user interface interaction and the application data required in the process of application interaction.
  • the location data can also be used for data calls by the controller in the implementation of algorithm services.
  • the device 2 for projection is also equipped with a distance sensor for detecting the distance.
  • the distance sensor can be a time-of-flight (TOF) sensor. After the time-of-flight sensor collects distance data, it sends the distance data to the time-of-flight service; after the time-of-flight service obtains the distance data, it sends the collected distance data onward through the process communication framework.
  • the distance data is used in data calls by the controller and in interactive use by the user interface and program applications, and serves as one of the reference data for occlusion detection.
  • the device 2 for projection can also be configured with an image acquisition device, which can be a binocular camera, a depth camera, a 3D camera, etc. The image acquisition device sends the collected image data to the camera service, and the camera service sends the image data to the process communication framework and/or the correction service; the process communication framework sends the image data to the application service layer, where it is used in data calls by the controller, the user interface and program applications, and serves as the second reference data for occlusion detection.
  • data interaction is performed with the application service through the process communication framework, and the projection correction parameters are then fed back to the correction service through the process communication framework; the correction service sends the projection correction parameters to the operation layer of the projection device, the operating system generates a correction instruction according to the projection correction parameters and sends a correction signal to the optical-machine control driver module, and the optical-machine driver module adjusts the optical-machine working conditions according to the projection correction parameters, completing the automatic correction of the projection screen.
  • the projection device may correct the projected picture.
  • the relationship among distance, horizontal angle and offset angle can be created in advance; the controller of the projection device then obtains the current distance from the optical machine to the projection medium 1 and, combining it with this associated relationship, determines the target included angle between the optical machine and the projection medium 1 at the current moment, thereby realizing projection screen correction.
  • the target included angle is specifically implemented as an included angle between the central axis of the optical machine and the projection surface of the projection medium 1 .
  • the focus can then be re-adjusted: the controller detects whether the automatic focus function is enabled; if it is not enabled, the controller ends the automatic focus operation; if it is enabled, the controller performs focusing calculations based on the distance detection value of the time-of-flight sensor.
  • the controller queries a preset mapping table according to the distance detection value of the time-of-flight sensor; the preset mapping table records the mapping relationship between distance and focal length, so the focal length of the projection device corresponding to the distance detection value is obtained. The middleware then sends the obtained focal length to the optical machine of the projection device; after the optical machine emits laser light at this focal length, at least one camera captures the projected content image, and the controller detects the sharpness of the projected content image to determine whether the current lens focal length is suitable; if it is not, focusing processing is needed.
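  • a preset mapping table of this kind can be stored as sorted (distance, focal length) pairs with linear interpolation between entries. A minimal sketch with invented calibration values (the real table would come from device calibration):

```python
import bisect

# Hypothetical calibration table: (distance in mm, focus setting); values invented.
FOCUS_TABLE = [(1000, 120), (1500, 150), (2000, 170), (3000, 190), (4000, 200)]

def focus_for_distance(distance_mm):
    """Look up, and linearly interpolate, the focus setting for a TOF distance reading."""
    distances = [d for d, _ in FOCUS_TABLE]
    i = bisect.bisect_left(distances, distance_mm)
    if i == 0:
        return FOCUS_TABLE[0][1]           # nearer than the table covers: clamp low
    if i == len(FOCUS_TABLE):
        return FOCUS_TABLE[-1][1]          # farther than the table covers: clamp high
    (d0, f0), (d1, f1) = FOCUS_TABLE[i - 1], FOCUS_TABLE[i]
    return f0 + (f1 - f0) * (distance_mm - d0) / (d1 - d0)
```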
  • the projection device locates the focus position with the highest sharpness by adjusting the lens position, shooting, and comparing the change in sharpness of the projected content image before and after each adjustment.
  • if the judgment result meets the preset completion conditions, the auto-focusing process ends; if it does not, the middleware fine-tunes the focal length parameters of the optical machine of the projection device, for example gradually fine-tuning the focus according to a preset step size, setting the adjusted focal length parameters in the optical machine again, taking pictures and evaluating the sharpness, and finally comparing the sharpness of the projected images to lock in the optimal focal length and complete automatic focusing.
  • At least a lens, a distance sensor and an image acquisition device are arranged on the first plane of the projection device, and the image acquisition device may include one or more cameras.
  • the first plane is a plane parallel to and opposite to the projection plane on the device 2 used for projection during projection.
  • a first measurement component and a second measurement component 250 are arranged on the first plane; the first measurement component includes a first camera 241, and the second measurement component 250 includes a second camera 251 and a distance sensor 252.
  • the first measuring component is set at the first position of the first plane (corresponding to the left area of the first plane)
  • the second measuring component 250 is set at the second position of the first plane (corresponding to the right area of the first plane)
  • the first camera 241 and the second camera 251 are included in the image acquisition device, and there should be a certain distance between the first camera 241 and the second camera 251.
  • Which side of the first plane the first measurement component and the second measurement component 250 are located on does not affect the essence of this solution.
  • the lens 230 of the projection assembly and the first camera 241 are on the same side of the first plane, that is, both are located on the left side of the first plane, and the spacing between the lens 230 and the first camera 241 is small, so their positional relationship is compact.
  • the distance between the second camera 251 and the distance sensor 252 is small and the location is compact.
  • the center points of the lens 230 , the first camera 241 , the second camera 251 and the distance sensor 252 may be set at the same height.
  • the distance sensor 252 may use a TOF sensor, or other sensors for detecting distance; the first camera 241 and the second camera 251 may use a 3D camera, a depth camera, and the like.
  • the following four occlusion scenarios may be included:
  • Occlusion scene one: referring to the example in FIG. 10, the occluder is relatively small and its range only covers the left area of the first plane; that is, only the lens 230 and the first camera 241 are blocked, while the second camera 251 and the distance sensor 252 on the right are unblocked.
  • Occlusion scene two: referring to the example in FIG. 11, the occluder is relatively small and its range only covers the right area of the first plane; that is, only the second camera 251 and the distance sensor 252 are blocked, while the lens 230 and the first camera 241 on the left are unblocked.
  • Occlusion scene three: referring to the example in FIG. 12, the occluder is relatively large and its range covers both the left and right sides of the first plane; that is, the lens 230, the first camera 241, the second camera 251 and the distance sensor 252 are all blocked.
  • Occlusion scene four: referring to the example in FIG. 13, there is no occluder in front of the projection device; that is, the lens 230, the first camera 241, the second camera 251 and the distance sensor 252 are all unblocked.
  • for occlusion scenes one and three, since the lens 230 is blocked, on the one hand the projection display of the media data is affected; on the other hand, the occluder may be burned by the high-temperature projection light, and when its ignition point is low this may even cause a fire hazard, posing a safety risk. For occlusion scene two, although the lens 230 is not occluded, the distance sensor 252 is occluded, which leaves the projection device unable to self-correct accurately. It can be seen that detecting occluders on the projection device and promptly prompting the user to remove them is very necessary.
  • when the controller receives a power-on broadcast or a standby broadcast, where the standby broadcast includes an STR (Suspend to RAM) broadcast, the distance sensor 252 is used to perform distance detection.
  • if the distance sensor 252 is blocked (as in occlusion scenes two and three), the signal it emits is reflected back by the obstruction midway, resulting in a small distance detection value; if occlusion scene four is satisfied, the signal emitted by the distance sensor 252 is reflected back by the projection medium 1, and the distance detection value in that scene equals the projection distance L, that is, the separation distance between the projection surface and the optical machine.
  • after the controller acquires the distance detection value of the distance sensor 252, it compares the distance detection value with a preset distance threshold. If the distance detection value is less than or equal to the distance threshold, occlusion scene two may hold, that is, the lens 230 is not occluded while the distance sensor 252 is occluded.
  • in that case, the projection device needs to prompt removal of the occluder; alternatively, if the distance detection value is less than or equal to the distance threshold, occlusion scene three may hold, that is, both the lens 230 and the distance sensor 252 are blocked, which not only affects the automatic calibration of the projection equipment but also risks the occluder being burned by the high-temperature projection light, so prompting removal of the occluder is likewise necessary.
  • the distance threshold satisfies 0 < distance threshold < L, where L represents the distance between the lens 230 and the projection surface of the projection medium 1, that is, the projection distance; the distance threshold can be set based on the safe distance between the lens and a blocking object, that is, so as to keep an object near the lens from being burned by the high-temperature projection light.
  • the controller controls the projection device to prompt removal of the occluder, and records that the projection device is in the occluder-present state.
  • the projection device can be configured with a voice player 260, and the prompt message is broadcast through the voice player 260; the prompt message is, for example, "please remove the obstruction in front of the projection device as soon as possible". Alternatively, the projection device controls the projection screen on the projection medium 1 to display text information prompting removal of the occluder; or the projection device pushes the prompt information to an associated electronic device with which it can communicate, such as a smart phone, a tablet computer, or a computer.
  • the manner in which the projection device prompts for removal of the occluder is not limited to the examples in this application.
  • an occlusion state flag bit can be set in the system of the projection device to record and indicate the occlusion state of the projection device. The occlusion state includes a no-occluder state and an occluder-present state, where the occluder-present state indicates that there is an occluder between the projection device and the projection surface, and the no-occluder state indicates that there is none. For example, a state value of 0 in the occlusion state flag represents the no-occluder state, and a state value of 1 represents the occluder-present state.
  • when the occlusion state flag records the occluder-present state, the projection device prompts removal of the occluder. After the user removes the occluder, the projection device detects that there is no occluder in front of it (that is, occlusion scene four is satisfied) and updates the state value of the occlusion state flag, changing the occlusion state to the no-occluder state. Whenever the projection device detects the appearance or the removal of an occluder, it must synchronously change the state value recorded in the occlusion state flag.
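  • in code, such a flag amounts to one bit of state that is updated only on transitions, so that prompts fire when an occluder appears and are cleared when it is removed. A minimal sketch (names are illustrative, not from this application):

```python
NO_OCCLUDER = 0    # no obstruction between the projection device and projection surface
HAS_OCCLUDER = 1   # obstruction present

class OcclusionFlag:
    """One-bit occlusion state, kept in sync with each detection result."""
    def __init__(self):
        self.state = NO_OCCLUDER

    def update(self, occluder_detected):
        """Return True when the state changed, so the caller reacts only to transitions."""
        new_state = HAS_OCCLUDER if occluder_detected else NO_OCCLUDER
        changed = new_state != self.state
        self.state = new_state
        return changed
```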
  • the controller can activate the first camera 241 and the second camera 251, acquire the first image captured by the first camera 241 and the second image captured by the second camera 251, and compare the first image with the second image.
  • in occlusion scene two, the second camera 251 and the distance sensor 252 on the right are blocked while the lens 230 and the first camera 241 on the left are not, so the first image is a normally collected projection screen image; because the lens of the second camera 251 is very close to the occluder, the image refracted through the lens of the second camera 251 differs from the first image.
  • for example, if the occluder is black, the second image appears as a pure black image; in this scenario, the similarity between the first image and the second image is low.
  • in occlusion scene three, by contrast, both cameras are blocked, so the similarity between the first image and the second image taken in that scene is high.
  • when the distance detection value is less than or equal to the distance threshold, the controller further calculates the similarity between the first image and the second image and compares it with a preset similarity threshold. If the similarity is greater than or equal to the similarity threshold, the first and second images are considered highly similar, and the current scene is determined to be occlusion scene three; if the similarity is less than the similarity threshold, the two images are considered dissimilar, and the current scene is determined to be occlusion scene two.
  • if the controller determines that occlusion scene two has occurred: since the lens 230 is not occluded in this scene, there is no safety risk of the high-temperature light burning the occluder; the only problem is that the projection device cannot correct itself precisely because the distance sensor 252 is occluded. It is therefore only necessary to control the projection device to prompt removal of the occluder and to have the occlusion state flag record the occluder-present state.
  • if the controller determines that occlusion scene three has occurred: in this scene, the lens 230 and the distance sensor 252 are both blocked, which not only affects the automatic calibration of the projection device but also means the high-temperature light emitted by the lens 230 could burn the occluder. The controller therefore controls the projection device to prompt removal of the occluder, has the occlusion state flag record the occluder-present state, and performs a screen-off protection operation on the projection device, that is, controls the projection assembly to suspend projecting media data to the projection medium 1 so that the lens 230 no longer emits projection light, preventing the projection light from burning the occluder before it is removed.
  • the controller may control the distance sensor 252 to measure a distance detection value at preset intervals and turn the sensor off immediately after each detection, to reduce the system resource consumption of the projection device. Based on the previous embodiment, if the controller then detects that there is no occluder in front of the projection device, that is, the user has removed the occluder, it performs a screen-on operation, restoring the projection assembly's projection of media data to the projection medium 1, and changes the record of the occlusion state flag to the no-occluder state.
  • in occlusion scene one, the occluder only covers the first position of the first plane (the left side of the first plane as shown in FIG. 10), that is, the lens 230 is blocked but the distance sensor 252 is not.
  • in occlusion scene one, the signal emitted by the distance sensor 252 travels to the projection medium 1 and is reflected back, so the distance detection value measured by the distance sensor 252 is greater than the distance threshold; in occlusion scene four, there is no occluder in front of the projection device at all, so neither the lens 230 nor the distance sensor 252 is blocked, the signal emitted by the distance sensor 252 is likewise reflected back from the projection medium 1 rather than by an obstruction midway, and the distance detection value in this scene is also greater than the distance threshold.
  • the controller can activate the first camera 241 and the second camera 251 , acquire the first image captured by the first camera 241, acquire the second image captured by the second camera 251, and compare the first image and the second image.
  • in occlusion scene one, the lens 230 and the first camera 241 on the left are blocked while the second camera 251 and the distance sensor 252 on the right are not, so the second image is a normally collected projection screen image; because the lens of the first camera 241 is very close to the occluder, the picture refracted through the lens of the first camera 241 differs from the second image.
  • for example, if the occluder is white, the first image appears as a pure white image; in this scenario, the similarity between the first image and the second image is low.
  • in occlusion scene four, the lens 230, the first camera 241, the second camera 251 and the distance sensor 252 are all unblocked, and the first camera 241 and the second camera 251 collect the same image of the projection screen presented on the projection medium 1; therefore, the similarity between the first image and the second image is relatively high.
  • when the distance detection value is greater than the distance threshold, the controller further calculates the similarity between the first image and the second image and compares it with the preset similarity threshold. If the similarity is greater than or equal to the similarity threshold, the two images are considered highly similar and the current scene is determined to be occlusion scene four; if the similarity is less than the similarity threshold, the two images are considered dissimilar and the current scene is determined to be occlusion scene one.
  • if the controller determines that occlusion scene four has occurred: since there is no occluder in front of the projection device in this scene, there is no risk of the projection light burning an occluder and no occluder interfering with the automatic correction of the projection device. The projection device therefore does not need to issue prompt information about removing an occluder; the occlusion state flag records the no-occluder state, and the projection device operates normally.
  • if the controller determines that occlusion scene one has occurred, in which the lens 230 is blocked, the high-temperature projection light emitted by the lens 230 may burn the occluder, so the controller controls the projection device to prompt removal of the occluder, records the occluder-present state in the occlusion state flag, and performs the screen-off protection operation on the projection device, that is, controls the projection assembly to suspend projecting media data to the projection medium 1 so that the lens 230 stops emitting projection light, preventing the projection light from burning the occluder before it is removed.
  • when the controller detects that there is no occluder in front of the projection device, that is, when it determines that occlusion scene four is satisfied, it queries the state value recorded in the occlusion state flag bit. If the occlusion state flag currently indicates the occluder-present state and the projection device is in the screen-off protection state, the user has removed the occluder of occlusion scene one or occlusion scene three; the controller then executes the screen-on operation so that the projection assembly resumes projecting media data to the projection medium 1, and changes the record of the occlusion state flag to the no-occluder state. If the occlusion state flag currently indicates the no-occluder state and the projection device is in the screen-on state, there is no occluder in front of the projection device, and normal operation is maintained. If the occlusion state flag currently indicates the occluder-present state and the projection device is in the screen-on state, the user has removed the occluder of occlusion scene two, and the controller only needs to change the record of the occlusion state flag to the no-occluder state.
  • algorithms such as cosine distance and statistical histogram can be used to compare the similarity between the first image and the second image.
  • the cosine distance is used to compare the overall similarity between the first image and the second image, while the statistical histogram algorithm analyzes image features, making it possible to determine which image acquisition device or devices are blocked.
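  • both techniques named above are standard image-comparison methods. A minimal OpenCV/NumPy sketch, comparing two frames by grayscale-histogram correlation and by cosine similarity of the pixel vectors (the resize dimensions are an assumption):

```python
import cv2
import numpy as np

def histogram_similarity(img1, img2):
    """Correlation of grayscale histograms; 1.0 means identical intensity distributions."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    h1 = cv2.calcHist([g1], [0], None, [256], [0, 256])
    h2 = cv2.calcHist([g2], [0], None, [256], [0, 256])
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

def cosine_similarity(img1, img2):
    """Cosine similarity of flattened pixel vectors, resized to a common small size."""
    a = cv2.resize(img1, (64, 64)).astype(np.float64).ravel()
    b = cv2.resize(img2, (64, 64)).astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```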
  • FIG. 14 illustrates the first plane structure of another projection device: a first measurement component and a second measurement component 250 are arranged on the first plane; the first measurement component includes a first camera 241, and the second measurement component 250 includes a second camera 251 and a distance sensor 252.
  • the difference from FIG. 9 is that the lens 230 of the projection assembly is on the same side as the second measurement component 250 on the first plane; that is, the lens 230, the second camera 251 and the distance sensor 252 are all located at the second position (the right area) of the first plane, close to one another, while the first camera 241 is located alone at the first position (the left area) of the first plane. Which side of the first plane the first measurement component and the second measurement component 250 occupy does not affect the essence of this solution.
  • the following four occlusion scenarios may be included:
  • Occlusion scene five: referring to the example in FIG. 15, the occluder only covers the right side of the first plane, that is, the lens 230, the second camera 251 and the distance sensor 252 are all occluded while the first camera 241 is not, and the distance detection value is less than or equal to the distance threshold.
  • in this case, the controller controls the projection device to prompt removal of the occluder, has the occlusion state flag record the occluder-present state, and performs the screen-off protection operation, that is, controls the projection assembly to suspend projecting media data to the projection medium 1 so that the lens 230 stops emitting projection light, preventing the projection light from burning the occluder before it is removed.
  • Occlusion scene six: referring to the example in FIG. 16, the occluder only covers the right side of the first plane, that is, the lens 230, the second camera 251 and the distance sensor 252 are all occluded while the first camera 241 is not. Assuming the distance detection value is d, this scene satisfies distance threshold < d < projection distance L. Since the distance between the lens 230 and the occluder exceeds the distance threshold, there is no risk of the occluder being burned by the high-temperature projection light; but because the occluder blocks the distance sensor 252, the projection device cannot automatically calibrate and focus. The controller therefore controls the projection device to prompt removal of the occluder and has the occlusion state flag record the occluder-present state, but the projection device can still be kept in the screen-on state.
  • the projection distance L is kept at a fixed value without moving the projection device 2 and the projection medium 1 .
  • when the lens 230, the distance sensor 252 and the second camera 251 on the right side of the first plane are not blocked, automatic calibration and focusing of the projection equipment are unaffected and there is no high-temperature burning of an occluder by the projection light; therefore, when the distance detection value d equals the projection distance L, it is considered that there is no occluder in front of the projection device, and the controller does not need to prompt removal of an occluder, with the occlusion state flag recording the no-occluder state.
  • the projection device illustrated in FIG. 14 has a binocular camera structure; as a modification of FIG. 14, the projection device can instead use a monocular camera structure, for example by omitting the first camera 241, so that the first plane is provided with only a single measurement component, which includes an image acquisition device and a distance sensor.
  • the measurement component can be arranged at any position on the first plane.
  • the lens 230 is adjacent to the position of the measurement component; the details are not repeated here.
  • in the monocular structure, the obstruction detection mechanism relies mainly on the distance detection value measured by the distance sensor 252, without similarity analysis of images collected by dual cameras. For occlusion scene five or occlusion scene six, as long as the distance detection value d is less than the projection distance L, a prompt to remove the occluder can be issued and the projection device can be protected by turning off the screen.
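  • for the monocular variant, the decision therefore reduces to comparing the distance reading d against the projection distance L and the distance threshold. A minimal sketch (the tolerance eps is an assumption to absorb sensor noise):

```python
def monocular_occlusion_check(d, L, distance_threshold, eps=10.0):
    """Distance-only occlusion logic for the single-camera variant (d, L in mm)."""
    if d >= L - eps:
        return "no_occluder"        # signal reached the projection surface
    if d <= distance_threshold:
        return "occluder_close"     # prompt removal and apply screen-off protection
    return "occluder_far"           # prompt removal; the screen may stay on
```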
  • FIG. 10, FIG. 11, FIG. 12, FIG. 13, FIG. 15, FIG. 16, FIG. 17 and FIG. 18 are drawn from the first viewing angle, while FIG. 9 and FIG. 14 are drawn from the second viewing angle. The two viewing angles are opposite, so the sides corresponding to the first position and the second position on the first plane are reversed between them. The first viewing angle is the actual viewing angle when the projection device is used, so this application defines the relative directions of the first position and the second position on the first plane based on the first viewing angle.
  • FIG. 19 provides a first projection detection method, the method is executed by a controller of a projection device, and includes the following steps:
  • Step S1901: in response to a power-on command of the projection device, start the projection device.
  • Step S1902: according to the positional relationship of the lens, the distance sensor and the image acquisition device on the first plane of the projection device, and the distance detection value of the distance sensor, detect whether there is an obstruction between the projection device and the projection surface.
  • the first plane is a plane of the projection device parallel to the projection surface during projection, and the projection surface is used to receive and display the projection content projected by the optical machine.
  • Step S1903: if it is detected that there is an obstruction between the projection device and the projection surface, control the device to send out a prompt message prompting removal of the obstruction.
  • the distribution and positional relationship of the lens 230, the distance sensor 252 and the image acquisition device on the first plane of the projection device are not limited, nor is the image acquisition device limited to one or more cameras. The blocking state of the projection device is therefore detected based on the positional relationship of the relevant components on the first plane combined with the distance detection value of the distance sensor 252. When there is a blocking object, the user is prompted to remove it in time, which prevents the blocking object from being burned by the projection light and prevents abnormal operation caused by important components being blocked; safety is ensured while the display effect of the projected content is improved.
  • This embodiment realizes obstruction detection and provides countermeasures based on the structural characteristics of the projection device.
  • FIG. 20 provides a second projection detection method, which is executed by the controller of the projection device and includes the following steps:
  • Step S2001: in response to a power-on command of the projection device, start the projection device.
  • Step S2002: control the distance sensor to calculate a distance detection value.
  • Step S2003: judge whether the distance detection value is greater than the distance threshold. If the distance detection value is less than or equal to the distance threshold, execute step S2004; otherwise, execute steps S2006 to S2008.
  • Step S2004: judge whether the projection device is currently in the obstructed state. If the projection device is currently in the no-obstruction state, that is, the front of the projection device has changed from unobstructed to obstructed, perform step S2005; if the projection device is already in the obstructed state, return to step S2002 and measure the distance detection value periodically to detect whether the blocking object has been removed.
  • Step S2005: record that the projection device is in the obstructed state, prompt removal of the obstruction, and control the optical machine to stop projecting the projection content to the projection surface.
  • Step S2006: control the first camera to capture a first image and control the second camera to capture a second image.
  • Step S2007: calculate the similarity between the first image and the second image.
  • Step S2008: judge whether the similarity is smaller than a similarity threshold. If the similarity between the first image and the second image is less than the similarity threshold, an occluder is present, so step S2004 is executed; if the similarity is greater than or equal to the similarity threshold, there is no occluder, so step S2009 is executed.
  • Step S2009: judge whether the projection device is currently in the no-obstruction state. If the projection device is currently in the obstructed state, the user has removed the obstruction, so perform step S2010; if the projection device is already in the no-obstruction state, that is, there is no obstruction in front of the projection device, return to step S2002 and measure the distance detection value periodically to detect whether an occluder appears.
  • Step S2010: record that the projection device is in the no-obstruction state, and control the optical machine to resume projecting the projection content to the projection surface.
  • the combination of the distance sensor and the dual image acquisition devices is used to identify the type of occlusion scene; a sketch of this detection loop is given below. If an occluder is present, the projection device prompts the user to remove it in time and performs screen-off protection, which improves the safety of the projection device, prevents the occluder from being burned by the projection light, prevents the occluder from degrading the displayed projection picture, and avoids the abnormal situation in which the projection device cannot automatically correct and focus when key components such as the distance sensor are blocked. If the user removes the blocking object after receiving the prompt, the projection device detects that the blocking object no longer exists, turns the screen back on and resumes projecting media data to the projection medium.
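The following is a minimal sketch of that loop. It is illustrative only: the interfaces read_distance, capture, show_prompt and set_screen are placeholders standing in for the real middleware, the thresholds are assumed values, and the histogram-correlation similarity is one plausible realization of the similarity measure in step S2007, not the patented one.

```python
import cv2

DIST_THRESHOLD_M = 0.3   # assumed safety distance in front of the lens
SIM_THRESHOLD = 0.8      # assumed similarity threshold for the two cameras

def image_similarity(img1, img2):
    """Rough similarity score via histogram correlation (one plausible metric)."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    h1 = cv2.calcHist([g1], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([g2], [0], None, [64], [0, 256])
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

def detect_occlusion(sensor, cam1, cam2, projector, occluded):
    d = sensor.read_distance()                     # S2002
    if d <= DIST_THRESHOLD_M:                      # S2003: occluder near the lens
        blocked = True
    else:                                          # S2006-S2008: compare the two views
        blocked = image_similarity(cam1.capture(), cam2.capture()) < SIM_THRESHOLD
    if blocked and not occluded:                   # S2004/S2005: newly blocked
        projector.show_prompt("Please remove the object in front of the projector")
        projector.set_screen(on=False)             # screen-off protection
    elif not blocked and occluded:                 # S2009/S2010: occluder removed
        projector.set_screen(on=True)              # resume projection
    return blocked                                 # becomes the new occlusion flag
```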
  • FIG. 21 provides a third projection detection method, which is executed by the controller of the projection device and includes the following steps:
  • Step S2101: in response to a power-on command of the projection device, start the projection device.
  • Step S2102: control the distance sensor to calculate a distance detection value.
  • Step S2103: judge whether the distance detection value is greater than a distance threshold. If the distance detection value is greater than the distance threshold, execute step S2104; if the distance detection value is less than or equal to the distance threshold, execute steps S2107 to S2109.
  • Step S2104: judge whether the distance detection value is equal to the projection distance.
  • the projection distance is the distance between the lens/first plane and the projection medium. If the distance detection value is greater than the distance threshold but smaller than the projection distance, perform step S2105; if the distance detection value is equal to the projection distance, perform step S2106.
  • Step S2105: determine that the projection device is in the obstructed state, and prompt removal of the obstruction.
  • Step S2106: determine that the projection device is in the no-obstruction state; no removal prompt is given.
  • Step S2107: determine that the projection device is in the obstructed state, prompt removal of the obstruction, and control the optical machine to stop projecting the projection content to the projection surface.
  • Step S2108: re-detect the blocking state of the projection device at preset time intervals.
  • Step S2109: if the blocking state of the projection device is detected to have changed to the no-obstruction state, control the optical machine to project the projection content to the projection surface.
  • for the projection device structure shown in Figure 14, the distance axis is divided into three intervals according to the distance threshold and the projection distance. In interval A, corresponding to (0, distance threshold], the occluder is so close to the lens that the projected light can easily burn it; when the distance detection value falls in interval A, the device not only prompts removal of the occluder but also applies screen-off protection. In interval B, corresponding to (distance threshold, projection distance), the occluder is within a safe range and will not be burned by the projection light, so only a removal prompt is needed, without turning off the screen. When the distance detection value is equal to the projection distance, the projection light reaches the projection medium directly and encounters no occluder while the projection medium displays the media data, so no removal prompt is needed.
  • This embodiment can detect whether an occluder is present according to the interval into which the distance detection value falls, and determine the response measures accordingly, as sketched below.
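A minimal sketch of the interval logic; dist_threshold, projection_distance and the tolerance eps are assumed parameters:

```python
def classify_distance(d, dist_threshold, projection_distance, eps=0.01):
    if d <= dist_threshold:
        return "interval A"    # prompt removal and turn the screen off
    if abs(d - projection_distance) <= eps:
        return "no occluder"   # light reaches the projection medium directly
    return "interval B"        # safe range: prompt removal only
```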
  • the position distribution of the lens 230, the distance sensor and at least one image acquisition device on the first plane can be set as needed, and a corresponding obstruction detection and response mechanism configured accordingly, so the implementation is not limited to the examples in this application.
  • the software and hardware configuration and functions of the projection device are not limited.
  • the solution of this application is applicable to different types of projection devices, including projectors with telephoto micro-projection characteristics.
  • the projection medium mentioned in this application refers to the carrier that receives the projection and displays the projection picture, such as a wall, a fixed or movable screen, or an electronic device with display capability, such as a computer.
  • the projection device and the obstacle avoidance projection detection method provided by this application can also realize an anti-eye (eye protection) function.
  • when the anti-eye switch is turned on, the user is reminded to leave the current area.
  • the controller can also control the user interface to reduce the display brightness, so as to prevent the laser from damaging the user's eyesight.
  • FIG. 22 shows a schematic diagram of a signaling interaction sequence of a projection device implementing the anti-eye function according to another embodiment of this application.
  • when the projection device is configured in children's viewing mode, the controller automatically turns on the anti-eye switch.
  • in some embodiments, the controller controls the projection device to turn on the anti-eye switch.
  • when the data collected by the time-of-flight (TOF) sensor, camera devices and other components triggers any preset threshold condition, the controller controls the user interface to reduce the display brightness, display prompt information, and reduce the transmission power, brightness and intensity of the optical machine, in order to protect the user's eyesight.
  • the controller of the projection device can control the calibration service to send signaling to the time-of-flight sensor (step S2201) to query the current device status of the projection device, after which the controller receives data fed back by the time-of-flight sensor.
  • Step S2202: the correction service can send, to the process communication framework (HSP Core), signaling notifying the algorithm service to start the anti-eye process;
  • Step S2203: the process communication framework (HSP Core) calls the service capability from the algorithm library to invoke the corresponding algorithm services, which may include, for example, a camera detection algorithm, a screenshot algorithm and a foreign object detection algorithm;
  • Step S2204: the process communication framework (HSP Core) returns the foreign object detection result to the correction service based on the above algorithm services; if the returned result reaches a preset threshold condition, the controller controls the user interface to display prompt information and reduce the display brightness. The signaling timing is shown in Figure 22.
  • when the anti-eye switch of the projection device is turned on and a user enters a preset specific area, the projection device automatically reduces the intensity of the laser light emitted by the optical machine, reduces the display brightness of the user interface, and displays safety prompt information.
  • the projection device's control of the above anti-eye function can be realized in the following ways:
  • based on the projection picture acquired by the camera, the controller uses an edge detection algorithm to identify the projection area of the projection device; when the projection area is displayed as a rectangle, the controller obtains the coordinate values of the four vertices of the rectangular projection area through a preset algorithm;
  • a perspective transformation method can then be used to correct the projection area into a rectangle, and the difference between the rectangle and the projection screenshot can be calculated to determine whether there are foreign objects in the display area; if the judgment result is that foreign objects exist, the projection device automatically triggers the anti-eye function.
  • the difference between the camera content of the current frame and that of the previous frame can be used to determine whether a foreign object has entered the area outside the projection range; if it is judged that a foreign object has entered, the projection device automatically triggers the anti-eye function.
  • the projection device can also use a time-of-flight (ToF) camera or time-of-flight sensor to detect real-time depth changes in a specific area; if the depth value changes beyond a preset threshold, the projection device automatically triggers the anti-eye function.
  • FIG. 27 provides a schematic flow diagram of the projection device implementing the anti-eye algorithm.
  • the projection device judges whether the anti-eye function needs to be enabled based on analysis of the collected time-of-flight data, screenshot data and camera data.
  • when enabling is required, the projection device automatically activates the anti-eye function to reduce the intensity of the laser light emitted by the optical machine, reduce the display brightness of the user interface, and display safety reminder information.
  • the projection device performs additive color mode (RGB) difference analysis on the collected screenshot data; if the additive color difference is greater than a preset threshold Y, it can be determined that there is a foreign object in the specific area of the projection device; if a user within that specific area is at risk of laser damage to their eyesight, the projection device automatically activates the anti-eye function, reduces the intensity of the emitted laser light, reduces the display brightness of the user interface, and displays the corresponding safety reminder information.
  • S2702-2: acquire projection coordinates according to the collected camera data; if the acquired projection coordinates are in the projection area, perform S2701-3; if the acquired projection coordinates are in the extended area, likewise perform S2701-3;
  • the projection device obtains projection coordinates from the collected camera data, determines the projection area of the projection device from those coordinates, and further analyzes the additive color mode (RGB) difference within the projection area; if the additive color difference is greater than the preset threshold Y, it can be determined that there is a foreign object in the specific area of the projection device, and a user in that area risks laser damage to their eyesight, so the projection device automatically activates the anti-eye function, reduces the intensity of the emitted laser light, reduces the user-interface display brightness, and displays the corresponding safety prompt information.
  • if the acquired projection coordinates are in the extended area, the controller can still perform additive color mode (RGB) difference analysis in the extended area; if the additive color difference is greater than the preset threshold Y, it can be determined that there is a foreign object near the projection device, and a user in that specific area risks eyesight damage from the laser emitted by the projection device.
  • the projection device then automatically activates the anti-eye function, reduces the intensity of the emitted laser light, reduces the brightness of the user-interface display, and displays the corresponding safety prompt information, as shown in Figure 27.
  • FIG. 23 shows a schematic diagram of a signaling interaction sequence of a projection device implementing a display image correction function according to another embodiment of this application.
  • the projection device can monitor device movement through a gyroscope or gyroscope sensor.
  • Step S2301: the calibration service sends signaling to the gyroscope to query the device status, and receives the signaling fed back by the gyroscope to determine whether the device has moved.
  • the display correction strategy of the projection device can be configured such that, when the gyroscope and the time-of-flight sensor change simultaneously, the projection device triggers keystone correction first; after the gyroscope data have been stable for a preset length of time, step S2302 notifies the algorithm service to start the keystone correction process, and the controller starts and triggers keystone correction. The controller can also configure the projection device not to respond to commands issued by remote-control buttons while keystone correction is in progress; to support keystone correction, the projection device plays a pure white chart card.
  • the keystone correction algorithm can construct, based on the binocular cameras, the transformation matrix between the projection surface and the optical-machine coordinate system in the world coordinate system; combined with the optical-machine intrinsic parameters, it then calculates the homography between the projection picture and the played chart card, and uses this homography to realize arbitrary shape conversion between the projected picture and the chart card.
  • the correction service sends signaling informing the algorithm service to start the keystone correction process to the process communication framework (HSP Core), and the process communication framework further sends service capability call signaling to the algorithm service to obtain the corresponding capability algorithm;
  • the algorithm service obtains and executes the camera and picture algorithm processing service and the obstacle avoidance algorithm service, and sends them to the process communication framework in the form of signaling; in some embodiments, the process communication framework executes the above algorithms and feeds the execution results back to the correction service; the execution results may include successful photographing and successful obstacle avoidance.
  • if an error is returned, the user interface is controlled to display an error return prompt and then to display the keystone correction and autofocus charts again.
  • the projection device can identify the screen and use projection transformation to correct the projection picture so that it is displayed inside the screen, achieving alignment with the screen edges.
  • the projection device can use the time-of-flight (ToF) sensor to obtain the distance between the optical machine and the projection surface, look up the best image distance in a preset mapping table based on that distance, and use an image algorithm to evaluate the clarity of the projection picture, on the basis of which the image distance can be fine-tuned.
  • the automatic keystone correction signaling sent by the correction service to the process communication framework may carry other function configuration instructions, for example control instructions such as whether to perform synchronous obstacle avoidance or whether to enter the screen.
  • the process communication framework sends service capability call signaling to the algorithm service, so that the algorithm service acquires and executes the autofocus algorithm to adjust the viewing distance between the device and the screen; in some embodiments, after the autofocus algorithm has realized the corresponding function, the algorithm service may also obtain and execute an automatic screen-entry algorithm, which may include a keystone correction algorithm.
  • the projection device automatically enters the screen, and the algorithm service can set the coordinates of eight positions between the projection device and the screen; the autofocus algorithm is then run again to adjust the viewing distance between the projection device and the screen; finally, the correction result is fed back to the correction service, and step S2304 controls the user interface to display the correction result, as shown in FIG. 23.
  • using the autofocus algorithm, the projection device obtains the current object distance with its configured laser ranging module to calculate the initial focal length and search range; the projection device then drives the camera to take pictures and uses a corresponding algorithm to perform sharpness evaluation.
  • the projection device searches for the best possible focal length based on the search algorithm, then repeats the above photographing and sharpness-evaluation steps, and finally finds the optimal focal length through sharpness comparison to complete autofocus.
  • the controller detects whether the autofocus function is enabled; if not, the controller ends the autofocus service; if so, it executes S2403;
  • the middleware obtains the time-of-flight (TOF) detection distance;
  • when the autofocus function is turned on, the projection device obtains the detection distance of the time-of-flight (TOF) sensor through the middleware for calculation;
  • the middleware sets the focal length on the optical machine: the controller queries the preset mapping table according to the obtained distance to get the approximate focal length of the projection device, and the middleware then sets the acquired focal length on the optical machine of the projection device.
  • after the optical machine emits laser light at the above focal length, the camera executes a photographing command; the controller judges whether the projection device is in focus according to the obtained photograph and an evaluation function; if the result meets the preset completion conditions, the autofocus process ends;
  • otherwise, the middleware fine-tunes the focal length parameters of the optical machine, for example stepping the focal length gradually by a preset step size and setting the adjusted focal length on the optical machine again; photographing and sharpness evaluation are thus repeated until the optimal focal length is found through sharpness comparison, completing autofocus, as shown in Figure 24. A sketch of the whole loop follows.
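The sketch below combines the coarse table lookup and the step-wise fine tuning. It is illustrative only: tof, lens, camera and focus_table are assumed hardware/middleware interfaces, and variance-of-Laplacian is one common sharpness metric, not necessarily the evaluation function of the patent.

```python
import cv2

def sharpness(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()   # higher = sharper

def autofocus(tof, lens, camera, focus_table, step=5, max_iters=20):
    distance = tof.read_distance()                 # TOF detection distance
    focal = focus_table.lookup(distance)           # approximate focal length
    lens.set_focal(focal)
    best_focal, best_score = focal, sharpness(camera.capture())
    for direction in (+1, -1):                     # fine-tune around the coarse value
        f = focal
        for _ in range(max_iters):
            f += direction * step                  # preset step size
            lens.set_focal(f)
            score = sharpness(camera.capture())    # repeat photo + evaluation
            if score <= best_score:
                break                              # sharpness stopped improving
            best_focal, best_score = f, score
    lens.set_focal(best_focal)                     # optimal focal length found
    return best_focal
```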
  • the projection device provided by this application can implement the display correction function through a keystone correction algorithm.
  • by calibration, two sets of extrinsic parameters, between the two cameras and between the camera and the optical machine, can be obtained, namely the rotation and translation matrices; a specific checkerboard chart is then played through the optical machine of the projection device, and the depth values of the projected checkerboard corner points are calculated, for example by solving the xyz coordinate values through the translation relationship between the binocular cameras and the principle of similar triangles; the projection surface is then fitted based on these xyz values, and its rotation and translation relationships with respect to the camera coordinate system are obtained, which may specifically include the pitch relationship (Pitch) and the yaw relationship (Yaw).
  • the Roll parameter value can be obtained from the gyroscope configured on the projection device to assemble the complete rotation matrix, and finally the extrinsic parameters from the projection surface to the optical-machine coordinate system in the world coordinate system are calculated, as sketched below.
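A hedged sketch of this pose step, assuming a least-squares plane fit and one particular pitch/yaw/roll axis convention (the patent does not fix these conventions):

```python
import numpy as np

def fit_plane_normal(points_xyz):
    """Fit z = a*x + b*y + c to the checkerboard corner points; return unit normal."""
    A = np.c_[points_xyz[:, 0], points_xyz[:, 1], np.ones(len(points_xyz))]
    (a, b, _c), *_ = np.linalg.lstsq(A, points_xyz[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n)

def rotation_matrix(normal, roll_rad):
    """Pitch/yaw from the fitted plane normal, roll from the gyroscope."""
    pitch = np.arcsin(np.clip(normal[1], -1.0, 1.0))
    yaw = np.arctan2(normal[0], normal[2])
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll_rad), np.sin(roll_rad)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx   # complete rotation matrix
```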
  • the flow of the projection device implementing the keystone correction and obstacle avoidance algorithms includes the following steps: S2501, the controller of the projection device obtains the depth values of the points corresponding to the pixels of the photograph, i.e. the coordinates of the projection points in the camera coordinate system;
  • the middleware obtains the relationship between the optical-machine coordinate system and the camera coordinate system;
  • the controller calculates the coordinate values of the projection points in the optical-machine coordinate system:
  • the homography matrix can be calculated:
  • the controller judges whether an obstacle exists based on the above acquired data; if so, it executes S2508, otherwise S2509;
  • the controller can obtain the feature points of the QR code, for example:
  • the obstacle avoidance algorithm uses the OpenCV algorithm library to complete contour extraction of foreign objects during the rectangle-selection step of the keystone correction flow, and avoids the obstacle when selecting the rectangle, thereby realizing the projection obstacle avoidance function; a sketch follows.
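As a rough illustration, the homography and the foreign-object contour extraction can be expressed with standard OpenCV calls; the matched point pairs and the binarized mask are assumed inputs prepared by earlier steps:

```python
import cv2
import numpy as np

def projector_camera_homography(pts_projector, pts_camera):
    """pts_*: N x 2 arrays of corresponding points (e.g. checkerboard corners)."""
    H, _ = cv2.findHomography(np.float32(pts_projector), np.float32(pts_camera), cv2.RANSAC)
    return H

def obstacle_contours(binary_mask):
    """Foreign-object outlines that the rectangle selection must avoid."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```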
  • a schematic flow of the projection device implementing the screen-entry algorithm includes the following steps:
  • the middleware obtains the QR code chart card captured by the camera:
  • the controller further acquires the coordinates of the preset chart card in the optical-machine coordinate system:
  • based on the above homography, the controller identifies the coordinates of the four vertices of the curtain captured by the camera:
  • the screen-entry algorithm is based on the OpenCV algorithm library; it can identify and extract the largest black closed rectangular outline and judge whether it has a 16:9 aspect ratio; a specific chart card is then projected and photographed with the camera, and finer corner points are extracted from the photos.
  • these corner points are used to calculate the homography between the projection surface (curtain) and the optical-machine chart card; through this homography, the four vertices of the screen are converted to the optical-machine pixel coordinate system, and the optical-machine chart card is mapped onto the four vertices of the screen; a sketch of the screen detection follows.
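A sketch of the screen detection under stated assumptions: a dark screen frame, an assumed binarization threshold, and an approximate 16:9 check with tolerance tol:

```python
import cv2

def find_screen_vertices(photo_bgr, aspect=16 / 9, tol=0.1):
    gray = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)  # dark frame -> white
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):  # largest first
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):       # closed rectangle
            x, y, w, h = cv2.boundingRect(approx)
            if h > 0 and abs(w / h - aspect) / aspect < tol:       # roughly 16:9
                return approx.reshape(4, 2)                        # four vertices
    return None
```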
  • a telephoto micro-projection TV can be moved flexibly, and the projection picture may be distorted after each displacement.
  • for the above problems, the projection device and the obstacle avoidance projection detection method provided by this application, through display control based on geometric correction, can complete correction automatically, including automatic keystone correction, automatic screen entry, automatic obstacle avoidance, automatic focusing, anti-eye protection and other functions.
  • the projection device and the obstacle-avoiding projection method provided by this application project the projection image onto a non-foreign-object area after recognizing a foreign object, thereby realizing obstacle-avoiding projection.
  • the device 2 for projection can perform obstacle detection and can thus identify the screen, using projection transformation to correct the projected image so that it is displayed inside the screen, achieving alignment with the screen edges.
  • if the device 2 used for projection does not have an obstacle avoidance function, obstacle detection failures will occur. Even if the device 2 has an obstacle avoidance function, the obstacle detection process is strongly affected by environmental changes; for example, when there are light spots (bright spots and/or dark spots) in the projection area, the projection device may misidentify a light spot as an obstacle, leading to unstable detection results and an unnecessarily small projection area after obstacle detection, which does not meet the user's projection requirements and degrades the user experience.
  • the apparatus 2 for projection may include an optical machine 220, a camera 700 and a controller 500.
  • the optical machine 220 is used to project the playing content to a projection area on a projection surface;
  • the projection surface may be a wall or a curtain.
  • the camera 700 is used to capture the projected image on the projection surface.
  • FIG. 28 shows a schematic flowchart of obstacle avoidance projection performed by a projection device in an embodiment of the present application.
  • in response to a received projection command, the controller of the device 2 for projection can acquire the projected image, automatically perform obstacle detection on it, and project the projection image only after determining that there are no obstacles in the projection area, thereby realizing the automatic obstacle avoidance function. That is to say, if there is an obstacle in the projection area, the projection area of the device 2 before the obstacle avoidance process differs from the projection area after the obstacle avoidance process is completed.
  • the projection device 2 receives a projection instruction and, in response, activates the automatic obstacle avoidance function.
  • the projection instruction refers to a control instruction used to trigger the projection device 2 to automatically perform an obstacle avoidance process.
  • the projection instruction may be an instruction actively input by the user. For example, after the device 2 for projection is powered on, it can project an image onto the projection area in the projection plane; at this time, the user can press the preset automatic obstacle avoidance switch on the projection device 2, or the automatic obstacle avoidance button on its remote control, so that the projection device 2 turns on the automatic obstacle avoidance function and automatically detects obstacles in the projection area.
  • in response to the projection command, the controller controls the optical machine 220 to project a white chart card to the projection area on the projection surface.
  • the camera 700 is then controlled to capture an image of the projection surface. Because the image area of the projection-surface image captured by the camera 700 is larger than that of the projection area, in order to obtain an image of the projection area itself (the projection image), the controller can calculate, from the image captured by the camera 700, the coordinate values of the four corner points and the four edge midpoints of the projection area in the coordinate system of the optical machine 220, and fit a plane from these coordinate values to obtain the angular relationship between the projection surface and the optical machine 220.
  • the corresponding coordinates of the four corner points and the four edge midpoints in the world coordinate system of the projection surface are then obtained from this angular relationship.
  • the coordinate values of the four corner points and the four edge midpoints of the projection area in the optical-machine coordinate system are transformed into corresponding coordinate values in the camera coordinate system through the homography matrix.
  • the position and area of the projection region within the projection-surface image are determined from the coordinate values of the four corner points and the four edge midpoints in the camera coordinate system; a sketch of the point mapping follows.
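A minimal sketch of this point mapping, assuming the homography H between the optical-machine plane and the camera image is already available:

```python
import cv2
import numpy as np

def to_camera_coords(points_om, H):
    """points_om: 8 x 2 array (four corners + four edge midpoints) in
    optical-machine coordinates; returns the corresponding camera pixels."""
    pts = np.float32(points_om).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```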
  • during obstacle contour detection on the projected image, the controller uses an image contour detection algorithm to obtain multi-contour area information from the projected image.
  • the multi-contour area information includes the obstacle contour coordinate set.
  • the set of obstacle outline coordinates is used to represent a collection of multiple obstacle outline coordinates.
  • the contour level corresponding to the obstacle may be represented by contour parameters.
  • the contour parameters include the index numbers of the next contour, the previous contour, the child contour and the parent contour; if a contour has no corresponding index number for one of these, that index number is assigned a negative value (for example, -1).
  • if contour A contains contour B, contour C and contour D, then contour A is the parent contour, and contour B, contour C and contour D are all child contours of contour A.
  • if the position of contour C is above contour B, then contour C is the previous contour of contour B and, similarly, contour B is the next contour of contour C.
  • Fig. 29 shows a schematic diagram of obstacle sets and outline levels in the embodiment of the present application.
  • the obstacle set includes five closed contours: contour 1, contour 2, contour 2a, contour 3 and contour 4.
  • contour 1 and contour 2 are the outermost contours, that is, they share the same level relationship, which is set to level 0.
  • contour 2a is a child contour of contour 2, one level down, which is set as level 1.
  • contour 3 and contour 4 are child contours of contour 2a, that is, contour 3 and contour 4 are at the same level, set as level 2. Therefore, the contour parameters of contour 2 are characterized as [-1, 1, 2a, -1].
  • the controller filters the contours according to the contour level to obtain the obstacle set, where the obstacle set includes at least one obstacle whose contour level is the outermost layer. That is to say, if the contour relationships among multiple obstacles involve enclosure or embedding, only the obstacle corresponding to the outermost contour needs to be extracted.
  • the reason is that, in the process of implementing the obstacle avoidance function, if the obstacle corresponding to the outermost contour is avoided, any obstacle corresponding to an inner contour within that outermost contour is avoided as well.
  • the contours with level 0, namely the outermost contours, are selected from the five closed contours: contour 1, contour 2, contour 2a, contour 3 and contour 4.
  • an obstacle set is generated from the outermost contours; here, the obstacle set includes contour 1 and contour 2.
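This hierarchy matches the [next, previous, first_child, parent] rows returned by OpenCV's cv2.findContours with RETR_TREE, where -1 means "none", the same index-number convention described above. The outermost-contour filtering can therefore be sketched as:

```python
import cv2

def outermost_contours(binary_mask):
    contours, hierarchy = cv2.findContours(binary_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return []
    # level-0 (outermost) contours are those without a parent
    return [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]
```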
  • the controller may update the obstacle set according to the image area of the projected image, so as to determine the non-obstacle area according to the updated obstacle set.
  • determining the non-obstacle area specifically includes: S3001, when updating the obstacle set according to the image area of the projected image, the controller may acquire the center coordinates, width and height corresponding to each obstacle in the obstacle set.
  • the obstacle set includes contour 1 and contour 2 .
  • the region area corresponding to contour 1 and the region area corresponding to contour 2 are calculated.
  • for example, the region of contour 1 occupies 5 pixels, the region of contour 2 occupies 30 pixels, and the area threshold is 25 pixels; the area of the region corresponding to contour 1 is thus smaller than the area threshold.
  • contour 1 is therefore deleted from the obstacle set, completing the update of the obstacle set; a sketch of this filtering follows.
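A minimal sketch of the area filtering, using the 25-pixel example threshold from above; cv2.contourArea stands in for the pixel count of the contour region:

```python
import cv2

AREA_THRESHOLD = 25  # pixels, example value used above

def filter_small_obstacles(obstacle_contours):
    return [c for c in obstacle_contours if cv2.contourArea(c) >= AREA_THRESHOLD]
```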
  • when performing obstacle contour detection on the projected image, the controller can convert the projected image to grayscale to obtain a grayscale image, use an edge detection algorithm to extract the edge coordinates in the grayscale image, and denoise the edge coordinates to obtain denoised edge coordinates.
  • a threshold binarization algorithm is then used to segment the denoised image: a foreground image is generated from the pixels in the grayscale image whose color value is greater than the color threshold, and a background image is generated from the pixels whose color value is less than or equal to the color threshold, where the color value of a pixel is a composite attribute used to characterize the pixel's features.
  • for example, the color value of a pixel may be calculated from the pixel's RGB values, brightness, grayscale and so on.
  • the images corresponding to obstacles are distributed over the foreground image, while the background image is the background picture of the projected image; therefore, contour detection of obstacle targets and light-spot targets can be performed on the foreground image.
  • in the denoising process, the controller first performs a dilation operation on the edge coordinates: the pixel coordinates in the edge coordinates are read in turn, and a structuring element and a convolution-kernel threshold are set, the structuring element being, for example, a 3×3 convolution kernel. Convolution is performed between all pixel coordinates and the convolution kernel to obtain a first convolution result; if the first convolution result is greater than the convolution threshold, the pixel is set to 1, otherwise to 0.
  • thin image edges can be closed by the dilation operation.
  • the structuring elements may have different sizes and ratios, such as 3×3 or 5×5; this application takes a 3×3 structuring element and 0/1 pixel assignment only as an example, and the structuring element and pixel assignment can be set according to the specific calculation logic and algorithm parameters.
  • the controller can then apply an erosion operation to the dilated image. Specifically, convolution is performed between the dilated pixel coordinates and the convolution kernel to obtain a second convolution result; a dilated pixel is kept as 1 only when the pixels in the second convolution result are all 1, and is otherwise set to 0. This removes noise stains from the dilated pixel coordinates while smoothing the boundary of larger objects without changing their area in detail; a sketch of the whole preprocessing chain follows.
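A hedged sketch of this preprocessing chain; the Canny parameters and the color threshold are assumed values, and the 3×3 kernel matches the structuring element above:

```python
import cv2
import numpy as np

def preprocess(projection_bgr, color_threshold=127):
    gray = cv2.cvtColor(projection_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                       # edge coordinates
    kernel = np.ones((3, 3), np.uint8)                     # 3x3 structuring element
    dilated = cv2.dilate(edges, kernel)                    # close thin edges
    denoised = cv2.erode(dilated, kernel)                  # remove noise stains
    _, foreground = cv2.threshold(gray, color_threshold, 255, cv2.THRESH_BINARY)
    return denoised, foreground                            # edges + foreground image
```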
  • the generated obstacle set includes not only the obstacles between the optical machine and the projection surface that block the projection, but also the light spots on the projection surface.
  • the corresponding obstacle contour coordinate set therefore includes not only obstacle contour coordinates but also light-spot contour coordinates, which makes the resulting projection area too small. To avoid the influence of light spots on obstacle contour detection, the spot targets can be obtained from the obstacle contour coordinate set and the spot contour coordinate set.
  • the light spot includes a bright spot.
  • a bright spot is formed on the projection surface by refraction of light and presents as a luminous pattern.
  • the set of light-spot contour coordinates includes the bright-spot contour coordinates. Since the brightness of a bright spot is usually greater than a certain value, the controller can identify bright spots based on the color values of the pixels in the foreground image.
  • when performing contour detection of spot targets on the projected image, the controller can obtain the foreground image converted into grayscale, traverse the color value of each pixel in the foreground image, and compare each pixel's color value with a preset brightness threshold. From the pixels in the foreground image whose color value is greater than the preset brightness threshold, a bright-spot image is obtained; the bright-spot image is denoised, and the contour detection algorithm is applied to the denoised bright-spot image to obtain the bright-spot contour coordinates of the highest contour level in the bright-spot image.
  • for the controller's denoising of the bright-spot image, reference may be made to the aforementioned denoising of the obstacle contour coordinates during obstacle contour detection; for the detection of the highest-level contours, reference may be made to the above process of detecting the highest-level contours corresponding to obstacles, and details are not repeated here. A sketch of the bright-spot branch follows.
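A sketch of the bright-spot branch; the brightness threshold and the opening-based denoising are assumed choices:

```python
import cv2
import numpy as np

def bright_spot_contours(foreground_gray, brightness_threshold=230):
    _, spots = cv2.threshold(foreground_gray, brightness_threshold, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    spots = cv2.morphologyEx(spots, cv2.MORPH_OPEN, kernel)        # denoise speckle
    contours, _ = cv2.findContours(spots, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours                                                # bright-spot outlines
```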
  • the light spots also include dark spots, which are formed on the projection surface by occlusion of light and appear as shadows.
  • the set of light-spot contour coordinates includes the dark-spot contour coordinates, and the controller can perform spot-target contour detection on the projected image to obtain the dark-spot contour coordinates corresponding to the dark spots in the projected image.
  • when performing contour detection of spot targets on the projected image, the controller can acquire the HSV projection image converted from the projected image into the HSV color space.
  • Each pixel in the HSV projection image corresponds to a brightness parameter V, a hue parameter H, and a saturation parameter S.
  • the controller can use the Otsu algorithm (maximum between-class variance algorithm) or an iterative method to calculate the shadow threshold of the HSV projection image from the brightness parameter V, hue parameter H and saturation parameter S of each pixel, and use a difference-value algorithm to calculate the difference-value component of each pixel from its brightness, hue and saturation parameters.
  • the controller traverses each pixel, obtains the pixels whose difference-value component M is greater than the shadow threshold, and applies a morphological closing operation to those pixels to obtain a dark-spot image; the dark-spot image is denoised, and the contour detection algorithm is applied to the denoised dark-spot image to obtain the dark-spot contour coordinates of the highest contour level in the dark-spot image.
  • for the controller's denoising of the dark-spot image, reference may be made to the aforementioned denoising of the obstacle contour coordinates during obstacle contour detection; for the detection of the highest-level contours, reference may be made to the above process of detecting the highest-level contours corresponding to obstacles, and details are not repeated here. A sketch of the dark-spot branch follows.
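A hedged sketch of the dark-spot branch. The difference component M is realized here as S minus V, one plausible shadow cue; the text states only that M is computed from the three HSV channels:

```python
import cv2
import numpy as np

def dark_spot_contours(projection_bgr):
    hsv = cv2.cvtColor(projection_bgr, cv2.COLOR_BGR2HSV)
    _h, s, v = cv2.split(hsv)
    m = cv2.subtract(s, v)                                 # assumed difference component M
    # Otsu's method plays the role of the maximum between-class variance threshold
    _, shadow = cv2.threshold(m, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    shadow = cv2.morphologyEx(shadow, cv2.MORPH_CLOSE, kernel)   # morphological closing
    contours, _ = cv2.findContours(shadow, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours                                              # dark-spot outlines
```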
  • FIG. 32 is a schematic flowchart of updating the obstacle outline coordinate set in the embodiment of the present application.
  • after obtaining the contour coordinates of bright spots and dark spots, the controller can call the obstacle contour coordinate set and perform the following steps (a sketch of the coincidence computation follows the list):
  • S3202: obtain a bright-spot target according to the bright-spot contour coordinate set;
  • S3203: calculate the first coincidence degree of the bright-spot target relative to the obstacle target.
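A minimal sketch of the coincidence degree, taken here as the fraction of the spot's filled contour that falls inside the obstacle's filled contour (an assumed interpretation of "coincidence"):

```python
import cv2
import numpy as np

def coincidence(spot_contour, obstacle_contour, image_shape):
    spot = np.zeros(image_shape[:2], np.uint8)
    obs = np.zeros(image_shape[:2], np.uint8)
    cv2.drawContours(spot, [spot_contour], -1, 255, thickness=cv2.FILLED)
    cv2.drawContours(obs, [obstacle_contour], -1, 255, thickness=cv2.FILLED)
    overlap = cv2.countNonZero(cv2.bitwise_and(spot, obs))
    spot_area = cv2.countNonZero(spot)
    return overlap / spot_area if spot_area else 0.0
```

If this coincidence degree exceeds the preset threshold, the detected "obstacle" is in fact a light spot, and its contour coordinates are deleted from the obstacle contour coordinate set.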
  • the controller may determine the non-obstacle area in the projected image based on the updated obstacle outline coordinate set.
  • the non-obstacle area is an area in the projected image other than the area corresponding to the obstacle.
  • the controller obtains the obstacle contour coordinates corresponding to each obstacle in the obstacle contour coordinate set, together with the image coordinate set corresponding to the projected image, removes the obstacle contour coordinates from the image coordinate set, and determines the non-obstacle area from the image coordinate set after the removal.
  • the non-obstacle area is a polygonal area.
  • the controller can extract a pre-projection area from the non-obstacle area, the pre-projection area being a rectangular area within the non-obstacle area; the controller then calculates the projection area on the projection surface from the extracted pre-projection area and the shooting parameters of the camera, and controls the optical machine to project the playback content into that projection area.
  • Fig. 33 shows a schematic diagram of a rectangular grid and a non-obstacle area in the embodiment of the present application.
  • the controller may obtain corner coordinates of the projected image, where the corner coordinates are coordinates corresponding to four vertices and/or midpoints of four sides of the projected image.
  • a rectangular grid is constructed from the corner coordinates, the grid containing M×N cells. All cells are then traversed and the containment relationship between each cell and the non-obstacle area is judged: if a cell lies inside the non-obstacle area, its grid identifier is assigned 1; otherwise it is assigned 0.
  • the controller may search the rectangular grid for a rectangular region formed by cells whose grid identifier is 1 and determine that region as the pre-projection area. Then, according to the shooting parameters of the camera 700, the pre-projection area in the projection image is converted to the projection area on the projection surface, and the optical machine 220 is controlled to project the playback content into that area, realizing the automatic obstacle avoidance function.
  • specifically, the controller searches the rectangular grid for the largest rectangular region formed by cells whose grid identifier is 1, that is, it obtains the largest rectangular area within the non-obstacle area.
  • all rectangular regions formed by cells whose grid identifier is 1 are traversed to obtain the number of pixels in each rectangular region.
  • the rectangular region with the largest number of pixels is extracted, and the pre-projection area is determined from the boundary coordinates of that region; a sketch of the rectangle search follows.
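A sketch of the rectangle search on the 0/1 grid. The histogram-stack solution to the maximal-rectangle problem is a standard technique substituted here for the unspecified search; the grid resolution M×N is an assumed parameter:

```python
import numpy as np

def largest_rectangle(grid):
    """grid: 2-D array of 0/1 cell flags; returns (top, left, height, width)."""
    best = (0, 0, 0, 0)                                   # best rectangle so far
    heights = np.zeros(grid.shape[1], dtype=int)
    for r, row in enumerate(grid):
        heights = np.where(row == 1, heights + 1, 0)      # column histograms
        stack = []                                        # (start column, bar height)
        for c, h in enumerate(np.append(heights, 0)):     # sentinel bar of height 0
            start = c
            while stack and stack[-1][1] >= h:
                start, bar_h = stack.pop()
                if bar_h * (c - start) > best[2] * best[3]:
                    best = (r - bar_h + 1, start, bar_h, c - start)
            stack.append((start, h))
    return best
```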
  • to prevent the pre-projection area from being too small and affecting the viewing experience, after obtaining the rectangular region in the non-obstacle area the controller can calculate the ratio of the rectangular region's area to the image area of the projected image and set an area-ratio threshold; if the ratio is greater than the threshold, the rectangular region satisfies the area condition and is determined as the pre-projection area.
  • when the controller determines the pre-projection area, if several equally large rectangular regions are found, the rectangular region is extracted using the center point of the projected figure as the extension baseline, so that the area ratio is calculated from the extracted region.
  • if the area condition is not satisfied, the controller executes the process of updating the non-obstacle area and extracts the pre-projection area again from the updated non-obstacle area, so as to determine the projection area on the projection surface from the pre-projection area.
  • the controller may optimize the picture quality of the projected image by adjusting the brightness of the projected picture, so that the brightness distribution of the adjusted projected picture is more uniform.
  • the controller may perform the following steps to adjust the projected image:
  • the controller can obtain the HSV projection image converted from the projection picture into the HSV color space, where each pixel in the HSV projection image corresponds to a brightness parameter V, a hue parameter H and a saturation parameter S;
  • the controller can perform Gaussian convolution on the brightness parameter of each pixel in the HSV projection image to obtain the brightness component corresponding to each pixel;
  • S3403: the controller can perform gamma-function processing on the brightness component to obtain the target brightness parameter;
  • the controller recomposes the HSV projection image from the target brightness parameter, the hue parameter H and the saturation parameter S, thereby adjusting the brightness of the HSV projection image; a sketch of these steps follows.
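A literal, hedged reading of these steps; the kernel size and gamma value are assumed parameters:

```python
import cv2
import numpy as np

def adjust_brightness(projection_bgr, gamma=0.8, ksize=31):
    hsv = cv2.cvtColor(projection_bgr, cv2.COLOR_BGR2HSV)        # convert to HSV
    h, s, v = cv2.split(hsv)
    component = cv2.GaussianBlur(v, (ksize, ksize), 0)           # brightness component
    target = np.power(component / 255.0, gamma) * 255.0          # gamma processing
    out = cv2.merge([h, s, target.astype(np.uint8)])             # recompose HSV
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)
```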
  • alternatively, the controller can acquire the grayscale image converted from the projected image into grayscale space and calculate the average brightness of the grayscale image from the brightness values of its pixels. The controller may then divide the grayscale image into a preset number of image regions, calculate the average brightness of each region from the brightness values of its pixels, and adjust the brightness value of each pixel in a region based on the difference between the global average brightness and that region's average brightness; the adjustment amounts for the pixels within a region may be the same or different, so that after adjustment the average brightness of the pixels in each region equals the average brightness of the grayscale image.
  • the controller can also use an adaptive local histogram equalization algorithm to optimize the projection image directly, making the brightness distribution of the optimized image more uniform and improving the picture quality; see the sketches below.
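Minimal sketches of both alternatives; the block count, clip limit and tile count are assumed parameters:

```python
import cv2
import numpy as np

def blockwise_mean_adjust(gray, blocks=4):
    """Shift each region's mean brightness to the global mean of the image."""
    out = gray.astype(np.float32)
    global_mean = out.mean()
    bh, bw = gray.shape[0] // blocks, gray.shape[1] // blocks
    for i in range(blocks):
        for j in range(blocks):
            region = out[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            region += global_mean - region.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

def clahe_optimize(gray, clip=2.0, tiles=8):
    """Adaptive local histogram equalization (CLAHE) as offered by OpenCV."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    return clahe.apply(gray)
```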
  • this application proposes an obstacle avoidance projection method, which is applied to a projection device.
  • the projection device includes an optical machine, a camera, and a controller.
  • the obstacle avoidance projection method includes:
  • the projection image on the projection surface captured by the camera is obtained; contour detection of obstacle targets and spot targets is performed on the projection image based on color parameters, yielding the obstacle contour coordinate set and the light-spot contour coordinate set, where the color parameters include brightness parameters, hue parameters and saturation parameters; the coincidence degree of the spot target relative to the obstacle target is obtained from the obstacle contour coordinate set and the spot contour coordinate set; if the coincidence degree is greater than a preset coincidence threshold, the obstacle contour coordinates corresponding to that obstacle target are deleted from the obstacle contour coordinate set; the non-obstacle area is determined from the updated obstacle contour coordinate set, and the optical machine is controlled to project the playback content to the projection area according to the non-obstacle area.
  • before the contour detection of obstacle targets and spot targets is performed on the projected image based on the color parameters, the projected image can be converted to grayscale to obtain a grayscale image; an edge detection algorithm is used to extract the edge coordinates in the grayscale image; the edge coordinates are denoised to obtain denoised edge coordinates; a color threshold is calculated from the color values of the pixels at the edge coordinate positions; and a foreground image is generated from the pixels in the grayscale image whose color value is greater than the color threshold, so that contour detection of obstacle targets and spot targets is performed on the foreground image.
  • the average brightness of the grayscale image can be calculated from the brightness values of its pixels; the grayscale image is divided into a preset number of image regions, and the average brightness of each region is calculated from the brightness values of its pixels; based on the difference between the average brightness of the grayscale image and that of each region, the brightness value of each pixel in the region is adjusted.
  • obtaining the grayscale image includes: converting the projected image to the HSV color space to obtain the HSV projection image; performing Gaussian convolution on the brightness parameters of the HSV projection image to obtain the brightness component; performing gamma-function processing on the brightness component to obtain the target brightness parameter; adjusting the brightness of the HSV projection image based on the target brightness parameter; and performing grayscale processing on the adjusted HSV projection image to obtain the grayscale image.
  • obtaining the obstacle contour coordinate set includes: obtaining the obstacle set from the obstacle contour coordinates, the obstacle set including at least one obstacle whose contour level is the outermost layer, the contour level representing the enclosure or embedding relationship between obstacles; obtaining the center coordinates, width and height of each obstacle in the obstacle set; calculating the obstacle area corresponding to each obstacle from its center coordinates, width and height; updating the obstacle set according to the obstacle areas; and updating the obstacle contour coordinate set according to the updated obstacle set.
  • the spot target includes a bright spot, and the spot contour coordinate set includes the bright-spot contour coordinates; contour detection of obstacle targets and spot targets on the projected image based on color parameters then includes: obtaining a bright-spot image from the pixels in the foreground image whose color value is greater than a preset brightness threshold; denoising the bright-spot image to obtain a denoised bright-spot image; and detecting the denoised bright-spot image with the contour detection algorithm to obtain the bright-spot contour coordinates in the bright-spot image.
  • the spot target includes a dark spot, and the spot contour coordinate set includes the dark-spot contour coordinates; contour detection of obstacle targets and spot targets on the projected image based on color parameters then includes: converting the projected image to the HSV color space to obtain the HSV projection image; using the maximum between-class variance algorithm to calculate the shadow threshold of the HSV projection image from the brightness, hue and saturation parameters of its pixels; using the difference-value algorithm to calculate the difference-value component of each pixel from its brightness, hue and saturation parameters; obtaining a dark-spot image from the pixels in the HSV projection image whose difference-value component is greater than the shadow threshold; and detecting the dark-spot image with the contour detection algorithm to obtain the dark-spot contour coordinates in the dark-spot image.
  • deleting the obstacle contour coordinates corresponding to the obstacle target from the obstacle contour coordinate set further includes: if the coincidence degree of a bright-spot target relative to an obstacle target is detected to be greater than the preset coincidence threshold, deleting from the obstacle contour coordinate set the obstacle contour coordinates of the obstacle targets whose coincidence degree with the bright-spot target exceeds the preset threshold; and, if the coincidence degree of a dark-spot target is detected to be greater than the preset coincidence threshold, deleting from the obstacle contour coordinate set the obstacle contour coordinates of the obstacle targets whose coincidence degree with the dark-spot target exceeds the preset threshold.
  • the step of controlling the optical machine to project the playback content to the projection area according to the non-obstacle area includes: obtaining the rectangular regions in the non-obstacle area and the number of pixels in each rectangular region; determining the pre-projection area from the boundary coordinates of the rectangular region with the largest number of pixels; calculating the projection area on the projection surface from the pre-projection area and the shooting parameters of the camera; and controlling the optical machine to project the playback content to the projection area.


Abstract

The present application discloses a projection device and an obstacle avoidance projection method. The projection device (2) includes an optical machine, a lens, a distance sensor, an image acquisition device and a controller. The method includes: starting the projection device (2) in response to a power-on instruction of the projection device (2); detecting whether an obstruction exists between the projection device (2) and the projection surface (1) according to the positional relationship of the lens, the distance sensor and the image acquisition device on a first plane and the distance detection value of the distance sensor, where the first plane is the plane of the projection device (2) that is parallel to the projection surface (1) during projection, and the projection surface (1) is used to receive and display the projection content projected by the optical machine; and, if an obstruction is detected between the projection device (2) and the projection surface (1), controlling the output of prompt information for prompting removal of the obstruction. When an obstruction is detected in front of the projection device (2), the present application can promptly prompt the user to remove it, which both prevents the projection light from scorching the obstruction and prevents the obstruction from degrading the display of the projection content.

Description

投影设备及避障投影方法
相关申请的交叉引用
本申请要求在2021年11月16日提交、申请号为202111355866.0;在2022年05月26日提交、申请号为202210590075.4;在2022年05月30日提交、申请号为202210600617.1的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及投影技术领域,尤其涉及一种投影设备及避障投影方法。
背景技术
投影设备是基于成像技术,将媒体数据投射到如墙壁、幕布、屏幕等投影介质上,使投影介质呈现媒体数据,使用者可将投影仪固定放置在指定位置,也可移动投影设备,以适应投影位置和方向的要求。
在投影设备使用过程中,第一平面的前方可能具有遮挡物,若遮挡物遮挡了投影设备的镜头,一方面会影响媒体数据的投影显示,另一方面镜头投射的光温度较高,极易导致遮挡物被高温灼伤,尤其是遮挡物燃点低时甚至可能引发火险;此外,若遮挡物遮挡了投影设备上的相关元件,例如遮挡了摄像头或距离传感器等,也会影响投影设备自身的调焦和校正,导致投影异常。
发明内容
本申请实施方式提供的投影设备,包括:镜头;光机,被配置为投射投影内容至投影面;距离传感器,被配置为检测投影面与光机之间的距离检测值;图像采集装置,被配置为拍摄投影内容的图像;
控制器,被配置为:响应于投影设备的开机指令,启动所述投影设备;根据所述镜头、所述距离传感器和所述图像采集装置在第一平面上的位置关系,以及所述距离传感器的距离检测值,检测投影设备与所述投影面之间是否存在遮挡物;其中,所述第一平面为投影时投影设备上与所述投影面平行的平面;若检测到所述投影设备与所述投影面之间存在遮挡物,控制发出用于提示移除遮挡物的提示信息。
本申请实施方式提供一种应用于投影设备的避障投影方法,所述投影设备包括光机、镜头、距离传感器、图像采集装置和控制器,所述方法包括:响应于投影设备的开机指令,启动所述投影设备;根据所述镜头、所述距离传感器和所述图像采集装置在第一平面上的位置关系,以及所述距离传感器的距离检测值,检测投影设备与投影面之间是否存在遮挡物;其中,所述第一平面为投影时投影设备上与所述投影面平行的平面,所述投影面用于接收并显示所述光机投射的投影内容;若检测到所述投影设备与所述投影面之间存在遮挡物,控制发出用于提示移除遮挡物的提示信息。
附图说明
图1为本申请实施例中投影设备的投影场景示意图;
图2为本申请实施例中投影设备光路示意图;
图3为本申请实施例中投影设备的电路架构示意图;
图4为本申请实施例中投影设备光路示意图;
图5为本申请实施例中投影设备的电路结构示意图;
图6为本申请实施例中投影设备的镜头结构示意图;
图7为本申请实施例中投影设备的距离传感器和相机结构示意图;
图8为本申请实施例中投影设备实现显示控制的系统框架示意图;
图9为本申请实施例中一种投影设备的第一平面结构示意图;
图10为本申请实施例中遮挡场景一的示意图;
图11为本申请实施例中遮挡场景二的示意图;
图12为本申请实施例中遮挡场景三的示意图;
图13为本申请实施例中遮挡场景四的示意图;
图14为本申请实施例中另一种投影设备的第一平面结构示意图;
图15为本申请实施例中遮挡场景五的示意图;
图16为本申请实施例中遮挡场景六的示意图;
图17为本申请实施例中遮挡场景七的示意图;
图18为本申请实施例中遮挡场景八的示意图;
图19为本申请实施例中第一种投影检测方法的流程图;
图20为本申请实施例中第二种投影检测方法的流程图;
图21为本申请实施例中第三种投影检测方法的流程图;
图22为本申请实施例投影设备实现防射眼功能的信令交互时序示意图;
图23为本申请实施例投影设备实现显示画面校正功能的信令交互时序示意图;
图24为本申请实施例投影设备实现自动对焦算法的流程示意图;
图25为本申请实施例投影设备实现梯形校正、避障算法的流程示意图;
图26为本申请实施例投影设备实现入幕算法的流程示意图;
图27为本申请实施例投影设备实现防射眼算法的流程示意图;
图28为本申请实施例中投影设备进行避障投影的流程示意图;
图29为本申请实施例中障碍物集合以及轮廓层级的示意图;
图30为本申请实施例中更新障碍物轮廓坐标集的流程示意图;
图31为本申请实施例中投影区域变化的示意图;
图32为本申请实施例中更新障碍物轮廓坐标集的流程示意图;
图33为本申请实施例中矩形网格和非障碍物区域的示意图;
图34为本申请实施例中重组HSV图像的流程示意图。
具体实施方式
投影设备是一种可以将媒体数据投射到投影介质上的设备,投影设备可以通过不同的接口与计算机、广电网络、互联网、VCD(Video Compact Disc:视频高密光盘)、DVD(Digital Versatile Disc Recordable:数字化视频光盘)、游戏机、DV(Digital Video:数码摄像机)等设备连接,以接收需要投影的媒体数据。其中,所述媒体数据包括但不限于图像、视频、文本等类型,投影介质包括但不限于墙壁、幕布、屏幕等实体形式。
图1示出了本申请一实施例投影设备的投影场景示意图,图2示出了投影设备光路示意图。
在一些实施例中,参考图1和图2,本申请提供的一种投影设备包括投影介质1和用于投影的装置2。投影介质1固定于第一位置上,用于投影的装置2放置于第二位置上,通过调试第一位置和第二位置的关系,使用于投影的装置2的投射画面与投影介质1的投影面吻合。用于投影的装置2包括投影组件,投影组件包括激光光源210,光机220和镜头230。其中,激光光源210为光机220提供照明,光机220对光源光束进行调制并输出至镜头230,镜头230进行成像并投射至投影介质1,由投影介质1呈现投影画面。
在一些实施例中,用于投影的装置2的激光光源210包括激光器组件和光学镜片组件,激光器组件发出的光束可透过光学镜片组件,进而为光机220提供照明。其中,例如,光学镜片组件需要较高等级的环境洁净度、气密等级密封;而安装激光器组件的腔室可以采用密封等级较低的防尘等级密封,以降低密封成本。
在一些实施例中,用于投影的装置2的光机220可包括蓝色光机、绿色光机和红色光机,还可以包括散热系统、电路控制系统等。
在一些实施例中,投影设备的发光部件还可以通过LED光源实现。
图3示出了投影设备的电路架构示意图。在一些实施例中,用于投影的装置2可以包括显示控制电路、激光光源、至少一个激光器驱动组件以及至少一个亮度传感器,该激光光源可包括与至少一个激光器驱动组件一一对应的至少一个激光器。其中,该至少一个是指一个或多个,多个是指两个或两个以上。
基于该电路架构,投影设备可以实现自适应调整。例如,通过在激光光源210的出光路径中设置亮度传感器,使亮度传感器260可以检测激光光源的第一亮度值,并将第一亮度值发送至显示控制电路。显示控制电路可以获取每个激光器的驱动电流对应的第二亮度值,并在确定该激光器的第二亮度值与该激光器的第一亮度值的差值大于差值阈值时,确定该激光器发生COD(Catastrophic optical damage,光学灾变损伤)故障,则显示控制电路可以调整激光器对应的激光器驱动组件的电流控制信号,直至前述差值小于或等于阈值,从而消除激光器的COD故障;该投影设备能够及时消除激光器的COD故障,降低激光器的损坏率,提高投影设备的图像显示效果。
在一些实施例中,参照图4,用于投影的装置2中的激光光源210可以包括光学组件310以及独立设置的蓝色激光器211、红色激光器212和绿色激光器213,该投影设备也可以称为三色投影设备,蓝色激光器211、红色激光器212和绿色激光器213均为模块轻量化(Mirai Console Loader,MCL)封装激光器,体积小,利于光路的紧凑排布。
图5示出了本申请一实施例投影设备的电路结构示意图。
在一些实施例中,激光器驱动组件可以包括驱动电路301、开关电路302和放大电路303。该驱动电路301可以为驱动芯片。该开关电路302可以为金属氧化物半导体(metal-oxide-semiconductor,MOS)管。
其中,该驱动电路301分别与开关电路302、放大电路303以及激光光源210所包括的对应的激光器连接。该驱动电路301用于基于显示控制电路发送的电流控制信号通过VOUT端向激光光源210中对应的激光器输出驱动电流,并通过ENOUT端将接收到的使能信号传输至开关电路302。
显示控制电路还用于将放大后的驱动电压确定为激光器的驱动电流,并获取该驱动电流对应的第二亮度值。
在一些实施例中,放大电路303可以包括:第一运算放大器A1、第一电阻(又称取样功率电阻)R1、第二电阻R2、第三电阻R3和第四电阻R4。
在一些实施例中,显示控制电路,还用于当激光器的第二亮度值与激光器的第一亮度值的差值小于或等于差值阈值时,恢复与激光器对应的激光器驱动组件的电流控制信号至初始值,该初始值为正常状态下对激光器的PWM电流控制信号的大小。从而,当激光器发生COD故障时,可以快速地识别,并及时采取降低驱动电流的措施,减轻激光器自身的持续损伤,帮助其自恢复,整个过程中不需要拆机和人为干涉,提高了激光器光源使用的可靠性,保证了激光投影设备的投影显示质量。
在一些实施例中,用于投影的装置2包括控制器,所述控制器与投影设备的相关硬件,例如与显示控制电路、亮度传感器、距离传感器、图像采集装置等硬件连接,用于控制投影设备的投影、调焦、校正、遮挡物检测、遮挡提示、开关屏状态调节等功能的实现。
在一些实施例中,投影设备的机身上可设置有若干类型的接口,例如电源接口、USB接口、HDMI(High Definition Multimedia Interface,高清多媒体接口)接口、网线接口、VGA(Video Graphics Array,视频图像阵列)接口、DVI(Digital Visual Interface,数字视频接口)等,以连接用于传输媒体的信号源。
在一些实施例中,投影设备启动后可直接进入上次选择的信号源的显示界面,或者信号源选择界面,其中信号源例如是预置的视频点播程序,还可以是HDMI接口、USB接口、直播电视接口等信号源中的一种。用户选择目标信号源后,用于投影的装置2可以从目标信号源获取媒体数据,并将媒体数据投射于投影介质1上进行显示。
在一些实施例中,用于投影的装置2可以配置图像采集装置,用于与投影设备协同运行,以实现对投影过程的调节控制。例如,投影设备可配置相机如3D相机、单目或双目相机;其中,相机可以用于拍摄投影面中显示的图像,可以是摄像头。摄像头可以包括镜头组件,镜头组件中设有感光元件和透镜。透镜通过多个镜片对光线的折射作用,使景物的图像的光能够照射在感光元件上。感光元件可以根据摄像头的规格选用基于电荷耦合器件或互补金属氧化物半导体的检测原理,通过光感材料将光信号转化为电信号,并将转化后的电信号输出成图像数据。
图6示出了在一些实施例中用于投影的装置2的镜头结构示意图。为了支持用于投影的装置2的自动调焦过程,如图6所示,用于投影的装置2的镜头300还可以包括光学组件310和驱动马达320。其中,光学组件310是由一个或多个透镜组成的透镜组,可以对光机220发射的光线进行折射,使光机220发出的光线能够透射到投影面上,形成透射内容影像。
光学组件310可以包括镜筒以及设置在镜筒内的多个透镜。根据透镜位置是否能够移动,光学组件310中的透镜可以划分为移动镜片311和固定镜片312,通过改变移动镜片311的位置,调整移动镜片311和固定镜片312之间的距离,改变光学组件310整体焦距。因此,驱动马达320可以通过连接光学组件310中的移动镜片311,带动移动镜片311进行位置移动,实现自动调焦功能。
需要说明的是,本申请部分实施例中所述的调焦过程是指通过驱动马达320改变移动镜片311的位置,从而调整移动镜片311相对于固定镜片312之间的距离,即调整像面位置,因此光学组件310中镜片组合的成像原理,所述调整焦距实则为调整像距,但就光学组件310的整体结构而言,调整移动镜片311的位置等效于调节光学组件310的整体焦距调整。
当用于投影的装置2与投影面之间相距不同距离时,需要用于投影的装置2的镜头调整不同的焦距从而在投影面上透射清晰的图像。而在投影过程中,用于投影的装置2与投影面的间隔距离会受用户的摆放位置的不同而需要不同的焦距。因此,为适应不同的使用场景,用于投影的装置2需要调节光学组件310的焦距。
图7示出了在一些实施例中距离传感器和相机结构示意图。如图7所示,用于投影的装置2可以包括光机220、投影介质1以及距离传感器600,还可以内置或外接相机700,相机700可以对用于投影的装置2投射的画面进行图像拍摄,以获取投影图像。用于投影的装置2再通过对投射内容图像进行清晰度检测,确定当前镜头焦距是否合适,并在不合适时进行焦距调整。基于相机700拍摄的投影图像进行自动调焦时,用于投影的装置2可以通过不断调整镜头位置并拍照,并通过对比前后位置图片的清晰度找到调焦位置,从而将光学组件310中的移动镜片311调整至合适的位置。例如,控制器500可以先控制驱动马达320将移动镜片311从调焦起点位置逐渐移动至调焦终点位置,并在此期间不断通过相机700获取投影图像。再通过对多个投影图像进行清晰度检测,确定清晰度最高的位置,最后控制驱动马达320将移动镜片311从调焦终点调整到清晰度最高的位置,完成自动调焦。
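A minimal sketch of this sweep-and-compare focusing loop, in Python with OpenCV (the library the algorithm services below are said to build on). The hardware hooks capture_frame() and move_lens_to() are assumptions, and the Laplacian-variance score merely stands in for whichever clarity metric the device actually uses:

    import cv2

    def sharpness(image_bgr):
        # Variance of the Laplacian: larger values mean a sharper image.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def autofocus_sweep(capture_frame, move_lens_to, positions):
        # capture_frame and move_lens_to are assumed hardware hooks.
        best_pos, best_score = positions[0], -1.0
        for pos in positions:
            move_lens_to(pos)
            score = sharpness(capture_frame())
            if score > best_score:
                best_pos, best_score = pos, score
        move_lens_to(best_pos)  # return the moving lens to the sharpest spot
        return best_pos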
在投影设备设置双目相机时,根据双目相机在设备第一平面上的安装侧位,具体包括左相机(第一相机)和右相机(第二相机)。在双目相机未被遮挡时,可同时采集到投影介质1呈现投影画面的图像;若双目相机中至少有一个相机被遮挡,则被遮挡的相机拍摄的图像中无投影内容的呈现,例如遮挡物是红色的,则被遮挡的相机可能采集到纯红色图像。所述第一平面是用于投影的装置2的外壳平面中,与投影介质1的投影面平行且相对的平面。
在一些实施例中,图8示例一种投影设备实现显示控制的系统框架示意图,包括应用程序服务层、进程通信框架、操作层、框架层、校正服务、摄像头服务、飞行时间服务、以及硬件及其驱动等。
应用程序服务层用于实现投影设备和用户之间的交互;基于用户界面的显示,用户可对投影设备的各项参数以及显示画面进行配置,控制器通过协调、调用各种功能对应的算法服务,可实现投影设备在显示异常时自动校正其显示画面的功能。
服务层可包括校正服务、摄像头服务、飞行时间(TOF:Time of Flight)服务等内容,所述服务向上可对焦应用程序服务层(APK Service),实现投影设备不同服务配置的对应特定功能;服务层向下对接算法库、相机、飞行时间传感器等数据采集业务,实现封装底层复杂逻辑、并将业务数据传送至对应服务层的功能。
底层算法库可提供校正服务、和投影设备实现各种功能的控制算法,所述算法库例如可基于OpenCV完成各种数学运算,以实现为校正服务提供基础能力。OpenCV是一个基于BSD许可(开源)发行的跨平台计算机视觉和机器学习软件库,可以运行在Linux、Windows、Android和Mac OS等操作系统上。
用于投影的装置2具备长焦微投的特点,控制器控制整体系统架构,基于底层程序逻辑实现对投影设备的投影控制,包括但不限于投影画面自动梯形校正、自动入幕、自动避障、自动调焦、防射眼、遮挡物检测、遮挡提示、开关屏控制等功能。
在一些实施例中,用于投影的装置2配置有陀螺仪传感器,在投影设备移动过程中,陀螺仪传感器可感知投影设备的位移并主动采集位置数据;然后通过框架层将已采集的位置数据发送至应用程序服务层,支撑用户界面交互、应用程序交互过程中所需应用数据,位置数据还可用于控制器在算法服务实现中的数据调用。
在一些实施例中,用于投影的装置2还配置有用于检测距离的距离传感器,距离传感器可采用飞行时间(Time of Flight,TOF)传感器,飞行时间传感器是利用信号在发射端和反射端之间往返的飞行时间来测量节点间的距离,在飞行时间传感器采集到距离数据后,将距离数据发送给飞行时间服务;飞行时间服务获取距离数据后,将采集的距离数据通过进程通信框架发送至应用程序服务层,距离数据将用于控制器的数据调用、用户界面、程序应用等交互使用,以及作为遮挡物检测的参考数据之一。
在一些实施例中,用于投影的装置2还可配置图像采集装置,所述图像采集装置可采用双目相机、深度相机或3D相机等;图像采集装置将采集的图像数据发送至摄像头服务,然后由摄像头服务将图像数据发送至进程通信框架和/或校正服务;进程通信框架将图像数据发送至应用程序服务层,图像数据将用于控制器的数据调用、用户界面、程序应用等交互使用,以及作为遮挡物检测的参考数据之二。
在一些实施例中,通过进程通信框架与应用程序服务进行数据交互,然后经进程通信框架将投影校正参数反馈至校正服务;校正服务将投影校正参数发送至投影设备的操作层,由操作系统根据投影校正参数生成校正指令,并将校正信令下发给光机控制驱动模块,以使光机驱动模块根据投影校正参数,调节光机工况,完成投影画面的自动校正。
在一些实施例中,当检测到校正指令时,投影设备可以对投影画面进行校正。可预先创建距离、水平夹角及偏移角之间的关联关系,然后投影设备的控制器通过获取光机至投影介质1的当前距离,结合所属关联关系,确定当前时刻光机与投影介质1的目标夹角,实现投影画面校正。其中,所述目标夹角具体实施为光机中轴线与投影介质1的投影面之间的夹角。
在一些实施例中,投影设备自动校正完成后可重新调焦,控制器检测自动调焦功能是否开启;若自动调焦功能未开启,控制器将结束自动调焦业务;若自动调焦功能已开启,控制器根据飞行时间传感器的距离检测值进行调焦计算。
在一些实施例中,控制器根据飞行时间传感器的距离检测值,查询预设映射表,预设映射表记录有距离与焦距的映射关系,从而获取该距离检测值对应的投影设备的焦距;然后中间件将获取的焦距下发给投影设备的光机;光机按照上述焦距进行激光发射后,由至少一个相机拍摄投影内容图像,控制器对投射内容图像进行清晰度检测,确定当前的镜头焦距是否合适,若焦距不合适,需要进行调焦处理。投影设备通过调整镜头位置并拍摄,以及对比调整前后投射内容图像的清晰度变化,定位清晰度最高的调焦位置。
如果判定结果符合预设完成条件,则控制自动调焦流程结束;如果判定结果不符合预设完成条件,中间件将微调投影设备光机的焦距参数,例如按照预设步长逐渐微调焦距,并将调整的焦距参数再次设置于光机,通过多次拍照、清晰度评价等步骤,最终通过投影画面的清晰度对比,锁定最优焦距,从而完成自动调焦。
在一些实施例中,投影设备的第一平面上至少设置有镜头、距离传感器和图像采集装置,图像采集装置可包括一个或多个相机。其中,所述第一平面是在投影时用于投影的装置2上与投影面平行且相对的平面。
在一些实施例中,参照图9示例的第一平面结构,在第一平面上设置有第一测量组件和第二测量组件250,第一测量组件包括第一相机241,第二测量组件250包括第二相机251和距离传感器252。第一测量组件设置于第一平面的第一位置(对应第一平面的左侧区域),第二测量组件250设置于第一平面的第二位置(对应第一平面的右侧区域),第一相机241和第二相机251包含于图像采集装置内,第一相机241和第二相机251应具有一定的距离间隔。第一测量组件和第二测量组件250位于第一平面的哪一侧不影响本方案的实质。
在一些实施例中,参照图9,投影组件中的镜头230与第一相机241在第一平面上的同侧,即均位于第一平面的左侧,并且镜头230和第一相机241之间的间距较小,位置关系紧凑。同样地,第二相机251和距离传感器252的间距较小,位置紧凑。
在一些实施例中,镜头230、第一相机241、第二相机251和距离传感器252的中心点可设置为等高。
在一些实施例中,距离传感器252可采用TOF传感器,或者其他用于检测距离的传感器;第一相机241和第二相机251可采用3D相机、深度相机等。
在一些实施例中,基于图9示例的投影设备结构,可包括如下四种遮挡场景:
遮挡场景一:参照图10的示例,遮挡物相对较小,遮挡范围仅覆盖第一平面的左侧区域,即仅遮挡镜头230和第一相机241,而右侧的第二相机251和距离传感器252未被遮挡。
遮挡场景二:参照图11的示例,遮挡物相对较小,遮挡范围仅覆盖第一平面的右侧区域,即仅遮挡第二相机251和距离传感器252,而左侧的镜头230和第一相机241未被遮挡。
遮挡场景三:参照图12的示例,遮挡物相对较大,遮挡范围覆盖第一平面的左右两侧,即镜头230、第一相机241、第二相机251和距离传感器252都被遮挡。
遮挡场景四:参照图13的示例,投影设备前无任何遮挡物,即镜头230、第一相机241、第二相机251和距离传感器252均未被遮挡。对于上述遮挡场景一和遮挡场景三,由于镜头230被遮挡,一方面会影响媒体数据的投影显示,另一方面镜头230投射的光温度较高,极易导致遮挡物被高温灼伤,尤其是遮挡物燃点较低时甚至可能引发火险,存在安全隐患;对于上述遮挡场景二,虽然镜头230未被遮挡,但距离传感器252却被遮挡,这将导致投影设备无法精准自校正。由此可见,对投影设备进行遮挡物检测,并及时提示用户移除遮挡物是十分必要的。
在一些实施例中,控制器在接收到开机广播或待机广播(待机广播包括STR(Suspend to RAM,挂起到内存)广播)时,利用距离传感器252进行距离检测,若满足遮挡场景二或遮挡场景三,则距离传感器252发射的信号在中途遇到遮挡物时就会被反射回来,导致距离检测值较小;若满足遮挡场景四,距离传感器252发射的信号会被投影介质1反射回来,此场景下得到的距离检测值等于投影距离L,所述投影距离L为投影面与光机之间的间隔距离。
因此,控制器在获取距离传感器252的距离检测值时,将该距离检测值与预设的距离阈值进行比较。若距离检测值小于或等于距离阈值,则可能满足遮挡场景二,即镜头230未被遮挡,距离传感器252被遮挡,该场景下虽然镜头230发出的投影光不会灼烧遮挡物,但遮挡物会影响投影设备的自动校正,因此投影设备需要提示移除遮挡物;或者,若距离检测值小于或等于距离阈值,则可能满足遮挡场景三,即镜头230和距离传感器252均被遮挡,不仅会影响投影设备的自动校正,还会导致遮挡物被高温投影光灼伤,因此也要提示移除遮挡物。其中,0<距离阈值<L,L表示镜头230与投影介质1的投影面之间的距离,即投影距离,距离阈值可以基于镜头与遮挡物之间的安全距离而设定,即以避免镜头发出的投影光对遮挡物的高温灼伤风险为目的设定的阈值,例如安全距离为10cm,即在小于10cm的区域内的遮挡物会被投影光灼伤,则设置距离阈值大于或等于该安全距离。
在一些实施例中,若距离检测值小于或等于距离阈值,则控制器控制投影设备提示移除遮挡物,以及,记录投影设备处于有遮挡物状态。
在一些实施例中,参照图9,投影设备可配置有语音播放器260,并通过语音播放器260播报提示信息,所述提示信息例如为“请尽快移除投影设备前的遮挡物”;或者,投影设备控制投影介质1的投影画面上显示用于提示移除遮挡物的文字信息;又或者,投影设备向可通信的关联电子设备推送提示信息,所述电子设备例如是智能手机、平板电脑、计算机等。投影设备进行移除遮挡物的提示方式不限于本申请的示例。
在一些实施例中,投影设备的系统中可设置一个遮挡状态标志位,遮挡状态标志位用于记录和指示投影设备的遮挡状态,所述遮挡状态包括无遮挡物状态和有遮挡物状态,其中,有遮挡物状态用于指示投影设备与投影面之间存在遮挡物,无遮挡物状态用于指示投影设备与投影面之间不存在遮挡物。例如遮挡状态标志位的状态值设置为0则表征无遮挡物状态,遮挡状态标志位的状态值设置为1则表征有遮挡物状态。
在一些实施例中,若距离检测值小于或等于距离阈值,则遮挡状态标志位记录为有遮挡物状态,投影设备提示移除遮挡物,用户在移除遮挡物后,投影设备检测到投影设备前无遮挡物(即满足遮挡场景四),则更新遮挡状态标志位的状态值,使遮挡状态变更为无遮挡物状态。投影设备在检测到出现遮挡物或者遮挡物被移除,需要同步变更遮挡状态标志位记录的状态值。
在一些实施例中,若距离检测值小于或等于距离阈值,则可能是产生遮挡场景二或遮挡场景三,为判别具体为哪种类型的遮挡场景,控制器可启动第一相机241和第二相机251,获取第一相机241采集的第一图像,获取第二相机251采集的第二图像,并比较第一图像和第二图像。
对于遮挡场景二,右侧的第二相机251和距离传感器252被遮挡,而左侧的镜头230和第一相机241未被遮挡,因此第一图像是正常采集的投影画面图像,由于第二相机251的镜头与遮挡物距离较近,导致第二相机251的镜头光折射出来的画面与第一图像不同,例如遮挡物为黑色,则第二图像呈现为纯黑图像,此场景下第一图像和第二图像的相似性较低。
对于遮挡场景三,由于镜头230、第一相机241、第二相机251和距离传感器252都被同一遮挡物遮挡,而遮挡物表面的颜色和纹理是接近或一致的,因此该场景下拍摄的第一图像和第二图像的相似性较高。
在一些实施例中,在距离检测值小于或等于距离阈值时,控制器进一步计算第一图像与第二图像的相似度,并比较该相似度与预设的相似度阈值。若相似度大于或等于相似度阈值,即认为第一图像和第二图像的相似性较高,判定当前为遮挡场景三;若相似度小于相似度阈值,即认为第一图像与第二图像的相似性较低,判定当前为遮挡场景二。
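A hedged sketch of the two-stage decision just described: the distance reading first narrows the case to scenes {two, three} or {one, four}, and the similarity of the two camera images then picks within each pair. Threshold values and scene labels are illustrative only:

    def classify_occlusion(distance, dist_threshold, similarity, sim_threshold):
        # distance <= threshold: the distance-sensor side is blocked (scene 2 or 3);
        # the two camera images disagree only when just one side is covered.
        if distance <= dist_threshold:
            return "scene 3" if similarity >= sim_threshold else "scene 2"
        return "scene 4" if similarity >= sim_threshold else "scene 1"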
在一些实施例中,控制器若判定产生遮挡场景二,由于该场景下镜头230未被遮挡,即不存在高温光灼烧遮挡物的安全风险,仅存在因距离传感器252被遮挡导致的投影设备无法精准自校正的问题,因此只需控制投影设备提示移除遮挡物,以及使遮挡状态标志位记录有遮挡物状态。
在一些实施例中,控制器若判定产生遮挡场景三,该场景下镜头230和距离传感器252均被遮挡,不仅影响投影设备的自动校正,镜头230发出的高温光也会灼烧遮挡物,因此控制器控制投影设备提示移除遮挡物,使遮挡状态标志位记录有遮挡物状态,以及执行对投影设备的关屏保护操作,即控制投影组件暂停向投影介质1投射媒体数据,使镜头230不再发出投影光,进而避免移除遮挡物前投影光灼烧遮挡物。
在一些实施例中,控制器可每间隔预设时长,控制距离传感器252测量一次距离检测值,检测完成后立即关闭距离传感器252,以降低投影设备的系统资源消耗。基于前一实施例,控制器若检测到投影设备前无遮挡物,即用户已移除遮挡物,则执行对投影设备的开屏操作,即恢复投影组件向投影介质1投射媒体数据,并将遮挡状态标志位的记录变更为无遮挡物状态。
在一些实施例中,对于遮挡场景一,遮挡物仅覆盖第一平面的第一位置(图10第二视角下为第一平面的左侧),即遮挡镜头230,而未遮挡距离传感器252,使得距离传感器252发射的信号直达投影介质1后才被反射回来,导致距离传感器252测量的距离检测值大于距离阈值;对于遮挡场景四,投影设备前无遮挡物,不会遮挡镜头230和距离传感器252,同样使得距离传感器252发射的信号直达投影介质1后才被反射回来,而非被中途的遮挡物反射,因此该场景下距离检测值大于距离阈值。
在一些实施例中,若距离检测值大于距离阈值,则可能是产生遮挡场景一或遮挡场景四,为判别具体为哪种类型的遮挡场景,控制器可启动第一相机241和第二相机251,获取第一相机241采集的第一图像,获取第二相机251采集的第二图像,并比较第一图像和第二图像。
对于遮挡场景一,左侧的镜头230和第一相机241被遮挡,而右侧的第二相机251和距离传感器252未被遮挡,因此第二图像是正常采集的投影画面图像,由于第一相机241的镜头与遮挡物距离较近,导致第一相机241的镜头光折射出来的画面与第二图像不同,例如遮挡物为白色,则第二图像呈现为纯白图像,此场景下第一图像和第二图像的相似性较低。
对于遮挡场景四,由于第一平面之前无任何遮挡物,镜头230、第一相机241、第二相机251和距离传感器252均未被遮挡,第一相机241和第二相机251采集的是同一投影介质1上呈现的投影画面的 图像,因此第一图像和第二图像的相似性较高。
在一些实施例中,在距离检测值大于距离阈值时,控制器进一步计算第一图像与第二图像的相似度,并比较该相似度与预设的相似度阈值。若相似度大于或等于相似度阈值,即认为第一图像和第二图像的相似性较高,判定当前为遮挡场景四;若相似度小于相似度阈值,即认为第一图像与第二图像的相似性较低,判定当前为遮挡场景一。
在一些实施例中,控制器若判定产生遮挡场景四,由于该场景下投影设备前无遮挡物,既不存在投影光对遮挡物的灼伤问题,也不存在遮挡物干扰投影设备自动校正,因此投影设备无需提示移除遮挡物的信息,使遮挡状态标志位记录无遮挡物状态,投影设备可正常运行。
在一些实施例中,控制器若判定产生遮挡场景一,该场景下镜头230被遮挡,镜头230发出的高温投影光可能也会灼烧遮挡物,因此控制器控制投影设备提示移除遮挡物,使遮挡状态标志位记录有遮挡物状态,以及执行对投影设备的关屏保护操作,即控制投影组件暂停向投影介质1投射媒体数据,使镜头230停止发出投影光,进而避免移除遮挡物前投影光灼烧遮挡物。
在一些实施例中,控制器检测到投影设备前无遮挡物,即判定满足遮挡场景四时,查询遮挡状态标志位记录的状态值;若遮挡状态标志位当前指示为有遮挡物状态,并且投影设备处于关屏保护状态,则表明用户是在遮挡场景一或遮挡场景三下移除遮挡物,则执行对投影设备的开屏操作,使投影组件恢复向投影介质1投射媒体数据,以及将遮挡状态标志位的记录变更为无遮挡物状态;若遮挡状态标志位当前指示为无遮挡物状态,并且投影设备处于开屏状态,则说明投影设备前一直没有遮挡物,保持投影设备的正常运行状态;若遮挡状态标志位当前指示为有遮挡物状态,并且投影设备处于开屏状态,则表明用户是在遮挡场景二下移除遮挡物,则将遮挡状态标志位的记录变更为无遮挡物状态,保持投影设备的正常运行状态。
在一些实施例中,可利用如余弦距离、统计直方图等算法,比较第一图像与第二图像的相似性。其中,余弦距离就是比较第一图像和第二图像的相似程度,统计直方图算法则是分析图像特征,从而确定哪一或哪些图像采集装置被遮挡。
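Two common ways to score that similarity, sketched in Python/OpenCV; the bin count and the choice of grayscale histograms are assumptions, not the device's fixed parameters:

    import cv2
    import numpy as np

    def hist_similarity(img1, img2, bins=32):
        # Compare grayscale histograms with correlation; 1.0 means identical.
        h1 = cv2.calcHist([cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)], [0], None, [bins], [0, 256])
        h2 = cv2.calcHist([cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)], [0], None, [bins], [0, 256])
        return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

    def cosine_similarity(img1, img2):
        # Cosine distance over flattened pixels; robust to global scaling.
        a = img1.astype(np.float32).ravel()
        b = img2.astype(np.float32).ravel()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))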
在一些实施例中,无论遮挡物覆盖第一平面的哪一侧或两侧,即对于遮挡场景一、遮挡场景二和遮挡场景三中的任意场景,只要检测到投影设备前方遮挡物,都可实施移除遮挡物的提示,以及对投影设备进行关屏保护。
在一些实施例中,图14示例另一种投影设备的第一平面结构,在第一平面上设置有第一测量组件和第二测量组件250,第一测量组件包括第一相机241,第二测量组件250包括第二相机251和距离传感器252,与图9的区别在于,投影组件中的镜头230与第二测量组件250在第一平面上处于同侧,即镜头230、第二相机251和距离传感器252均位于第一平面的第二位置(右侧区域)且位置相互靠近,第一相机241则单独位于第一平面的第一位置(左侧区域)。第一测量组件和第二测量组件250位于第一平面的哪一侧不影响本方案的实质。
在一些实施例中,基于图14示例的投影设备结构,可包括如下四种遮挡场景:
遮挡场景五:参照图15的示例,遮挡物仅覆盖第一平面的右侧,即镜头230、第二相机251和距离传感器252均被遮挡,第一相机241未被遮挡,并且距离检测值小于或等于距离阈值。在遮挡场景五中,镜头230与遮挡物的间距未超过距离阈值,容易导致遮挡物被投影光高温灼伤,因此控制器控制投影设备提示移除遮挡物,使遮挡状态标志位记录有遮挡物状态,以及执行对投影设备的关屏保护操作,即控制投影组件暂停向投影介质1投射媒体数据,使镜头230停止发出投影光,进而避免移除遮挡物前投影光灼烧遮挡物。
遮挡场景六:参照图16的示例,遮挡物仅覆盖第一平面的右侧,即镜头230、第二相机251和距离传感器252均被遮挡,第一相机241未被遮挡,假设距离检测值为d,该场景满足距离阈值<d<投影距离L,由于镜头230与遮挡物的间距超过距离阈值,不具备遮挡物被投影光高温灼伤的风险,但遮挡物遮挡住距离传感器252,会导致投影设备无法自动校正及调焦,因此控制器控制投影设备提示移除遮挡物,使遮挡状态标志位记录有遮挡物状态,但仍可保持投影设备的开屏状态。其中,在不移动用于投影的装置2和投影介质1的情况下,投影距离L保持为固定值。
在d等于投影距离L时,包括两种可能的场景,参照图17示例的遮挡场景七,遮挡物仅覆盖第一平面的左侧,即仅第一相机241被遮挡,镜头230、第二相机251和距离传感器252均未被遮挡;参照图18示例的遮挡场景八,在投影光的投射路径上无任何遮挡物。遮挡场景七和遮挡场景八中,第一平面右侧的镜头230、距离传感器252和第二相机251均未被遮挡,既不影响投影设备的自动校正及调焦,也不存在投影光高温灼伤遮挡物的风险,因此距离检测值d等于投影距离L时,皆认为投影设备前无遮挡物,控制器无需提示移除遮挡物,使遮挡状态标志位记录无遮挡物状态。
图14示例的投影设备是双目相机的结构,作为图14的变型,投影设备也可为单目相机结构,例如取消第一相机241,使第一平面仅设置单独一个测量组件,该测量组件包括一个图像采集装置和距离传感器,该测量组件可设置于第一平面上的任意位置,镜头230与该测量组件位置临近,其遮挡物检测机制与图14示例的双目相机结构基本一致,此处不再赘述。图14示例的投影设备,其遮挡物检测机制主要依靠距离传感器252测量的距离检测值,而无需对双摄像头采集的图像进行相似性分析。对于遮挡场景五或遮挡场景六,只要距离检测值d小于投影距离L,都可实施移除遮挡物的提示,以及对投影设备进行关屏保护。
本申请中图10、图11、图12、图13、图15、图16、图17和图18呈现的是第一视角,图9和图14呈现的是第二视角,第一视角和第二视角是相反的,在两种视角下第一位置和第二位置在第一平面上所对应的侧是相反的,其中第一视角是投影设备使用时的实际视角,因此本申请是以第一视角来定义第一位置和第二位置在第一平面上的相对方向。
在一些实施例中,图19提供第一种投影检测方法,所述方法由投影设备的控制器执行,包括如下步骤:
步骤S1901,响应于投影设备的开机指令,启动所述投影设备。
步骤S1902,根据投影设备上的镜头、距离传感器和图像采集装置在第一平面上的位置关系,以及所述距离传感器的距离检测值,检测投影设备与投影面之间是否存在遮挡物。
其中,所述第一平面为投影时投影设备上与所述投影面平行的平面,投影面用于接收并显示光机投射的投影内容。
步骤S1903,若检测到所述投影设备与投影面之间存在遮挡物,控制发出用于提示移除遮挡物的提示信息。
该实施例中,未限定镜头230、距离传感器252和图像采集装置在投影设备的第一平面上的分布和位置关系,也未限定图像采集装置包括一个或多个相机,因此基于投影设备的第一平面上相关元件的位置关系,结合距离传感器252的距离检测值,来检测投影设备的遮挡状态,并在具有遮挡物时,及时提示用户移除遮挡物,以规避遮挡物被投影光灼烧,以及重要元件被遮挡而导致的投影设备工作异常问题,在保证安全性的同时,提升投影内容的显示效果。该实施例是基于投影设备的结构特点,实现遮挡物检测及提供应对措施。
在一些实施例中,基于图9示例的投影设备,图20提供第二种投影检测方法,所述方法由投影设备的控制器执行,包括如下步骤:
步骤S2001,响应于投影设备的开机指令,启动所述投影设备。
步骤S2002,控制距离传感器计算距离检测值。
步骤S2003,判断距离检测值是否大于距离阈值。若距离检测值小于或等于距离阈值,执行步骤S2004;否则执行步骤S2006~步骤S2008。
步骤S2004,判断投影设备当前是否处于有遮挡物状态。若投影设备当前处于无遮挡物状态,即投影设备前由无遮挡物变为出现遮挡物,则执行步骤S2005;若投影设备当前处于有遮挡物状态,即投影设备的前方一直具有遮挡物,执行步骤S2002,定时测量距离检测值,以检测遮挡物是否被移除。
步骤S2005,记录所述投影设备处于有遮挡物状态,提示移除遮挡物,以及控制光机停止投射投影内容至投影面。
步骤S2006,控制第一相机采集第一图像以及控制第二相机采集第二图像。
步骤S2007,计算所述第一图像与所述第二图像的相似度。
步骤S2008,判断所述相似度是否小于相似度阈值。若第一图像与第二图像的相似度小于相似度阈值,说明存在遮挡物,则执行步骤S2004;若第一图像与第二图像的相似度大于或等于相似度阈值,说明不存在遮挡物,则执行步骤S2009。
步骤S2009,判断投影设备当前是否处于无遮挡物状态。若投影设备当前处于有遮挡物状态,表明用户已移除遮挡物,则执行步骤S2010;若投影设备当前处于无遮挡物状态,即投影设备的前方一直没有遮挡物,执行步骤S2002,定时测量距离检测值,以检测是否出现遮挡物。
步骤S2010,记录所述投影设备处于无遮挡物状态,控制所述光机恢复投射投影内容至投影面。
该方法实施例中,利用距离传感器与双图像采集装置结合,对遮挡场景的类型进行识别,若存在遮挡物,则投影设备及时提示用户移除遮挡物,以及对投影设备进行关屏保护,提升投影设备的使用安全性,避免遮挡物被投影光灼烧,避免遮挡物对投影画面显示的影响,以及规避因距离传感器等关键元件被遮挡时导致的投影设备无法自动校正及调焦的异常问题;若用户在接收到提示后移除遮挡物,投影设备会检测到遮挡物不存在,从而开屏并恢复向投影介质投射媒体数据。
在一些实施例中,基于图14示例的投影设备,图21提供第三种投影检测方法,所述方法由投影设备的控制器执行,包括如下步骤:
步骤S2101,响应于投影设备的开机指令,启动所述投影设备。
步骤S2102,控制距离传感器计算距离检测值。
步骤S2103,判断所述距离检测值是否大于距离阈值。若距离检测值大于距离阈值,执行步骤S2104;若距离检测值小于或等于距离阈值,则执行步骤S2107~步骤S2109。
步骤S2104,判断所述距离检测值是否等于投影距离。
所述投影距离为镜头/第一平面与投影介质之间的距离,若距离检测值大于距离阈值,并且距离检测值小于投影距离,则执行步骤S2105;若距离检测值等于投影距离,则执行步骤S2106。
步骤S2105,确定所述投影设备处于有遮挡物状态,提示移除遮挡物。
步骤S2106,确定所述投影设备处于无遮挡物状态,不提示移除遮挡物。
步骤S2107,确定所述投影设备处于有遮挡物状态,提示移除遮挡物,控制光机停止投射投影内容至投影面。
步骤S2108,每间隔预设时长,重新检测所述投影设备的遮挡状态。
步骤S2109,若检测到所述投影设备的遮挡状态变更为无遮挡物状态,控制所述光机投射投影内容至投影面。
基于图14示例的投影设备结构,根据距离阈值和投影距离,划分为三个距离区间,在(0,距离阈值]对应的区间A内,由于遮挡物距离镜头较近,会导致投影光易灼伤遮挡物,因此在距离检测值位于区间A内时,不仅要进行移除遮挡物的提示,还要对投影设备进行关屏保护;在(距离阈值,投影距离)对应的区间B内,由于遮挡物位于安全范围内,不会被投影光灼伤,因此只需提示移除遮挡物,可以不关屏;在距离检测值等于投影距离时,投影光可直达投影介质,使投影介质显示投射的媒体数据,期间不会遇到遮挡物,因此无需提示移除遮挡物。该实施例可根据距离检测值所处区间,检测是否存在遮挡物,以及确定对遮挡物的响应措施。
基于本申请上述的示例,可以对镜头230、距离传感器和至少一个图像采集装置在第一平面上的位置分布特征进行设置,并配置对应的遮挡物检测及应对机制,因而不局限于本申请示例的实现方式。另外,投影设备的软硬件配置和功能不限定,本申请方案适用于不同类型的投影设备,包括具有长焦微投特性的投影仪等。本申请中所述的投影介质是指被投射且用于显示投影画面的载体,例如墙体、固定或活动的幕布,或者具有显示能力的电子设备,例如电脑等。
在一些实施例中,本申请提供的投影设备及避障投影检测方法还可实现防射眼功能,在检测到用户进入射出激光轨迹范围内时,开启防射眼开关并提醒用户离开当前区域,控制器还可控制用户界面降低显示亮度,以防止激光对用户视力造成伤害。
图22示出了本申请另一实施例投影设备实现防射眼功能的信令交互时序示意图。
在一些实施例中,投影设备被配置为儿童观影模式时,控制器将自动开启防射眼开关。
在一些实施例中,控制器接收到陀螺仪传感器发送的位置移动数据后、或接收到其它传感器所采集的异物入侵数据后,控制器将控制投影设备开启防射眼开关。
在一些实施例中,在飞行时间(TOF)传感器、摄像头设备等设备所采集数据触发预设的任一阈值条件时,控制器将控制用户界面降低显示亮度、显示提示信息、降低光机发射功率、亮度、强度,以实现对用户视力的保护。
在一些实施例中,投影设备控制器可控制校正服务向飞行时间传感器发送信令,步骤S2201,查询投影设备当前设备状态,然后控制器接受来自飞行时间传感器的数据反馈。
步骤S2202,校正服务可向进程通信框架(HSP Core)发送通知算法服务启动防射眼流程信令;
步骤S2203,进程通信框架(HSP Core)将从算法库进行服务能力调用,以调取对应算法服务,例如可包括拍照检测算法、截图画面算法、以及异物检测算法等;
步骤S2204,进程通信框架(HSP Core)基于上述算法服务返回异物检测结果至校正服务;针对返回结果,若达到预设阈值条件,控制器将控制用户界面显示提示信息、降低显示亮度,其信令时序如图22所示。
在一些实施例中,投影设备防射眼开关在开启状态下,用户进入预设的特定区域时,投影设备将自动降低光机发出激光强度、降低用户界面显示亮度、显示安全提示信息。投影设备对上述防射眼功能的控制,可通过以下方法实现:
控制器基于相机获取的投影画面,利用边缘检测算法识别投影设备的投影区域;在投影区域显示为矩形、或类矩形时,控制器通过预设算法获取上述矩形投影区域四个顶点的坐标值;
在实现对于投影区域内的异物检测时,可使用透视变换方法校正投影区域为矩形,计算矩形和投影截图的差值,以实现判断显示区域内是否有异物;若判断结果为存在异物,投影设备自动触发防射眼功能启动。
在实现对投影范围外一定区域的异物检测时,可将当前帧的相机内容、和上一帧的相机内容做差值,以判断投影范围外区域是否有异物进入;若判断有异物进入,投影设备自动触发防射眼功能。
与此同时,投影设备还可利用飞行时间(ToF)相机、或飞行时间传感器检测特定区域的实时深度变化;若深度值变化超过预设阈值,投影设备将自动触发防射眼功能。
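A hedged sketch of the frame-difference check described above, assuming BGR frames from the camera; both thresholds are illustrative example values:

    import cv2

    def foreign_object_entered(prev_frame, cur_frame, pixel_thresh=25, area_thresh=500):
        # Difference the current and previous frames outside the projection
        # area; enough changed pixels is treated as a foreign object entering.
        g1 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(g1, g2)
        _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
        return cv2.countNonZero(mask) > area_thresh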
在一些实施例中,如图27所示,给出投影设备实现防射眼算法的流程示意图,投影设备基于采集的飞行时间数据、截图数据、以及相机数据分析判断是否需要开启防射眼功能。投影设备实现防射眼算法的方式有以下三种:
方式一:
S2700-1,采集飞行时间(TOF)数据;
S2700-2,根据采集的飞行时间数据,控制器做深度差值分析;
S2700-3,判断深度差值是否大于预设阈值X(例如X实施为0),若深度差值大于预设阈值X,执行S2703;
如果深度差值大于预设阈值X,当预设阈值X实施为0时,则可判定有异物已处于投影设备的特定区域。若用户位于所述特定区域,其视力存在被激光损害风险,投影设备将自动启动防射眼功能,以降低光机发出激光强度、降低用户界面显示亮度、并显示安全提示信息。
方式二:
S2701-1,采集截图数据;
S2701-2,根据采集的截图数据做加色模式(RGB)差值分析;
S2701-3,判断RGB差值是否大于预设阈值Y,若RGB差值大于预设阈值Y,执行S2703;
S2703,画面变暗,弹出提示。
投影设备根据已采集截图数据做加色模式(RGB)差值分析,如所述加色模式差值大于预设阈值Y,则可判定有异物已处于投影设备的特定区域;所述特定区域内若存在用户,其视力存在被激光损害风险,投影设备将自动启动防射眼功能,降低发出激光强度、降低用户界面显示亮度并显示对应的安全提示信息。
方式三:
S2702-1,采集相机数据;
S2702-2,根据采集的相机数据获取投影坐标,若获取的投影坐标处于投影区域,执行S2702-3,若获取的投影坐标处于扩展区域,仍执行S2702-3;
S2702-3,根据采集的相机数据做加色模式(RGB)差值分析;
S2702-4,判断RGB差值是否大于预设阈值Y,若RGB差值大于预设阈值Y,执行S2703。
S2703,画面变暗,弹出提示。
投影设备根据已采集相机数据获取投影坐标,然后根据所述投影坐标确定投影设备的投影区域,进一步在投影区域内进行加色模式(RGB)差值分析,如果加色模式差值大于预设阈值Y,则可判定有异物已处于投影设备的特定区域,所述特定区域内若存在用户,其视力存在被激光损害的风险,投影设备将自动启动防射眼功能,降低发出激光强度、降低用户界面显示亮度并显示对应的安全提示信息。
若获取的投影坐标处于扩展区域,控制器仍可在所述扩展区域进行加色模式(RGB)差值分析;如果加色模式差值大于预设阈值Y,则可判定有异物已处于投影设备的特定区域,所述特定区域内若存在用户,其视力存在被投影设备发出激光损害的风险,投影设备将自动启动防射眼功能,降低发出激光强度、降低用户界面显示亮度并显示对应的安全提示信息,如图27所示。
图23示出了本申请另一实施例投影设备实现显示画面校正功能的信令交互时序示意图。
在一些实施例中,通常情况下,投影设备可通过陀螺仪、或陀螺仪传感器对设备移动进行监测。步骤S2301,校正服务向陀螺仪发出用于查询设备状态的信令,并接收陀螺仪反馈用于判定设备是否发生移动的信令。
在一些实施例中,投影设备的显示校正策略可配置为,在陀螺仪、飞行时间传感器同时发生变化时, 投影设备优先触发梯形校正;在陀螺仪数据稳定预设时间长度后,步骤S2302,通知算法服务启动梯形校正流程;控制器启动触发梯形校正;并且控制器还可将投影设备配置为在梯形校正进行时不响应遥控器按键发出的指令;为了配合梯形校正的实现,投影设备将打出纯白图卡。
其中,梯形校正算法可基于双目相机构建世界坐标系下的投影面与光机坐标系转换矩阵;进一步结合光机内参计算投影画面与播放图卡的单应性,并利用该单应性实现投影画面与播放图卡间的任意形状转换。
在一些实施例中,校正服务发送用于通知算法服务启动梯形校正流程的信令至进程通信框架(HSP CORE),所述进程通信框架进一步发送服务能力调用信令至算法服务,以获取能力对应的算法;
算法服务获取执行拍照和画面算法处理服务、避障算法服务,并将其以信令携带的方式发送至进程通信框架;在一些实施例中,进程通信框架执行上述算法,并将执行结果反馈给校正服务,所述执行结果可包括拍照成功、以及避障成功。
在一些实施例中,投影设备执行上述算法、或数据传送过程中,若出现错误校正服务将控制用户界面显示出错返回提示,并控制用户界面再次打出梯形校正、自动对焦图卡。
通过自动避障算法,投影设备可识别幕布;并利用投影变化,将投影画面校正至幕布内显示,实现与幕布边沿对齐的效果。
通过自动对焦算法,投影设备可利用飞行时间(ToF)传感器获取光机与投影面距离,基于所述距离在预设的映射表中查找最佳像距,并利用图像算法评价投影画面清晰程度,以此为依据实现微调像距。
在一些实施例中,步骤S2303,校正服务发送至进程通信框架的自动梯形校正信令可包含其他功能配置指令,例如可包含是否实现同步避障、是否入幕等控制指令。
进程通信框架发送服务能力调用信令至算法服务,使算法服务获取执行自动对焦算法,实现调节设备与幕布之间的视距;在一些实施例中,在应用自动对焦算法实现对应功能后,算法服务还可获取执行自动入幕算法,所述过程中可包含梯形校正算法。
在一些实施例中,投影设备通过执行自动入幕,算法服务可设置投影设备与幕布之间的8个位置坐标;然后再次通过自动对焦算法,实现投影设备与幕布的视距调节;最终,将校正结果反馈至校正服务,步骤S2304,控制用户界面显示校正结果,如图23所示。
在一些实施例中,投影设备通过自动对焦算法,利用其配置的激光测距可获得当前物距,以计算初始焦距、及搜索范围;然后投影设备驱动相机(Camera)进行拍照,并利用对应算法进行清晰度评价。
投影设备在上述搜索范围内,基于搜索算法查找可能的最佳焦距,然后重复上述拍照、清晰度评价步骤,最终通过清晰度对比找到最优焦距,完成自动对焦。
例如,在投影设备启动后,实现自动对焦算法的步骤如图24所示,包括以下步骤:
S2401,用户移动设备,投影设备自动完成校正后重新对焦;
S2402,控制器将检测自动对焦功能是否开启,若否即当自动对焦功能未开启时,控制器将结束自动对焦业务,若是则执行S2403;
S2403,中间件获取飞行时间(TOF)的检测距离;
当自动对焦功能开启时,投影设备将通过中间件获取飞行时间(TOF)传感器的检测距离进行计算;
S2404,根据距离查询映射表获取大致焦距;
S2405,中间件设置该焦距给光机;控制器根据获取的距离查询预设的映射表,以获取投影设备的大致焦距;然后中间件将获取焦距设置到投影设备的光机。
S2406,摄像头拍照;
S2407,根据评价函数,判断对焦是否完成,若是则结束自动对焦流程,否则执行S2408;
S2408,中间件微调焦距(步长),再次执行S2405。
光机以上述焦距进行发出激光后,摄像头将执行拍照指令;控制器根据获取的拍照结果、评价函数,判断投影设备对焦是否完成;如果判定结果符合预设完成条件,则控制自动对焦流程结束;
如果判定结果不符合预设完成条件,中间件将微调投影设备光机的焦距参数,例如可以预设步长逐渐微调焦距,并将调整的焦距参数再次设置到光机;从而实现反复拍照、清晰度评价步骤,最终通过清晰度对比找到最优焦距完成自动对焦,如图24所示。
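A small sketch of this two-phase loop: a coarse focus position from the ToF distance via a preset lookup table, then a stepwise hill-climb that stops once no neighbouring position is sharper. The table contents and the evaluate() hook are assumptions:

    def coarse_focus_from_tof(distance_mm, mapping):
        # mapping: preset table of (distance_mm, focus_position) pairs,
        # as in the distance-to-focus lookup table described above.
        return min(mapping, key=lambda e: abs(e[0] - distance_mm))[1]

    def fine_tune(focus, step, evaluate, max_iters=10):
        # evaluate() is an assumed hook: shoot a photo at the given focus
        # position and return a sharpness score for it.
        best, best_score = focus, evaluate(focus)
        for _ in range(max_iters):
            candidates = [best - step, best + step]
            scores = [evaluate(c) for c in candidates]
            if max(scores) <= best_score:
                break  # preset completion condition: no sharper neighbour
            i = scores.index(max(scores))
            best, best_score = candidates[i], scores[i]
        return best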
在一些实施例中,本申请提供的投影设备可通过梯形校正算法实现显示校正功能。
首先基于标定算法,可获取两相机之间、相机与光机之间的两组外参,即旋转、平移矩阵;然后通过投影设备的光机播放特定棋盘格图卡,并计算投影棋盘格角点深度值,例如通过双目相机之间的平移关系、及相似三角形原理求解xyz坐标值;之后再基于所述xyz拟合出投影面、并求得其与相机坐标系的旋转关系与平移关系,具体可包括俯仰关系(Pitch)和偏航关系(Yaw)。
通过投影设备配置的陀螺仪可得到卷(Roll)参数值,以组合出完整旋转矩阵,最终计算求得世界坐标系下投影面到光机坐标系的外参。
结合上述步骤中计算获取的相机与光机的R、T值,可以得出投影面世界坐标系与光机坐标系的转换关系;结合光机内参,可以组成投影面的点到光机图卡点的单应性矩阵。
最终在投影面选择矩形,利用单应性反求光机图卡对应的坐标,该坐标就是校正坐标,将其设置到光机,即可实现梯形校正。
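A sketch of that last step with OpenCV's homography utilities; the point correspondences here are made-up example values, not calibration output:

    import cv2
    import numpy as np

    # Four point pairs between the light-engine test card and the projection
    # surface define a homography; a rectangle chosen on the surface is then
    # mapped back to card coordinates, which become the correction coordinates.
    card_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
    surface_pts = np.float32([[12, 30], [1890, 8], [1875, 1060], [25, 1075]])  # example values

    H, _ = cv2.findHomography(surface_pts, card_pts)
    rect_on_surface = np.float32([[[100, 100]], [[1800, 100]], [[1800, 1000]], [[100, 1000]]])
    corrected = cv2.perspectiveTransform(rect_on_surface, H)  # card coords to set on the light engine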
如图25所示,给出投影设备实现梯形校正、避障算法的流程示意图,包括以下步骤:
S2501,投影设备控制器获取照片像素点对应点的深度值,或投影点在相机坐标系下的坐标;
S2502,通过深度值,中间件获取光机坐标系与相机坐标系关系;
S2503,然后控制器计算得到投影点在光机坐标系下的坐标值;
S2504,基于坐标值拟合平面获取投影面与光机的夹角;
S2505,根据夹角关系获取投影点在投影面的世界坐标系中的对应坐标;
S2506,根据图卡在光机坐标系下的坐标与投影面对应点的坐标,可计算得到单应性矩阵;
S2507,控制器基于上述已获取数据判定障碍物是否存在,若是执行S2508,否则执行S2509;
S2508,障碍物存在时,在世界坐标系下的投影面上任取矩形坐标,根据单应性关系计算出光机要投射的区域;
S2509,障碍物不存在时,控制器例如可获取二维码特征点;
S2510,获取二维码在预制图卡的坐标;
S2511,获取相机照片与图纸图卡单应性关系;
S2512,将获取的障碍物坐标转换到图卡中,获取障碍物遮挡图卡坐标;
S2513,依据障碍物图卡遮挡区域在光机坐标系下坐标,通过单应性矩阵转换得到投影面的遮挡区域坐标;
S2514,在世界坐标系下投影面上任取矩形坐标,同时避开障碍物,根据单应性关系求出光机要投射的区域。
可以理解,避障算法在梯形校正算法流程选择矩形步骤时,利用算法(OpenCV)库完成异物轮廓提取,选择矩形时避开该障碍物,以实现投影避障功能。
在一些实施例中,如图26所示,为投影设备实现入幕算法的流程示意图,包括以下步骤:
S2601,中间件获取相机拍到的二维码图卡;
S2602,识别二维码特征点,获取在相机坐标系下的坐标;
S2603,控制器进一步获取预置图卡在光机坐标系下的坐标;
S2604,求解相机平面与光机平面的单应性关系;
S2605,控制器基于上述单应性关系,识别相机拍到的幕布四个顶点坐标;
S2606,根据单应性矩阵获取投影到幕布光机要投射图卡的范围。
可以理解,在一些实施例中,入幕算法基于算法库(OpenCV),可识别最大黑色闭合矩形轮廓并提取,判断是否为16:9尺寸;投影特定图卡并使用相机拍摄照片,提取照片中多个角点用于计算投影面(幕布)与光机播放图卡的单应性,将幕布四顶点通过单应性转换至光机像素坐标系,将光机图卡转换至幕布四顶点即可完成计算比对。
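A rough Python/OpenCV rendering of the screen-recognition step, assuming the screen border photographs as the largest dark closed contour; the binarization threshold and aspect tolerance are illustrative:

    import cv2

    def find_screen_quad(camera_img, ar_tol=0.05):
        # Locate the largest dark closed quadrilateral and check ~16:9 aspect.
        gray = cv2.cvtColor(camera_img, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)  # dark -> white
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        best = max(contours, key=cv2.contourArea, default=None)
        if best is None:
            return None
        peri = cv2.arcLength(best, True)
        quad = cv2.approxPolyDP(best, 0.02 * peri, True)
        if len(quad) != 4:
            return None  # not a closed quadrilateral
        x, y, w, h = cv2.boundingRect(quad)
        if abs(w / h - 16 / 9) > ar_tol * (16 / 9):
            return None  # aspect ratio too far from 16:9
        return quad.reshape(4, 2)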
长焦微投电视具有灵活移动的特点,每次位移后投影画面可能会出现失真,另外如投影面存在异物遮挡、或投影画面偏出幕布等异常情况时,本申请提供的投影设备及避障投影检测方法,基于几何校正的显示控制方法,可针对上述问题自动完成校正,包括实现自动梯形校正、自动入幕、自动避障、自动对焦、防射眼等功能。
在一些实施例中,本申请提供的投影设备及避障投影方法在识别到异物后,将投影图像投射到非异物区域,实现避障投影。
在一些实施例中,通过使用自动避障算法,用于投影的装置2可以进行障碍物检测,进而可识别幕布。并利用投影变化,将投影图像校正至幕布内显示,实现与幕布边沿对齐的效果。
然而,如果用于投影的装置2不具备避障功能,就会出现障碍物检测失败的情况。如果用于投影的装置2具备避障功能,由于进行障碍物检测的过程中受环境变化影响较大,例如,当投影区域中存在光斑(亮斑和/或暗斑)时,投影设备可能会将光斑误识别为障碍物,导致检测结果不稳定,使得障碍物检测后投影区域面积小,不满足用户的投影需求,降低用户的使用体验。
为此,本申请一些实施例中提出了一种用于投影的装置2,所述用于投影的装置2可以包括光机220、相机700和控制器500。其中,光机220用于将播放内容投射至投影面中的投影区域,投影面可以是墙面或者幕布。相机700用于拍摄投影面中的投影图像。以解决用户在移动使用用于投影的装置2的过程中,出现障碍物检测失败或障碍物检测后投影区域面积小的问题。
以下结合图28对本申请一些实施例提供的避障投影的过程进行进一步阐述。
在一些实施例中,图28示出了本申请实施例中投影设备进行避障投影的流程示意图。如图28所示,当用户输入投影指令,用于投影的装置2的控制器响应于接收到的投影指令,可以获取投影图像并自动对投影图像进行障碍物检测,并通过障碍物检测结果确定投影区域中不存在障碍物后投射投影图像,从而实现自动避障功能。也就是说,如果投影区域中存在障碍物,用于投影的装置2在执行避障过程之前的投影区域与完成避障过程的投影区域是不同的。具体可以设置为,用于投影的装置2接收到投影指令,响应于接收到的投影指令,开启自动避障功能。投影指令是指用于触发用于投影的装置2自动进行避障过程的控制指令。
在一些实施例中,投影指令可以是用户主动输入的指令。例如,在接通用于投影的装置2的电源后,用于投影的装置2可以在投影面中的投影区域上投射出图像。此时,用户可以按下用于投影的装置2中预先设置的自动避障开关,或用于投影的装置2配套遥控器上的自动避障按键,使用于投影的装置2开启自动避障功能,自动对投影区域进行障碍物检测。
在一些实施例中,控制器响应于投影指令,控制光机220投射白色图卡至投影面中的投影区域。在投射白色图卡之后,控制相机700拍摄投影面图像。由于相机700拍摄的投影面图像的图像区域大于投影区域的图像区域,因此,为了获取投影区域的图像即投影图像,控制器可以基于相机700拍摄的投影图像,计算得到投影区域的四个角点与四个边缘中点在光机220坐标系下的坐标值,并基于坐标值拟合平面,获取投影面与光机220的夹角关系。根据夹角关系获取四个角点与四个边缘中点在投影面的世界坐标系中的对应坐标。获取白色图卡在光机坐标系下的坐标与投影面对应点的坐标,可计算得到单应性矩阵。最后,通过单应性矩阵将投影区域的四个角点与四个边缘中点在光机坐标系下的坐标值转换为对应在相机坐标系下的坐标值,以根据四个角点与四个边缘中点在相机坐标系下的坐标值确定投影区域在投影面图像中的位置和区域面积。
在一些实施例中,控制器在对投影图像执行障碍物的轮廓检测的过程中,基于投影图像采用图像轮廓检测算法得到多轮廓区域信息。其中,多轮廓区域信息包括障碍物轮廓坐标集。障碍物轮廓坐标集用于表征多个障碍物轮廓坐标构成的集合。并根据障碍物轮廓坐标集获取障碍物集合,障碍物集合包括至少一个障碍物以及对应的轮廓层级;轮廓层级用于表征障碍物之间的外包或内嵌关系。需要说明的是,在执行障碍物的轮廓检测之前,控制器需将投影面图像的四边坐标剔除,以防止投影面图像的四边坐标对轮廓检测产生影响。
在一些实施例中,障碍物对应的轮廓层级可以用轮廓参数表示。例如,轮廓参数包括后一个轮廓、前一个轮廓、子轮廓、父轮廓的索引编号。如果障碍物的轮廓参数中没有对应的索引编号,则将索引编号赋值为负数(如用-1表示)。
下面示例性对轮廓参数进行解释。
如果轮廓A包含轮廓B、轮廓C、轮廓D,则轮廓A即为父轮廓;轮廓B、轮廓C、轮廓D均为轮廓A的子轮廓。如果轮廓C的轮廓位置在轮廓B的顶部位置,则轮廓C为轮廓B的前一轮廓,同理,轮廓B为轮廓C的后一轮廓。
图29示出了本申请实施例中障碍物集合以及轮廓层级的示意图。参见图29,示例性的,障碍物集合包括轮廓1、轮廓2、轮廓2a、轮廓3和轮廓4五个闭合轮廓。其中,轮廓1、轮廓2是最外层轮廓,即为同一等级关系,设为0级。轮廓2a是轮廓2的子轮廓,即轮廓2a为一个等级,设为1级。轮廓3和轮廓4是轮廓2a的子轮廓,即轮廓3、轮廓4处于一个等级,设为2级。因此,对于轮廓2的轮廓参数表征为[-1,1,2a,-1]。
在一些实施例中,控制器根据轮廓层级对障碍物集合进行筛选,可以得到障碍物集;其中,障碍物集包括至少一个轮廓层级为最外层的障碍物。也就是说,如果多个障碍物之间的轮廓关系存在外包或内嵌关系,只需提取出最外层轮廓对应的障碍物即可。目的是在实现避障功能的过程中,如果规避了最外层轮廓对应的障碍物,即使存在相对于最外层轮廓对应内嵌轮廓的障碍物也会同样被规避。示例性的,继续参见图29,从轮廓1、轮廓2、轮廓2a、轮廓3和轮廓4五个闭合轮廓中筛选等级为0的轮廓,即为最外层轮廓。进而,根据最外层轮廓生成障碍物集。其中,障碍物集包括轮廓1和轮廓2。
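The contour levels described above map directly onto OpenCV's hierarchy output, where each row is [next, previous, first_child, parent]; a sketch of extracting only the outermost (level-0) contours:

    import cv2

    def outermost_obstacle_contours(mask):
        # RETR_TREE returns the full nesting tree; parent == -1 marks an
        # outermost contour, matching the level-0 contours in the example.
        contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        if hierarchy is None:
            return []
        return [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]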
在一些实施例中,参见图30,控制器可以根据投影图像的图像面积对障碍物集进行更新,以根据更新后的障碍物集确定非障碍物区域。其中,确定非障碍物区域具体包括,S3001:控制器在根据投影图像的图像面积更新障碍物集时,可以获取障碍物集中每个障碍物对应的中心坐标、宽度和高度。根据中心坐标、宽度和高度计算得到障碍物对应的障碍物面积;S3002:如果障碍物面积小于预设的面积阈值,则删除障碍物集中的该障碍物面积小于预设的面积阈值的障碍物;S3003:控制器根据更新后的障碍物集,更新障碍物轮廓坐标集,更新后的障碍物轮廓坐标集中包括更新后的障碍物集中的各障碍物对应的轮廓坐标。
示例性的,障碍物集包括轮廓1和轮廓2。根据轮廓1和轮廓2对应的中心坐标、宽度和高度计算得到轮廓1对应的轮廓1区域面积以及轮廓2对应的轮廓2区域面积。例如:轮廓1区域面积占用5个像素点,轮廓2区域面积占用30个像素点,面积阈值为25个像素点。可见,轮廓1对应的区域面积小于面积阈值。此时将障碍物集中的轮廓1删除,以完成对障碍物集的更新。
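A sketch of that area filter, computed from each contour's bounding width and height; the 25-pixel threshold mirrors the example value above:

    import cv2

    def filter_small(contours, min_area_px=25):
        # Drop contours whose bounding area falls below the preset threshold,
        # matching the obstacle-set update step described above.
        kept = []
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h >= min_area_px:
                kept.append(c)
        return kept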
在一些实施例中,控制器在对投影图像执行障碍物的轮廓检测时,可以将投影图像进行灰度处理,得到灰度图像,利用边缘检测算法提取灰度图像中的边缘坐标,并对边缘坐标执行去除噪声处理,得到去除噪声后的边缘坐标。利用阈值二值化算法分割去除噪声后的图像,即基于灰度图像中颜色值大于所述颜色阈值的像素点,生成前景图像,基于灰度图像中颜色值小于或等于颜色阈值的像素点,生成后景图像,其中,像素点的颜色值是用于表征像素点特征的综合属性,像素点的颜色值基于像素点的RGB值、亮度、灰度等计算得到,障碍物对应的图像分布在前景图像上,后景图像是投影图像的背景画面,因此,可以根据前景图像执行所述障碍物目标和所述光斑目标的轮廓检测。
在一些实施例中,控制器在控制执行去除噪声处理的过程中,首先对边缘坐标进行膨胀算法操作。即依次读取边缘坐标中的像素点坐标并设置结构元素和卷积核阈值,其中,结构元素例如为3×3的卷积核。将全部像素点坐标与卷积核进行卷积计算,得到第一卷积结果。如果第一卷积结果大于卷积阈值,则将该像素点设置为1,反之为0。这样,在使用该卷积核依次遍历图像中的像素点时,若卷积核覆盖区域中出现数值为1的像素,即将边缘坐标中对应的卷积核原点位置的像素点赋值为1,反之为0。因此,通过膨胀算法可以使纤细的图像边缘部分完成闭合。
需要说明的是,结构元素可以是3×3、5×5等不同尺寸比例的结构图。本申请仅以3×3的结构图,以及将像素点赋值为0或1作为示例。可根据具体的计算逻辑和算法参数自行设置结构元素以及对像素点赋值。
控制器可以控制对膨胀后的图像进行腐蚀算法操作。具体为:将膨胀后的像素点坐标与卷积核进行卷积计算,得到第二卷积结果。当第二卷积结果中的像素点均为1时,则令膨胀后的像素点为1,反之为0。进而完成去除膨胀后像素点坐标中的噪声污点。同时,可以在平滑较大物体边界的同时,不明显改变较大物体的面积。
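This dilation-then-erosion pair amounts to a morphological closing; a compact sketch with the 3x3 structuring element assumed above:

    import cv2
    import numpy as np

    def denoise_edges(edges):
        # Dilation closes thin gaps in the edge map; the following erosion
        # removes isolated noise specks while roughly preserving the area of
        # larger objects -- i.e. a morphological closing with a 3x3 element.
        kernel = np.ones((3, 3), np.uint8)
        closed = cv2.dilate(edges, kernel, iterations=1)
        return cv2.erode(closed, kernel, iterations=1)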
在一些实施例中,参见图31,由于环境光的影响,投影面中会形成光斑,控制器在对投影图像执行障碍物的轮廓检测的过程中会出现将光斑误检测为障碍物的情况,因此,生成的障碍物集不但包括位于光机和投影面之间的,会对投影面造成遮挡的障碍物,还包括投影面中的光斑,对应的障碍物轮廓坐标集不但包括障碍物轮廓坐标,还包括光斑轮廓坐标集,从而造成投影区域面积过小的问题,因此,为了避免投影面中的光斑对障碍物轮廓检测的影响,可以根据障碍物轮廓坐标集和光斑轮廓坐标集,获取光斑目标相对于障碍物目标的重合度;并基于重合度大于预设的重合阈值,删除障碍物轮廓坐标集中的障碍物目标对应的障碍物轮廓坐标,以根据更新后的障碍物轮廓坐标确定投影区域。
在一些实施例中,光斑包括明斑,明斑是由于光线的折射在投影面中形成的,呈现为发光样式,光斑轮廓坐标集包括明斑轮廓坐标,由于明斑的亮度通常大于一定的数值,因此,控制器可以基于前景图像中各像素点的颜色值识别明斑。
控制器在对投影图像执行光斑目标的轮廓检测时,可以获取转换为灰度图像的前景图像,遍历前景图像中每一个像素点的颜色值,将各像素点的颜色值与预设的亮度阈值对比,基于前景图像中颜色值大于预设的亮度阈值的像素点,获取明斑图像,对明斑图像执行去噪声处理,得到去除噪声后的明斑图像,利用轮廓检测算法检测去除噪声后的明斑图像,可以得到具有最高轮廓层级的明斑图像中的明斑轮廓坐标。
其中,控制器对明斑图像执行去噪声处理的过程可以参照前述控制器在执行障碍物的轮廓检测时对障碍物的轮廓坐标执行去噪声处理的过程,控制器对明斑图像执行最高层级的轮廓对应的轮廓检测的过程可以参照前述对障碍物执行最高层级的轮廓对应的轮廓检测的过程,在此不做赘述。
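A hedged sketch of the bright-spot path: threshold the grayscale foreground, denoise with a small morphological opening standing in for the dilation/erosion pass above, then run contour detection; the brightness threshold of 220 is an example value:

    import cv2
    import numpy as np

    def bright_spot_contours(foreground_gray, brightness_thresh=220):
        # Pixels brighter than the threshold form the bright-spot image.
        _, spots = cv2.threshold(foreground_gray, brightness_thresh, 255, cv2.THRESH_BINARY)
        kernel = np.ones((3, 3), np.uint8)
        spots = cv2.morphologyEx(spots, cv2.MORPH_OPEN, kernel)  # remove noise specks
        contours, _ = cv2.findContours(spots, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return contours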
在一些实施例中,光斑还包括暗斑,暗斑是由于光线被遮挡在投影面中形成的,呈现为阴影样式,光斑轮廓坐标集包括暗斑轮廓坐标,控制器可以对投影图像执行光斑目标的轮廓检测,获取投影图像中暗斑对应的暗斑轮廓坐标。
控制器在对投影图像执行光斑目标的轮廓检测时,可以获取由投影图像转换到HSV色彩空间的HSV投影图像。HSV投影图像中每一个像素点均对应有亮度参数V、色调参数H和饱和度参数S,控制器可以利用Otsu算法(最大类间方差算法)或迭代法,基于HSV投影图像中每一个像素点的亮度参数V、色调参数H和饱和度参数S计算HSV投影图像的阴影阈值,利用差异值算法,根据所述HSV投影图像中像素点的亮度参数、色调参数以及饱和度参数计算各像素点的差异值分量M,其中,M=(S-V)/(S+V+H)。
控制器遍历每一个像素点,获取差异值分量M大于阴影阈值的像素点,利用形态学闭运算和差异值分量M大于阴影阈值的像素点可以得到暗斑图像,对暗斑图像执行去噪声处理,得到去除噪声后的暗斑图像,利用轮廓检测算法检测去除噪声后的暗斑图像,可以得到具有最高轮廓层级的暗斑图像中的暗斑轮廓坐标。
其中,控制器对暗斑图像执行去噪声处理的过程可以参照前述控制器在执行障碍物的轮廓检测时对障碍物的轮廓坐标执行去噪声处理的过程,控制器对暗斑图像执行最高层级的轮廓对应的轮廓检测的过程可以参照前述对障碍物执行最高层级的轮廓对应的轮廓检测的过程,在此不做赘述。
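A sketch of the dark-spot path with the difference component M = (S - V) / (S + V + H) and an Otsu threshold, per the description; normalizing M to 8-bit before thresholding is an implementation assumption:

    import cv2
    import numpy as np

    def dark_spot_mask(projection_bgr):
        # Compute M per pixel, then keep pixels whose M exceeds the shadow
        # threshold chosen by Otsu (maximum inter-class variance).
        hsv = cv2.cvtColor(projection_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        h, s, v = cv2.split(hsv)
        m = (s - v) / (s + v + h + 1e-6)
        m8 = cv2.normalize(m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        shadow_thresh, mask = cv2.threshold(m8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask  # pixels with M above the shadow threshold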
在一些实施例中,参见图32,为本申请实施例中更新障碍物轮廓坐标集的流程示意图。控制器在获取明斑轮廓坐标和暗斑轮廓坐标后,可以调取障碍物轮廓坐标集并执行如下步骤:
S3201:根据障碍物轮廓坐标集,获取障碍物目标;
S3202:根据明斑轮廓坐标集,获取明斑目标;
S3203:计算明斑目标相对于障碍物目标的第一重合度;
S3204:如果第一重合度大于预设的重合阈值,将与明斑目标之间的第一重合度大于预设的重合阈值的障碍物目标对应的障碍物轮廓坐标从障碍物轮廓坐标集中删除;
S3205:根据暗斑轮廓坐标集,获取暗斑目标;
S3206:计算暗斑目标相对于障碍物目标的第二重合度;
S3207:如果第二重合度大于预设的重合阈值,将与暗斑目标之间的第二重合度大于预设的重合阈值的障碍物目标对应的障碍物轮廓坐标从障碍物轮廓坐标集中删除,以完成对障碍物轮廓坐标集的更新。
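One plausible reading of that coincidence test, sketched with filled contour masks; treating coincidence as overlap area over obstacle area, and 0.8 as the threshold, are assumptions:

    import cv2
    import numpy as np

    def coincidence(spot_contour, obstacle_contour, shape):
        # Coincidence of a spot w.r.t. an obstacle: overlapping pixel count
        # divided by the obstacle's pixel count.
        spot_mask = np.zeros(shape, np.uint8)
        obs_mask = np.zeros(shape, np.uint8)
        cv2.drawContours(spot_mask, [spot_contour], -1, 255, -1)
        cv2.drawContours(obs_mask, [obstacle_contour], -1, 255, -1)
        inter = cv2.countNonZero(cv2.bitwise_and(spot_mask, obs_mask))
        return inter / max(cv2.countNonZero(obs_mask), 1)

    def prune_obstacles(obstacles, spots, shape, overlap_thresh=0.8):
        # Drop obstacles whose coincidence with any spot exceeds the threshold.
        return [o for o in obstacles
                if all(coincidence(s, o, shape) <= overlap_thresh for s in spots)]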
在一些实施例中,控制器可以基于更新后的障碍物轮廓坐标集,确定投影图像中的非障碍物区域。其中,非障碍物区域为投影图像中除障碍物对应的区域之外的区域。在一些可实现方式中,控制器获取障碍物轮廓坐标集中每个障碍物对应的障碍物轮廓坐标以及投影图像对应的图像坐标集。在图像坐标集中移除障碍物轮廓坐标,以根据移除障碍物轮廓坐标后的图像坐标集确定非障碍物区域,通常,非障碍物区域为多边形区域。
在一些实施例中,控制器在非障碍物区域中可以提取预投射区域,预投射区域为在非障碍物区域内的矩形区域,控制器根据提取的预投射区域和相机的拍摄参数计算投影面中的投影区域,以及控制光机将播放内容投射至所述投影区域。
图33示出了本申请实施例中矩形网格和非障碍物区域的示意图。参见图33,控制器可以获取投影图像的角点坐标,其中,角点坐标为投影图像四个顶点和/或四边中点对应的坐标。基于角点坐标构造矩形网格,矩形网格包括M×N个网格。接着,遍历全部网格,判断每个网格和非障碍物区域的包含关系。如果网格位于非障碍物区域中,则将该网格的网格标识赋值为1。如果网格不位于非障碍物区域中,则将该网格的网格标识赋值为0。
控制器可以在矩形网格中查找由网格标识为1的网格构成的矩形区域,并将矩形区域确定为预投射区域。进而,根据相机700的拍摄参数将投影图像中的预投射区域转换至投影面中的投影区域,并控制光机220将播放内容投射至投影区域中,实现自动避障功能。
为了使用户能看到更多的播放内容从而提高用户的使用体验,控制器在矩形网格中查找由网格标识为1的网格构成的矩形区域过程中,应查找由网格标识为1的网格构成的最大矩形区域,即获取非障碍物区域中最大的矩形区域。在一些可实现方式中,遍历全部由网格标识为1的网格构成的矩形区域,得到每个矩形区域的像素点个数。提取像素点个数最多的矩形区域,基于具有最多像素点个数的矩形区域的边界坐标,确定预投射区域。
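Finding that largest all-ones rectangle is the classic maximal-rectangle problem; a self-contained sketch over the M x N grid flags using the row-by-row histogram method:

    import numpy as np

    def largest_ones_rectangle(grid):
        # grid: 2-D array of 0/1 grid flags. Returns the cell count and
        # (top, left, bottom, right) of the largest rectangle of 1-cells.
        rows, cols = grid.shape
        heights = np.zeros(cols, dtype=int)
        best_area, best_box = 0, None
        for r in range(rows):
            heights = np.where(grid[r] == 1, heights + 1, 0)
            stack = []  # column indices with increasing histogram heights
            for c in range(cols + 1):
                h = heights[c] if c < cols else 0
                while stack and heights[stack[-1]] >= h:
                    top_h = heights[stack.pop()]
                    left = stack[-1] + 1 if stack else 0
                    area = top_h * (c - left)
                    if area > best_area:
                        best_area = area
                        best_box = (r - top_h + 1, left, r, c - 1)
                stack.append(c)
        return best_area, best_box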
在一些实施例中,为了避免出现预投射区域的区域面积过小影响用户的观看体验,控制器在获取非障碍物区域中的矩形区域后,可以通过计算矩形区域的区域面积与投影图像的图像面积的面积比值,并设定面积阈值。如果面积比值大于面积阈值,说明矩形区域的区域面积满足区域面积条件,则将矩形区域确定为预投射区域。
需要说明的是,为了确定非障碍物区域是否符合用户实际环境和用户视觉机制,控制器在确定预投射区域时,如果查找的最大的矩形区域数量为多个,则在多个最大的矩形区域中提取以投影图像的中心点为扩展基线的矩形区域,以根据提取的矩形区域计算面积比值。
在一些实施例中,如果面积比值小于面积阈值,即非障碍物区域中最大的矩形区域的区域面积相对于投影图像的图像面积较小。控制器则执行更新非障碍物区域的过程。并在更新后的非障碍物区域中再次提取预投射区域,以根据预投射区域确定投影面中的投影区域。
在一些实施例中,为了提高投影图像的成像质量,控制器可以通过调整投影画面的亮度优化投影图像的画面质量,使得调整后的投影画面的亮度分布更为均匀。
在一些实施例中,参见图34,控制器可以执行如下步骤以调整投影画面:
S3401:控制器可以获取由投影画面转换到HSV色彩空间的HSV投影图像,其中,HSV投影图像中每一个像素点均对应有亮度参数V、色调参数H和饱和度参数S;
S3402:控制器可以对HSV投影图像中各像素点的亮度参数执行高斯函数卷积处理,得到各像素点对应的亮度分量;
S3403:控制器可以对亮度分量执行伽马函数处理,得到目标亮度参数;
S3404:控制器基于目标亮度参数、色调参数H和饱和度参数S,重组HSV投影图像,以调整HSV投影图像的亮度。
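A compact sketch of steps S3401-S3404, treating the Gaussian sigma and gamma as tunable example values:

    import cv2
    import numpy as np

    def even_out_brightness(frame_bgr, sigma=25, gamma=0.8):
        # Gaussian-convolve the V channel to get a brightness component,
        # apply a gamma curve to obtain the target brightness, then
        # recompose HSV so the projected picture's brightness is more even.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        base = cv2.GaussianBlur(v, (0, 0), sigma)
        target = np.clip(255.0 * (base / 255.0) ** gamma, 0, 255).astype(np.uint8)
        return cv2.cvtColor(cv2.merge([h, s, target]), cv2.COLOR_HSV2BGR)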
在一些实施例中,控制器可以获取由投影画面转换到灰度空间的灰度图像,并基于灰度图像中每一个像素点的亮度值,计算灰度图像的亮度平均值。以及,控制器可以控制将灰度图像分割为预设数量的图像区域,基于各图像区域中每一个像素点的亮度值,计算各图像区域的亮度平均值。控制器基于灰度图像的亮度平均值与图像区域的亮度平均值的差值,调整图像区域中每一个像素点的亮度值,其中,所述图像区域中每一个像素点的亮度值的调整幅度可以相同,也可以不同,以使调整后的各图像区域中像素点的亮度值对应的平均值与灰度图像的亮度平均值相等。
上述控制器对投影画面的亮度进行优化的方法仅为本申请示例性的提出的,本申请不对调整投影画面的亮度的方法做具体限制,在其他实施例中,控制器还可以利用自适应局部直方图均衡化算法直接对投影画面执行优化处理,使得优化后的投影画面的亮度分布更为均匀,提升画面质量。
在一些实施例中,本申请提出一种避障投影方法,应用于投影设备,投影设备包括光机、相机以及控制器,避障投影方法包括:
响应于用户输入的投影指令,获取相机拍摄的投影面中投影图像;基于颜色参数对投影图像执行障碍物目标和光斑目标的轮廓检测,得到障碍物轮廓坐标集和光斑轮廓坐标集,颜色参数包括亮度参数、色调参数以及饱和度参数;根据障碍物轮廓坐标集和光斑轮廓坐标集,获取光斑目标相对于障碍物目标 的重合度;如果重合度大于预设重合度阈值,删除障碍物轮廓坐标集中的障碍物目标对应的障碍物轮廓坐标;基于删除后的障碍物轮廓坐标集,确定非障碍物区域,以及根据非障碍物区域控制光机将播放内容投射至投影区域。
在一些实施例中,基于颜色参数对投影图像执行障碍物目标和光斑目标的轮廓检测步骤前,可以对投影图像进行灰度处理,得到灰度图像;利用边缘检测算法提取灰度图像中的边缘坐标;对边缘坐标执行去除噪声处理,得到去除噪声后的边缘坐标;基于边缘坐标位置上的像素点的颜色值,计算颜色阈值;基于灰度图像中颜色值大于颜色阈值的像素点,生成前景图像,以根据前景图像执行障碍物目标和光斑目标的轮廓检测。
在一些实施例中,利用边缘检测算法提取灰度图像中的边缘坐标之前,可以基于灰度图像中像素点的亮度值,计算灰度图像的亮度平均值;将灰度图像分割为预设数量的图像区域,基于图像区域中每一个像素点的亮度值,计算图像区域的亮度平均值;基于灰度图像的亮度平均值与图像区域的亮度平均值的差值,调整图像区域中每一个像素点的亮度值。
在一些实施例中,得到灰度图像包括:将投影图像转换到HSV色彩空间,得到HSV投影图像;对HSV投影图像的亮度参数执行高斯函数卷积处理,得到亮度分量;对亮度分量执行伽马函数处理,得到目标亮度参数;基于目标亮度参数,调整HSV投影图像的亮度;对调整后的HSV投影图像进行灰度处理,得到灰度图像。
在一些实施例中,得到障碍物轮廓坐标集包括:根据障碍物轮廓坐标集获取障碍物集,障碍物集包括至少一个轮廓层级为最外层的障碍物,轮廓层级用于表征障碍物之间的外包或内嵌关系;获取障碍物集中的障碍物的中心坐标、宽度和高度;根据中心坐标、宽度和高度计算障碍物对应的障碍物面积;
如果所述障碍物面积小于预设的面积阈值,则删除所述障碍物集中的所述障碍物;
根据更新后的所述障碍物集,更新所述障碍物轮廓坐标集。
在一些实施例中,光斑目标包括明斑,光斑轮廓坐标集包括明斑轮廓坐标,基于颜色参数对投影图像执行障碍物目标和光斑目标的轮廓检测,包括:基于前景图像中颜色值大于预设的亮度阈值的像素点,获取明斑图像;对明斑图像执行去噪声处理,得到去除噪声后的明斑图像;利用轮廓检测算法检测去除噪声后的明斑图像,得到明斑图像中的明斑轮廓坐标。
在一些实施例中,光斑目标包括暗斑,光斑轮廓坐标集包括暗斑轮廓坐标,基于颜色参数对投影图像执行障碍物目标和光斑目标的轮廓检测,包括:将投影图像转换到HSV色彩空间,得到HSV投影图像;利用最大类间方差算法,根据HSV投影图像中像素点的亮度参数、色调参数以及饱和度参数计算HSV投影图像的阴影阈值;利用差异值算法,根据HSV投影图像中像素点的亮度参数、色调参数以及饱和度参数计算像素点的差异值分量;基于HSV投影图像中差异值分量大于阴影阈值的像素点,获取暗斑图像;利用轮廓检测算法检测暗斑图像,得到暗斑图像中的暗斑轮廓坐标。
在一些实施例中,删除障碍物轮廓坐标集中的障碍物目标对应的障碍物轮廓坐标,还包括:如果检测到明斑目标相对于障碍物目标的重合度大于预设的重合阈值,则将与明斑目标之间的重合度大于预设的重合阈值的障碍物目标对应的障碍物轮廓坐标从障碍物轮廓坐标集中删除;以及,如果检测到暗斑目标相对于障碍物目标的重合度大于预设的重合阈值,则将与暗斑目标之间的重合度大于预设的重合阈值的障碍物目标对应的障碍物轮廓坐标从障碍物轮廓坐标集中删除。
在一些实施例中,根据非障碍物区域控制光机将播放内容投射至投影区域步骤中,包括:获取非障碍物区域中的矩形区域以及矩形区域中的像素点个数;基于具有最大像素点个数的矩形区域的边界坐标,确定预投射区域;根据预投射区域和相机的拍摄参数计算投影面中的投影区域,以及控制光机将播放内容投射至投影区域。

Claims (28)

  1. 一种投影设备,包括:
    镜头;
    光机,被配置为投射投影内容至投影面;
    距离传感器,被配置为检测投影面与光机之间的距离检测值;
    图像采集装置,被配置为拍摄所述投影内容对应的投影图像;
    控制器,被配置为:
    响应于投影设备的开机指令,启动所述投影设备;
    根据所述镜头、所述距离传感器和所述图像采集装置在第一平面上的位置关系,以及所述距离传感器的距离检测值,检测投影设备与所述投影面之间是否存在遮挡物;其中,所述第一平面为投影时投影设备上与所述投影面平行的平面;
    若检测到所述投影设备与所述投影面之间存在遮挡物,控制发出用于提示移除遮挡物的提示信息。
  2. 根据权利要求1所述的投影设备,所述图像采集装置包括第一相机和第二相机,所述第一相机和所述镜头设置于所述第一平面的第一位置,所述第二相机与所述距离传感器设置于所述第一平面的第二位置;所述控制器配置为按照如下方式检测投影设备与所述投影面之间是否存在遮挡物:
    若所述距离检测值不大于距离阈值,则确定所述投影设备与所述投影面之间存在遮挡物;其中,所述距离阈值是基于所述镜头与遮挡物之间的安全距离而设定;
    若所述距离检测值大于距离阈值,控制所述第一相机采集第一图像以及控制第二相机采集第二图像;
    根据所述第一图像和所述第二图像的相似度,检测投影设备与所述投影面之间是否存在遮挡物。
  3. 根据权利要求2所述的投影设备,所述控制器配置为按照如下方式检测投影设备与所述投影面之间是否存在遮挡物:
    计算所述第一图像和所述第二图像的相似度;
    若所述相似度小于相似度阈值,则确定所述投影设备与所述投影面之间存在遮挡物;
    若所述相似度不小于所述相似度阈值,则确定所述投影设备与所述投影面之间不存在遮挡物。
  4. 根据权利要求1所述的投影设备,所述镜头、所述距离传感器和所述图像采集装置设置于所述第一平面上的同一侧,所述控制器配置为按照如下方式检测投影设备与所述投影面之间是否存在遮挡物:
    若所述距离检测值小于投影面与光机之间的间隔距离,确定所述投影设备与所述投影面之间存在遮挡物;
    若所述距离检测值等于投影面与光机之间的间隔距离,确定所述投影设备与所述投影面之间不存在遮挡物。
  5. 根据权利要求1所述的投影设备,所述控制器还配置为:
    若检测到所述投影设备与所述投影面之间存在遮挡物,控制所述光机停止投射投影内容至投影面。
  6. 根据权利要求5所述的投影设备,所述控制器还配置为:
    在控制所述光机停止投射投影内容至投影面之后,每间隔预设时长,重新检测所述投影设备与所述投影面之间是否存在遮挡物;
    若检测到所述投影设备与所述投影面之间不存在遮挡物,控制所述光机投射投影内容至投影面。
  7. 根据权利要求1所述的投影设备,所述控制器,还被配置为:
    基于图像采集装置获取的投影图像,利用边缘检测算法识别投影设备的投影区域;在投影区域显示为矩形、或类矩形时,控制器通过预设算法获取上述矩形投影区域四个顶点的坐标值。
  8. 根据权利要求7所述的投影设备,所述控制器还被配置为:
    使用透视变换方法校正投影区域为矩形,计算矩形和投影截图的差值,以实现判断显示区域内是否有异物。
  9. 根据权利要求7所述的投影设备,所述控制器还被配置为:
    在实现对投影范围外一定区域的异物检测时,可将当前帧的图像采集装置内容、和上一帧的图像采集装置内容做差值,以判断投影范围外区域是否有异物进入;若判断有异物进入,投影设备自动触发防射眼功能。
  10. 根据权利要求7所述的投影设备,所述控制器还被配置为:
    利用飞行时间相机、或飞行时间传感器检测特定区域的实时深度变化;若深度值变化超过预设阈值,投影设备将自动触发防射眼功能。
  11. 根据权利要求7所述的投影设备,所述控制器还被配置为:基于采集的飞行时间数据、截图数据、以及图像采集装置数据分析判断是否需要开启防射眼功能。
  12. 根据权利要求7所述的投影设备,所述控制器还被配置为:
    若检测到特定对象位于预定区域内,将自动启动防射眼功能,以降低光机发出激光强度、降低用户界面显示亮度、并显示安全提示信息。
  13. 根据权利要求7所述的投影设备,所述控制器还被配置为:
    通过陀螺仪、或陀螺仪传感器对设备移动进行监测;向陀螺仪发出用于查询设备状态的信令,并接收陀螺仪反馈用于判定设备是否发生移动的信令。
  14. 根据权利要求7所述的投影设备,所述控制器还被配置为:
    在陀螺仪数据稳定预设时间长度后,控制启动触发梯形校正,在梯形校正进行时不响应遥控器按键发出的指令。
  15. 根据权利要求7所述的投影设备,所述控制器还被配置为:
    通过自动避障算法识别幕布,并利用投影变化,将投影图像校正至幕布内显示,实现与幕布边沿对齐的效果。
  16. 根据权利要求1所述的投影设备,所述控制器,还被配置为:
    响应于用户输入的投影指令,获取所述投影图像;
    基于颜色参数对所述投影图像执行障碍物目标和光斑目标的轮廓检测,得到障碍物轮廓坐标集和光斑轮廓坐标集,所述颜色参数包括亮度参数、色调参数以及饱和度参数;
    根据所述障碍物轮廓坐标集和所述光斑轮廓坐标集,获取所述光斑目标相对于所述障碍物目标的重合度;
    如果所述重合度大于预设重合度阈值,删除所述障碍物轮廓坐标集中的所述障碍物目标对应的障碍物轮廓坐标;
    基于删除后的所述障碍物轮廓坐标集,确定非障碍物区域,以及根据所述非障碍物区域控制所述光机将播放内容投射至投影区域。
  17. 根据权利要求16所述的投影设备,所述基于颜色参数对所述投影图像执行障碍物目标和光斑目标的轮廓检测步骤前,所述控制器还被配置为:
    对所述投影图像进行灰度处理,得到灰度图像;
    利用边缘检测算法提取所述灰度图像中的边缘坐标;
    对所述边缘坐标执行去除噪声处理,得到去除噪声后的所述边缘坐标;
    基于所述边缘坐标位置上的像素点的颜色值,计算颜色阈值;
    基于所述灰度图像中颜色值大于所述颜色阈值的像素点,生成前景图像,以根据所述前景图像执行所述障碍物目标和所述光斑目标的轮廓检测。
  18. 根据权利要求17所述的投影设备,所述利用边缘检测算法提取所述灰度图像中的边缘坐标步骤之前,所述控制器被配置为:
    基于所述灰度图像中像素点的亮度值,计算所述灰度图像的亮度平均值;
    将所述灰度图像分割为预设数量的图像区域,基于所述图像区域中每一个像素点的亮度值,计算所述图像区域的亮度平均值;
    基于所述灰度图像的亮度平均值与所述图像区域的亮度平均值的差值,调整所述图像区域中每一个像素点的亮度值。
  19. 根据权利要求17所述的投影设备,所述得到灰度图像步骤中,所述控制器被配置为:
    将所述投影图像转换到HSV色彩空间,得到HSV投影图像;
    对所述HSV投影图像的亮度参数执行高斯函数卷积处理,得到亮度分量;
    对所述亮度分量执行伽马函数处理,得到目标亮度参数;
    基于所述目标亮度参数,调整所述HSV投影图像的亮度;
    对调整后的HSV投影图像进行灰度处理,得到灰度图像。
  20. 根据权利要求17所述的投影设备,所述得到障碍物轮廓坐标集步骤中,所述控制器还被配置为:
    根据所述障碍物轮廓坐标集获取障碍物集,所述障碍物集包括至少一个轮廓层级为最外层的障碍物,所述轮廓层级用于表征所述障碍物之间的外包或内嵌关系;
    获取所述障碍物集中的所述障碍物的中心坐标、宽度和高度;
    根据所述中心坐标、宽度和高度计算所述障碍物对应的障碍物面积;
    如果所述障碍物面积小于预设的面积阈值,则删除所述障碍物集中的所述障碍物;
    根据更新后的所述障碍物集,更新所述障碍物轮廓坐标集。
  21. 根据权利要求20所述的投影设备,所述光斑目标包括明斑,所述光斑轮廓坐标集包括明斑轮廓坐标,所述基于颜色参数对所述投影图像执行障碍物目标和光斑目标的轮廓检测步骤中,所述控制器还被配置为:
    基于所述前景图像中颜色值大于预设的亮度阈值的像素点,获取明斑图像;
    对所述明斑图像执行去噪声处理,得到去除噪声后的所述明斑图像;
    利用轮廓检测算法检测所述去除噪声后的所述明斑图像,得到所述明斑图像中的所述明斑轮廓坐标。
  22. 根据权利要求20所述的投影设备,所述光斑目标包括暗斑,所述光斑轮廓坐标集包括暗斑轮廓坐标,所述基于颜色参数对所述投影图像执行障碍物目标和光斑目标的轮廓检测步骤中,所述控制器还被配置为:
    将所述投影图像转换到HSV色彩空间,得到HSV投影图像;
    利用最大类间方差算法,根据所述HSV投影图像中像素点的亮度参数、色调参数以及饱和度参数计算所述HSV投影图像的阴影阈值;
    利用差异值算法,根据所述HSV投影图像中像素点的亮度参数、色调参数以及饱和度参数计算所述像素点的差异值分量;
    基于所述HSV投影图像中差异值分量大于所述阴影阈值的像素点,获取暗斑图像;
    利用轮廓检测算法检测所述暗斑图像,得到所述暗斑图像中的所述暗斑轮廓坐标。
  23. 根据权利要求21或22所述的投影设备,所述删除所述障碍物轮廓坐标集中的所述障碍物目标对应的障碍物轮廓坐标步骤中,所述控制器被配置为:
    如果检测到所述明斑目标相对于所述障碍物目标的重合度大于预设的重合阈值,则将与所述明斑目标之间的重合度大于预设的重合阈值的所述障碍物目标对应的障碍物轮廓坐标从所述障碍物轮廓坐标集中删除;以及,
    如果检测到所述暗斑目标相对于所述障碍物目标的重合度大于预设的重合阈值,则将与所述暗斑目标之间的重合度大于预设的重合阈值的所述障碍物目标对应的障碍物轮廓坐标从所述障碍物轮廓坐标集中删除。
  24. 根据权利要求16所述的投影设备,所述根据所述非障碍物区域控制所述光机将播放内容投射至投影区域步骤中,所述控制器还被配置为:
    获取所述非障碍物区域中的矩形区域以及所述矩形区域中的像素点个数;
    基于具有最大像素点个数的矩形区域的边界坐标,确定所述预投射区域;
    根据所述预投射区域和所述相机的拍摄参数计算所述投影面中的投影区域,以及控制所述光机将播放内容投射至所述投影区域。
  25. 一种用于投影设备的避障投影方法,所述投影设备包括光机、镜头、距离传感器、图像采集装置和控制器,所述方法包括:
    响应于投影设备的开机指令,启动所述投影设备;
    根据所述镜头、所述距离传感器和所述图像采集装置在第一平面上的位置关系,以及所述距离传感器的距离检测值,检测投影设备与投影面之间是否存在遮挡物;其中,所述第一平面为投影时投影设备上与所述投影面平行的平面,所述投影面用于接收并显示所述光机投射的投影内容;
    若检测到所述投影设备与所述投影面之间存在遮挡物,控制发出用于提示移除遮挡物的提示信息。
  26. 根据权利要求25所述的方法,所述图像采集装置包括第一相机和第二相机,所述第一相机和所述镜头设置于所述第一平面的第一位置,所述第二相机与所述距离传感器设置于所述第一平面的第二位置;所述检测投影设备与所述投影面之间是否存在遮挡物,包括:
    若所述距离检测值不大于距离阈值,则确定所述投影设备与所述投影面之间存在遮挡物;其中,所述距离阈值是基于所述镜头与遮挡物之间的安全距离而设定;
    若所述距离检测值大于距离阈值,控制所述第一相机采集第一图像以及控制第二相机采集第二图像;
    根据所述第一图像和所述第二图像的相似度,检测投影设备与所述投影面之间是否存在遮挡物。
  27. 根据权利要求26所述的方法,所述检测投影设备与所述投影面之间是否存在遮挡物,包括:
    计算所述第一图像和所述第二图像的相似度;
    若所述相似度小于相似度阈值,则确定所述投影设备与所述投影面之间存在遮挡物;
    若所述相似度不小于所述相似度阈值,则确定所述投影设备与所述投影面之间不存在遮挡物。
  28. 根据权利要求25所述的方法,所述镜头、所述距离传感器和所述图像采集装置设置于所述第一平面上的同一侧,所述检测投影设备与所述投影面之间是否存在遮挡物,包括:
    若所述距离检测值小于投影面与光机之间的间隔距离,确定所述投影设备与所述投影面之间存在遮挡物;
    若所述距离检测值等于投影面与光机之间的间隔距离,确定所述投影设备与所述投影面之间不存在遮挡物。
PCT/CN2022/132249 2021-11-16 2022-11-16 投影设备及避障投影方法 WO2023088303A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280063168.XA CN118020288A (zh) 2021-11-16 2022-11-16 投影设备及避障投影方法
US18/666,806 US20240305754A1 (en) 2021-11-16 2024-05-16 Projection device and obstacle avoidance projection method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202111355866 2021-11-16
CN202111355866.0 2021-11-16
CN202210590075.4A CN114885141A (zh) 2022-05-26 2022-05-26 一种投影检测方法及投影设备
CN202210590075.4 2022-05-26
CN202210600617.1A CN115002432B (zh) 2022-05-30 2022-05-30 一种投影设备及避障投影方法
CN202210600617.1 2022-05-30

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/666,806 Continuation US20240305754A1 (en) 2021-11-16 2024-05-16 Projection device and obstacle avoidance projection method

Publications (1)

Publication Number Publication Date
WO2023088303A1 true WO2023088303A1 (zh) 2023-05-25

Family

ID=86396250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/132249 WO2023088303A1 (zh) 2021-11-16 2022-11-16 投影设备及避障投影方法

Country Status (3)

Country Link
US (1) US20240305754A1 (zh)
CN (1) CN118020288A (zh)
WO (1) WO2023088303A1 (zh)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009276592A (ja) * 2008-05-15 2009-11-26 Seiko Epson Corp プロジェクタ、および、プロジェクタの制御方法
US20100177929A1 (en) * 2009-01-12 2010-07-15 Kurtz Andrew F Enhanced safety during laser projection
JP2014163954A (ja) * 2013-02-21 2014-09-08 Seiko Epson Corp プロジェクター、およびプロジェクターの制御方法
CN105959659A (zh) * 2016-04-28 2016-09-21 乐视控股(北京)有限公司 一种实现投影仪自适应调整的方法及投影仪
JP2018004951A (ja) * 2016-07-01 2018-01-11 株式会社リコー 画像投写装置
JP2019120884A (ja) * 2018-01-10 2019-07-22 凸版印刷株式会社 投影画像制御装置、投影画像制御システム、及び投影画像制御方法
US20190302594A1 (en) * 2018-03-27 2019-10-03 Seiko Epson Corporation Display device and method for controlling display device
WO2020250739A1 (ja) * 2019-06-14 2020-12-17 富士フイルム株式会社 投影制御装置、投影装置、投影制御方法、及び投影制御プログラム
CN114885141A (zh) * 2022-05-26 2022-08-09 海信视像科技股份有限公司 一种投影检测方法及投影设备

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958707A (zh) * 2023-08-18 2023-10-27 武汉市万睿数字运营有限公司 一种基于球机监控设备的图像分类方法、装置及相关介质
CN116958707B (zh) * 2023-08-18 2024-04-23 武汉市万睿数字运营有限公司 一种基于球机监控设备的图像分类方法、装置及相关介质
CN118474318A (zh) * 2024-07-09 2024-08-09 合肥全色光显科技有限公司 投影融合方法、装置、系统及电子设备

Also Published As

Publication number Publication date
US20240305754A1 (en) 2024-09-12
CN118020288A (zh) 2024-05-10

Similar Documents

Publication Publication Date Title
CN115022606B (zh) 一种投影设备及避障投影方法
WO2023088303A1 (zh) 投影设备及避障投影方法
US5596368A (en) Camera aiming mechanism and method
US9131145B2 (en) Image pickup apparatus and control method therefor
WO2023087947A1 (zh) 一种投影设备和校正方法
CN115002432B (zh) 一种投影设备及避障投影方法
WO2024174721A1 (zh) 投影设备及调整投影画面尺寸的方法
US8199247B2 (en) Method for using flash to assist in focal length detection
CN114866751B (zh) 一种投影设备及触发校正方法
JP2004502212A (ja) 画像処理による赤目修正
CN115002433A (zh) 投影设备及roi特征区域选取方法
CN113949852A (zh) 投影方法、投影设备及存储介质
US8436934B2 (en) Method for using flash to assist in focal length detection
CN116055696A (zh) 一种投影设备及投影方法
CN115883803A (zh) 投影设备及投影画面矫正方法
WO2024055793A1 (zh) 投影设备及投影画质调整方法
WO2024066776A1 (zh) 投影设备及投影画面处理方法
WO2023087960A1 (zh) 投影设备及调焦方法
CN114885141A (zh) 一种投影检测方法及投影设备
CN114928728A (zh) 投影设备及异物检测方法
CN114885142B (zh) 一种投影设备及调节投影亮度方法
WO2023087951A1 (zh) 一种投影设备及投影图像的显示控制方法
CN115695756B (zh) 一种投影图像的几何校正方法及系统
CN114270220B (zh) 利用激光脉冲串突发和门控传感器的3d主动深度感测
CN118158367A (zh) 一种投影设备及投影画面入幕方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22894835

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202280063168.X

Country of ref document: CN