WO2023087950A1 - Projection device and display control method

Projection device and display control method

Info

Publication number
WO2023087950A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
projection
area
projection device
controller
Application number
PCT/CN2022/122810
Other languages
English (en)
French (fr)
Inventor
卢平光
何营昊
王昊
王英俊
岳国华
唐高明
陈先义
孙超
Original Assignee
海信视像科技股份有限公司
Application filed by 海信视像科技股份有限公司 (Hisense Visual Technology Co., Ltd.)
Priority to CN202280063192.3A (published as CN118104230A)
Publication of WO2023087950A1

Classifications

    • H04N9/317: Convergence or focusing systems (within H04N9/31, Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM])
    • H04N9/3179: Video signal processing therefor (within H04N9/31)
    • H04N9/3185: Geometric adjustment, e.g. keystone or convergence (within H04N9/3179)
    • G03B21/53: Means for automatic focusing, e.g. to compensate thermal effects (within G03B21/14, Details of projectors or projection-type viewers)

Definitions

  • the present application relates to the technical field of display devices, and in particular, to a projection device and a display control method.
  • Projection equipment is a device that can project images or videos onto a screen for display. It can be connected to computers, VCD, DVD, BD, game consoles, DV, radio and television signal sources, video signal sources, etc. through different interfaces to play the corresponding video signals.
  • First, the projection device captures an image of the curtain area; it then performs binarization on the acquired image so that the outlines of objects in the image are displayed more clearly; finally, based on the binarized image, the projection device extracts all closed contours contained in it and identifies the closed contour with the largest area and a consistent internal color as the projection area of the curtain. However, when there is a large solid-colored wall around the screen to be projected and the edges of the wall form a closed outline, the projection device may recognize the wall as the screen, causing the playback content to be projected onto the wall instead of the intended screen.
  • the present application provides a projection device, including: a projection component configured to project playback content onto a screen corresponding to the projection device; a camera configured to acquire images; and a controller configured to: binarize a first image acquired by the camera, based on a brightness analysis of the grayscale image of the first image, to obtain a second image; and determine the first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour;
  • when it is determined that the second-level closed contour is a convex quadrilateral, the projection component is controlled to project the playback content onto the second-level closed contour, which corresponds to the projection area of the curtain; wherein the curtain contains a curtain edge band corresponding to the first-level closed contour, and the projection area is surrounded by the curtain edge band.
  • the present application also provides a projection display control method for a projection device, the method comprising: binarizing a first image, based on a brightness analysis of the grayscale image of the first image acquired by the camera of the projection device, to obtain a second image, the first image being an environment image, and the projection device including a projection component for projecting content onto a screen corresponding to the projection device; determining the first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour; and, when it is determined that the second-level closed contour is a convex quadrilateral, projecting the playback content onto the second-level closed contour, the second-level closed contour corresponding to the projection area; wherein the screen comprises a screen edge band corresponding to the first-level closed contour, the projection area being surrounded by the screen edge band.
  • FIG. 1A is a schematic diagram of placement of projection equipment according to an embodiment of the present application.
  • FIG. 1B is a schematic diagram of an optical path of a projection device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a circuit structure of a projection device according to an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of a projection device according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a projection device according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of a circuit structure of a projection device according to an embodiment of the present application.
  • FIG. 6A is a schematic diagram of a screen corresponding to a projection device according to an embodiment of the present application.
  • FIG. 6B is a schematic diagram of the first image of the environment where the projection device is located according to another embodiment of the present application.
  • FIG. 6C is a schematic diagram of the first image and its corresponding grayscale image in another embodiment of the present application.
  • FIG. 6D is a schematic diagram of a second image after binarization of the environment image where the projection device is located according to an embodiment of the present application;
  • FIG. 6E is a schematic diagram of binarization of a closed contour corresponding to a curtain in an embodiment of the present application.
  • FIG. 6F is a schematic diagram of a projection device identifying a large-area solid-color wall as a screen projection area according to an embodiment of the present application.
  • Fig. 6G is a schematic diagram of concave and convex quadrilaterals according to an embodiment of the present application.
  • FIG. 7A is a schematic diagram of a system framework for realizing display control by a projection device according to an embodiment of the present application.
  • FIG. 7B is a schematic diagram of the signaling interaction sequence of the projection device realizing the eye protection function according to another embodiment of the present application.
  • FIG. 7C is a schematic diagram of a signaling interaction sequence for realizing a display image correction function of a projection device according to another embodiment of the present application.
  • FIG. 7D is a schematic flow diagram of a projection device implementing an autofocus algorithm according to another embodiment of the present application.
  • FIG. 7E is a schematic flow diagram of a projection device implementing keystone correction and obstacle avoidance algorithms according to another embodiment of the present application.
  • FIG. 7F is a schematic flow diagram of a projection device implementing a screen entry algorithm according to another embodiment of the present application.
  • FIG. 7G is a schematic flow diagram of a projection device implementing an eye protection algorithm according to another embodiment of the present application.
  • FIG. 8 is a schematic diagram of the lens structure of the projection device in the embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a distance sensor and a camera of a projection device in an embodiment of the present application.
  • FIG. 10 is a schematic flow diagram of a projection device performing obstacle avoidance projection in an embodiment of the present application.
  • FIG. 11 is a schematic diagram of obstacle sets and outline levels in the embodiment of the present application.
  • Fig. 12 is a schematic diagram of a rectangular grid and a non-obstacle area in the embodiment of the present application.
  • FIG. 13 is a schematic diagram of a projection device generating a second target set in an embodiment of the present application.
  • FIG. 14 is a schematic diagram of generating a third target set by a projection device in an embodiment of the present application.
  • FIG. 1A is a schematic diagram of placement of projection equipment according to an embodiment of the present application.
  • a projection device provided by the present application includes a projection screen 1 and a device 2 for projection.
  • the projection screen 1 is fixed at the first position, and the device 2 for projection is placed at the second position, so that the projected picture coincides with the projection screen 1.
  • This placement is performed by professional after-sales technicians; that is, the second position is the optimal placement of the device 2 for projection.
  • FIG. 1B is a schematic diagram of an optical path of a projection device according to an embodiment of the present application.
  • the light-emitting components of the projection device can be implemented as light sources such as lasers or LEDs.
  • below, a laser-type projection device is taken as an example to illustrate the projection device provided by this application and its projection display control scheme for automatically fitting the projected image into the screen area.
  • the projection device may include a laser light source 100, a light engine 200, a lens 300, and a projection medium 400.
  • the laser light source 100 provides illumination for the light engine 200; the light engine 200 modulates the light beam and outputs it to the lens 300 for imaging, which is projected onto the projection medium 400 to form the projection image.
  • the laser light source of the projection device includes a projection component and an optical lens component.
  • the projection component is implemented as a laser component in the laser-type projection device provided in this application; the details are not repeated below.
  • the light beam emitted by the laser component can pass through the optical lens assembly to provide illumination for the light engine.
  • the optical lens assembly requires a higher level of environmental cleanliness and hermetic sealing, while the chamber in which the laser component is installed can be sealed to a lower dustproof grade to reduce sealing cost.
  • the light engine 200 of the projection device may be implemented to include a blue light engine, a green light engine, and a red light engine, and may also include a heat dissipation system, a circuit control system, and the like. It should be noted that, in some embodiments, the light emitting component of the projection device may also be realized by an LED light source.
  • the present application provides a projection device including a three-color light engine and a controller; the three-color light engine, which integrates a blue light engine, a green light engine, and a red light engine, is used to modulate and generate the laser light that carries the pixels of a user interface.
  • the controller is configured to: obtain the average gray value of the user interface; and, when it is determined that the average gray value is greater than a first threshold and its duration is greater than a time threshold, lower the operating current value of the red light engine according to a preset gradient value to reduce the heating of the three-color light engine, as sketched below. It can be seen that by reducing the operating current of the red light engine integrated in the three-color light engine, overheating of the red light engine can be controlled, and thus overheating of the three-color light engine and of the projection device.
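  • a hedged sketch of this overheating-control rule follows; the threshold, duration, and gradient values are assumptions for illustration, not values from the patent:

```python
GRAY_THRESHOLD = 200   # first threshold on the average gray value (assumed)
TIME_THRESHOLD = 5.0   # seconds the bright condition must persist (assumed)
CURRENT_STEP = 0.05    # preset gradient for lowering the red current, A (assumed)

def adjust_red_current(avg_gray: float, elapsed_s: float, red_current: float) -> float:
    # Lower the red light engine's operating current by the preset gradient
    # when a bright user interface has persisted long enough to overheat it.
    if avg_gray > GRAY_THRESHOLD and elapsed_s > TIME_THRESHOLD:
        return max(red_current - CURRENT_STEP, 0.0)
    return red_current
```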
  • the light engine 200 can be implemented as a three-color light engine integrating a blue light engine, a green light engine, and a red light engine.
  • the implementation provided by the present application will be described by taking, as an example, a light engine 200 that includes a blue light engine, a green light engine, and a red light engine.
  • the optical system of the projection device is composed of a light source part and a light engine part.
  • the function of the light source part is to provide illumination for the light engine; the function of the light engine part is to modulate the illumination beam provided by the light source and finally form the projected picture.
  • the light source part specifically includes a housing, a laser component, and an optical lens component.
  • the light beam emitted by the laser component is shaped and combined by the optical lens assembly so as to provide illumination for the light engine.
  • laser components include light-emitting chips, collimating lenses, wires, and other devices, and are usually supplied as packaged components.
  • the optical lenses place higher cleanliness requirements on the environment: if there is dust on the lens surface, on the one hand it degrades the light-processing effect of the lens, attenuating the emitted brightness and ultimately degrading the image the projection device projects through the lens; on the other hand, the dust absorbs the high-energy laser beam and generates heat, which can easily damage the lens.
  • the optical lens assembly includes at least a convex lens, wherein the convex lens is an integral part of a telescope system; the telescope system is usually composed of a convex lens and a concave lens and is used to narrow a laser beam of larger cross-section into one of smaller cross-section.
  • the convex lens usually has a large surface and is usually installed near the light output of the laser, where it can receive a large-area laser beam and, acting as a large window, also facilitates beam transmission and reduces light loss.
  • the optical lens assembly may also include a concave lens, a light combining mirror, a light homogenizing component, or a speckle dissipating component, etc., which are used to reshape and combine the laser beam to meet the requirements of the lighting system.
  • the laser assembly includes a red laser module, a green laser module, and a blue laser module, and each laser module is installed at its corresponding mounting port with a dustproof sealed installation formed by a sealing ring (fluororubber or other sealing materials can be used).
  • FIG. 2 is a schematic diagram of a circuit structure of a projection device according to an embodiment of the present application.
  • the projection device provided by the present disclosure includes multiple sets of lasers. By setting a brightness sensor in the light output path of the laser light source, the brightness sensor can detect the first brightness value of the laser light source and send the first brightness value to the display control circuit.
  • the display control circuit can obtain the second brightness value corresponding to the driving current of each laser and, when it determines that the difference between a laser's second brightness value and its first brightness value is greater than a difference threshold, determine that the laser has a COD (Catastrophic Optical Damage) failure; the display control circuit can then adjust the current control signal of that laser's driver component until the difference is less than or equal to the difference threshold, thereby eliminating the COD failure. The projection device can thus eliminate laser COD failures in time, reducing the laser damage rate and ensuring the image display effect of the projection device.
  • the projection device may include a display control circuit 10, a laser light source 20, at least one laser driving component 30, and at least one brightness sensor 40; the laser light source 20 may include at least one laser in one-to-one correspondence with the at least one laser driving component 30.
  • the at least one refers to one or more, and a plurality refers to two or more.
  • the projection device includes a laser driver assembly 30 and a brightness sensor 40.
  • the laser light source 20 includes three lasers in one-to-one correspondence with the laser driver assemblies 30; the three lasers can respectively be the blue laser 201, the red laser 202, and the green laser 203.
  • the blue laser 201 is used to emit blue laser light, the red laser 202 to emit red laser light, and the green laser 203 to emit green laser light.
  • the laser driving component 30 may be implemented to include a plurality of sub-laser driving components corresponding to lasers of different colors.
  • the display control circuit 10 is used to output the primary color enable signal and the primary color current control signal to the laser drive assembly 30 to drive the laser to emit light.
  • the display control circuit 10 is connected to the laser drive assembly 30 and is used to output at least one enable signal corresponding to the three primary colors of each frame of the multi-frame display image, transmitting each enable signal to the corresponding laser drive assembly 30, and to output at least one current control signal corresponding to each of the three primary colors of each frame, transmitting each current control signal to the corresponding laser drive assembly 30.
  • the display control circuit 10 may be a microcontroller unit (microcontroller unit, MCU), also known as a single-chip microcomputer.
  • the current control signal may be a pulse width modulation (pulse width modulation, PWM) signal.
  • the display control circuit 10 can output the blue PWM signal B_PWM corresponding to the blue laser 201 based on the blue primary color component of the image to be displayed, output the red PWM signal R_PWM corresponding to the red laser 202 based on the red primary color component, and output the green PWM signal G_PWM corresponding to the green laser 203 based on the green primary color component.
  • the display control circuit can output the enable signal B_EN corresponding to the blue laser 201 based on the lighting duration of the blue laser 201 in the driving cycle, output the enable signal R_EN corresponding to the red laser 202 based on the lighting duration of the red laser 202 in the driving cycle, and output the enable signal G_EN corresponding to the green laser 203 based on the lighting duration of the green laser 203 in the driving cycle.
  • the laser drive assembly 30 is connected to the corresponding laser and is used to provide a corresponding drive current to the connected laser in response to the received enable signal and current control signal; each laser emits light under the drive of the driving current provided by its laser drive assembly 30.
  • the blue laser 201 , the red laser 202 and the green laser 203 are respectively connected to the laser driving assembly 30 .
  • the laser driving component 30 can provide corresponding driving current to the blue laser 201 in response to the blue PWM signal B_PWM and the enable signal B_EN sent by the display control circuit 10 .
  • the blue laser 201 is used to emit light under the driving current.
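  • as a concrete illustration of the signal scheme above, the sketch below shows how a display control MCU might derive the per-color PWM duty cycles and enable windows for one frame. This is a hedged sketch only: the names, the 0-255 primary-component scale, and the microsecond durations are assumptions, not the patent's firmware interface.

```python
from dataclasses import dataclass

@dataclass
class DriveSignals:
    duty: float      # duty cycle of the PWM current control signal, 0.0-1.0
    enable_us: int   # lighting duration within one drive cycle, microseconds

def frame_signals(primary: int, lighting_us: int) -> DriveSignals:
    """primary: 8-bit primary color component of the frame to be displayed."""
    return DriveSignals(duty=primary / 255.0, enable_us=lighting_us)

# One drive cycle lights the three lasers in time division.
b = frame_signals(primary=200, lighting_us=3000)  # -> B_PWM duty / B_EN window
r = frame_signals(primary=120, lighting_us=2500)  # -> R_PWM duty / R_EN window
g = frame_signals(primary=90,  lighting_us=2800)  # -> G_PWM duty / G_EN window
```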
  • the brightness sensor is arranged in the light output path of the laser light source, usually on one side of the light output path, without blocking the light path.
  • at least one brightness sensor 40 is arranged in the light path of the laser light source 20, and each brightness sensor is connected with the display control circuit 10, detects the first brightness value of a laser, and sends the first brightness value to the display control circuit 10.
  • the display control circuit 10 is also used to obtain the second brightness value corresponding to the driving current of each laser; if it detects that the difference between a laser's second brightness value and its first brightness value is greater than the difference threshold, this indicates that the laser has a COD failure, and the display control circuit 10 can adjust the current control signal of the laser drive assembly 30 until the difference is less than or equal to the difference threshold; that is, the COD failure of the laser is eliminated by reducing the laser's driving current.
  • both the first luminance value and the second luminance value are represented as light output power values, wherein the second luminance value may be pre-stored, or may be a luminance value sent back by a luminance sensor in a normal lighting state.
  • the display control circuit will reduce the current control signal of the laser drive component corresponding to the laser, and continuously collect and compare the brightness signals returned by the brightness sensor.
  • if no such difference is detected, the display control circuit 10 does not need to adjust the current control signal of the laser driving component 30 corresponding to the laser.
  • the display control circuit 10 may store the corresponding relationship between the current and the brightness value.
  • the luminance value corresponding to each current in the correspondence relation is the initial luminance value that the laser can emit when the laser works normally under the driving of the current (that is, no COD failure occurs).
  • the brightness value may be the initial brightness measured when the laser is first lit under the drive of that current.
  • the display control circuit 10 can obtain the second brightness value corresponding to the driving current of each laser from the corresponding relationship, the driving current is the current actual working current of the laser, and the second brightness value corresponding to the driving current is the brightness value that the laser can emit when it works normally under the driving current.
  • the difference threshold may be a fixed value pre-stored in the display control circuit 10 .
  • when the display control circuit 10 adjusts the current control signal of the laser drive component 30 corresponding to a laser, it can reduce the duty cycle of that current control signal, thereby reducing the driving current of the laser.
  • the brightness sensor 40 can detect the first brightness value of the blue laser 201 and send the first brightness value to the display control circuit 10 .
  • the display control circuit 10 can obtain the driving current of the blue laser 201 and obtain the second brightness value corresponding to that driving current from the correspondence between current and brightness values; it then detects whether the difference between the second brightness value and the first brightness value is greater than the difference threshold. If the difference is greater than the difference threshold, it indicates that the blue laser 201 has a COD failure, and the display control circuit 10 can reduce the current control signal of the laser driving component 30 corresponding to the blue laser 201.
  • the display control circuit 10 can then acquire the first luminance value of the blue laser 201 and the second luminance value corresponding to its driving current again; if the difference between the second luminance value and the first luminance value is still greater than the difference threshold, the current control signal of the laser driving component 30 corresponding to the blue laser 201 is lowered again. This loops until the difference is less than or equal to the difference threshold, so that by reducing the driving current of the blue laser 201, the COD failure of the blue laser 201 is eliminated.
  • the display control circuit 10 can monitor in real time whether each laser has COD failure. And when it is determined that any laser has a COD failure, the COD failure of the laser is eliminated in time, the duration of the COD failure of the laser is reduced, the damage of the laser is reduced, and the image display effect of the projection device is ensured.
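  • the detection-and-correction loop described above can be summarized in the following self-contained sketch. The brightness model, sensor stub, step size, and threshold are illustrative assumptions, not values from the patent.

```python
DIFF_THRESHOLD = 0.05  # allowed gap between expected and measured power, W (assumed)
DUTY_STEP = 0.02       # how much to lower the PWM duty per iteration (assumed)

def expected_brightness(duty: float) -> float:
    """'Second brightness value': pre-stored healthy current-to-brightness map."""
    return 4.5 * duty  # linear model of a 4.5 W laser (assumed)

def read_sensor(duty: float) -> float:
    """Stub for the brightness sensor ('first brightness value')."""
    return 4.5 * duty * 0.8  # simulate a laser whose output sagged due to COD

def eliminate_cod(duty: float) -> float:
    # Lower the duty cycle of the current control signal until the measured
    # brightness is back within the threshold of the expected brightness;
    # per the text, the initial duty is restored once the laser recovers.
    while duty > 0 and expected_brightness(duty) - read_sensor(duty) > DIFF_THRESHOLD:
        duty = max(duty - DUTY_STEP, 0.0)
    return duty

print(eliminate_cod(0.9))  # prints the reduced duty at which the gap closes
```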
  • FIG. 3 is a schematic structural diagram of a projection device according to an embodiment of the present application.
  • the laser light source 20 in the projection device may include a blue laser 201, a red laser 202, and a green laser 203 that are set independently, and the projection device may also be called a three-color projection device; the blue laser 201, the red laser 202, and the green laser 203 are all MCL-type packaged lasers, which are small in size and facilitate a compact optical path arrangement.
  • the at least one brightness sensor 40 may include a first brightness sensor 401, a second brightness sensor 402, and a third brightness sensor 403, wherein the first brightness sensor 401 is a blue light brightness sensor or a white light brightness sensor, the second brightness sensor 402 is a red light brightness sensor or a white light brightness sensor, and the third brightness sensor 403 is a green light brightness sensor or a white light brightness sensor.
  • the first brightness sensor 401 is set in the light output path of the blue laser 201, specifically, it can be set on one side of the light output path of the collimated light beam of the blue laser 201.
  • the second brightness sensor 402 is set in the light output path of the red laser 202, specifically on one side of the light output path of the collimated beam of the red laser 202; the third brightness sensor 403 is likewise set on one side of the light output path of the green laser 203. Since the laser light emitted by a laser does not attenuate within its own light output path, arranging each brightness sensor in its laser's light path improves the accuracy with which it detects the laser's first brightness value.
  • the display control circuit 10 is also used to read the brightness value detected by the first brightness sensor 401 when controlling the blue laser 201 to emit blue laser light. And when the blue laser 201 is controlled to be turned off, stop reading the brightness value detected by the first brightness sensor 401 .
  • the display control circuit 10 is also used to read the brightness value detected by the second brightness sensor 402 when controlling the red laser 202 to emit red laser light, and to stop reading it when controlling the red laser 202 to be turned off.
  • the display control circuit 10 is likewise used to read the brightness value detected by the third brightness sensor 403 when controlling the green laser 203 to emit green laser light, and to stop reading it when controlling the green laser 203 to be turned off.
  • FIG. 4 is a schematic structural diagram of a projection device according to another embodiment of the present application.
  • the projection device may further include a light pipe 110, which is used as a light-collecting optical component for receiving and homogenizing the output three-color laser light in a combined light state.
  • the brightness sensor 40 may include a fourth brightness sensor 404, which may be a white light brightness sensor.
  • the fourth brightness sensor 404 is disposed in the light exit path of the light pipe 110 , for example, on the light exit side of the light pipe, close to its light exit surface.
  • the above-mentioned fourth brightness sensor is a white light brightness sensor.
  • the display control circuit 10 is also used to read the brightness value detected by the fourth brightness sensor 404 when controlling the blue laser 201, the red laser 202 and the green laser 203 to turn on time-divisionally, so as to ensure that the fourth brightness sensor 404 can detect to the first brightness value of the blue laser 201 , the first brightness value of the red laser 202 and the first brightness value of the green laser 203 . And when the blue laser 201 , the red laser 202 and the green laser 203 are all turned off, stop reading the brightness value detected by the fourth brightness sensor 404 .
  • the projection device may further include a fourth dichroic film 604, a fifth dichroic film 605, a fifth reflector 904, a second lens assembly 90, a diffusion wheel 150, a TIR lens 120, a DMD 130, and a projection lens 140; the second lens assembly 90 includes a first lens 901, a second lens 902, and a third lens 903.
  • the fourth dichroic film 604 can transmit blue laser light and reflect green laser light.
  • the fifth dichroic film 605 can transmit red laser light and reflect green laser light and blue laser light.
  • the blue laser light emitted by the blue laser 201 passes through the fourth dichroic film 604 , is reflected by the fifth dichroic film 605 , and then enters the first lens 901 for condensing.
  • the red laser light emitted by the red laser 202 passes through the fifth dichroic film 605 and directly enters the first lens 901 for condensing.
  • the green laser light emitted by the green laser 203 is reflected by the fifth reflector 904 , reflected by the fourth dichroic film 604 and the fifth dichroic film 605 in turn, and then enters the first lens 901 for condensing.
  • the blue laser, red laser, and green laser light condensed by the first lens 901 passes through the rotating diffusion wheel 150 in time division to dissipate speckle, and is projected into the light pipe 110 for homogenization; after being shaped by the second lens 902 and the third lens 903, the light enters the TIR lens 120 for total internal reflection, is reflected by the DMD 130, passes through the TIR lens 120 again, and is finally projected through the projection lens 140 onto the display screen to form the image to be displayed.
  • FIG. 5 is a schematic diagram of a circuit structure of a projection device according to an embodiment of the present application.
  • the laser driving component 30 may include a driving circuit 301 , a switching circuit 302 and an amplifying circuit 303 .
  • the driving circuit 301 may be a driving chip.
  • the switch circuit 302 may be a metal-oxide-semiconductor (MOS) transistor.
  • the driving circuit 301 is respectively connected with the switch circuit 302 , the amplification circuit 303 and the corresponding laser included in the laser light source 20 .
  • the driving circuit 301 is used to output the driving current to the corresponding laser in the laser light source 20 through the VOUT terminal based on the current control signal sent by the display control circuit 10 , and transmit the received enabling signal to the switch circuit 302 through the ENOUT terminal.
  • the laser may include n sub-lasers connected in series, respectively sub-lasers LD1 to LDn, where n is a positive integer.
  • the switch circuit 302 is connected in series in the current path of the laser, and is used to control the conduction of the current path when the received enable signal is at an effective potential.
  • the amplifying circuit 303 is respectively connected to the detection node E in the current path of the laser light source 20 and to the display control circuit 10, and is used to convert the detected driving current of the laser into a driving voltage, amplify the driving voltage, and transmit the amplified driving voltage to the display control circuit 10.
  • the display control circuit 10 is further configured to determine the amplified driving voltage as the driving current of the laser, and obtain a second brightness value corresponding to the driving current.
  • the amplifying circuit 303 may include: a first operational amplifier A1, a first resistor (also known as a sampling power resistor) R1, a second resistor R2, a third resistor R3 and a fourth resistor R4.
  • the non-inverting input terminal (also known as the positive terminal) of the first operational amplifier A1 is connected to one end of the second resistor R2; the inverting input terminal (also known as the negative terminal) of the first operational amplifier A1 is connected respectively to one end of the third resistor R3 and one end of the fourth resistor R4; and the output end of the first operational amplifier A1 is connected respectively to the other end of the fourth resistor R4 and to the processing sub-circuit 3022.
  • One end of the first resistor R1 is connected to the detection node E, and the other end of the first resistor R1 is connected to the reference power supply end.
  • the other end of the second resistor R2 is connected to the detection node E, and the other end of the third resistor R3 is connected to the reference power supply end.
  • the reference power terminal is a ground terminal.
  • the first operational amplifier A1 may further include two power supply terminals, one of which is connected to the power supply terminal VCC, and the other power supply terminal may be connected to the reference power supply terminal.
  • the relatively large driving current of the laser included in the laser light source 20 generates a voltage drop across the first resistor R1; the voltage Vi at one end of the first resistor R1 (that is, at the detection node E) is transmitted through the second resistor R2 to the non-inverting input end of the first operational amplifier A1, amplified N times, and then output.
  • the N is the amplification factor of the first operational amplifier A1, and N is a positive number.
  • the magnification ratio N can make the value of the voltage Vfb output by the first operational amplifier A1 be an integer multiple of the value of the driving current of the laser.
  • the value of the voltage Vfb can be equal to the value of the driving current, so that the display control circuit 10 can determine the amplified driving voltage as the driving current of the laser.
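  • the wiring described above (R3 from the inverting input to the reference terminal, R4 from the inverting input to the output) is the standard non-inverting amplifier topology, so the gain N and the feedback voltage follow the usual relation; the formula below is inferred from that topology rather than stated in the text, and the symbol I_LD for the laser driving current is introduced here:

$$N = 1 + \frac{R_4}{R_3}, \qquad V_{fb} = N \cdot V_i = \left(1 + \frac{R_4}{R_3}\right) \cdot I_{LD} \cdot R_1$$

  • choosing component values such that N·R1 = 1 V/A makes Vfb numerically equal to the driving current, which matches the statement above.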
  • the display control circuit 10, the drive circuit 301, the switch circuit 302, and the amplifier circuit 303 form a closed loop to realize feedback adjustment of the laser driving current, so that the display control circuit 10 can adjust the driving current in time according to the difference between the laser's second luminance value and first luminance value, that is, adjust the actual luminance of the laser in time, avoiding long-term COD failure and improving the accuracy of laser luminescence control.
  • the laser light source 20 includes a blue laser 201 , a red laser 202 and a green laser 203 .
  • the blue laser 201 can be set at the L1 position
  • the red laser 202 can be set at the L2 position
  • the green laser 203 can be set at the L3 position.
  • the laser light at position L1 is transmitted once through the fourth dichroic film 604 , reflected once through the fifth dichroic film 605 , and enters the first lens 901 .
  • Pt represents the transmittance of the dichroic film
  • Pf represents the reflectance of the dichroic films or of the fifth reflector 904.
  • the light efficiency of the laser light at the position L3 is the highest, and the light efficiency of the laser light at the position L1 is the lowest.
  • the maximum optical power Pb output by the blue laser 201 is 4.5 watts (W)
  • the maximum optical power Pr output by the red laser 202 is 2.5W
  • the maximum optical power Pg output by the green laser 203 is 1.5W. That is, the maximum optical power output by the blue laser 201 is the largest, followed by the maximum optical power output by the red laser 202 , and the maximum optical power output by the green laser 203 is the smallest.
  • the green laser 203 is therefore placed at the L3 position, the red laser 202 is placed at the L2 position, and the blue laser 201 is placed at the L1 position. That is, the green laser 203 is arranged in the optical path with the highest light efficiency, so as to ensure that the projection device can obtain the highest light efficiency.
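  • a small numeric check of the light-efficiency ordering described above, using the optical paths given earlier for FIG. 4 (blue at L1: one transmission plus one reflection; red at L2: one transmission; green at L3: three reflections). The Pt and Pf values are illustrative assumptions; real coatings differ.

```python
Pt = 0.97  # transmittance of a dichroic film (assumed)
Pf = 0.99  # reflectance of a dichroic film / the fifth reflector (assumed)

efficiency = {
    "L1 (blue: through 604, reflected by 605)": Pt * Pf,   # 0.9603
    "L2 (red: through 605)": Pt,                           # 0.9700
    "L3 (green: reflected by 904, 604, 605)": Pf ** 3,     # 0.9703
}
for position, eff in sorted(efficiency.items(), key=lambda kv: kv[1]):
    print(f"{position}: {eff:.4f}")
# As long as reflectance exceeds transmittance, L3 is highest and L1 lowest,
# which is why the lowest-power green laser sits at L3 and the highest-power
# blue laser at L1.
```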
  • the display control circuit 10 is further configured to restore the current control signal of the laser driving component corresponding to the laser to its initial value when the difference between the laser's second brightness value and first brightness value is less than or equal to the difference threshold; the initial value is the magnitude of the PWM current control signal to the laser in the normal state. Therefore, when a COD failure occurs in the laser, it can be quickly identified and the driving current reduced in time, reducing continued damage to the laser and helping it recover by itself. The whole process requires no disassembly or human intervention, which improves the reliability of the laser light source and ensures the projection display quality of the laser projection device.
  • the embodiments of the present application may be applied to various types of projection devices.
  • the projection device in the embodiment of the present application is a device that can project images or videos on the screen.
  • the projection device can be connected with computers, radio and television networks, the Internet, video compact discs (VCD: Video Compact Disc), digital versatile discs (DVD: Digital Versatile Disc), game consoles, DV, etc., to play the corresponding video signals.
  • Projection equipment is widely used in homes, offices, schools and entertainment venues, etc.
  • FIG. 6A is a schematic diagram of a screen corresponding to a projection device according to an embodiment of the present application.
  • the projection screen is used for movies, offices, home theaters, large conferences, etc., to display images and video files, and can be made in different specifications and sizes according to actual needs. In some embodiments, in order to make the display effect better match users' viewing habits, the aspect ratio of the screen corresponding to the projection device is usually set to 16:9, and the laser component can project the playback content onto the screen corresponding to the projection device, as shown in FIG. 6A.
  • the white plastic screen is a typical diffuse scattering screen.
  • the diffuse scattering screen evenly scatters the incident light of the projection device in all directions, and the same image can be seen at every angle;
  • the diffuse-scattering screen has an ultra-wide audio-visual range and a soft image, but attention must be paid to external and stray light: in an environment with external and stray light, that light is also scattered and reflected by the screen and overlaps the reflected image light, lowering image quality.
  • a typical diffuse-scattering screen performs best when used in a dedicated screening room with no external or stray light.
  • the wall also has diffuse-scattering characteristics, but because it lacks color correction and light absorption, an image displayed on it will suffer from inaccurate color, dispersion, vignetting in dark parts, insufficient brightness and contrast, and so on; a wall is therefore not a good choice as a screen.
  • the glass bead screen is a typical regression (retro-reflective) screen: the glass beads on the screen reflect light back around the incident direction of the projected light, so bright, vivid images can be seen at the usual viewing position. Unlike a diffuse-scattering screen, the brightness seen near the front of the screen differs from that seen at larger angles; near the front of the screen one sees an image with good brightness, contrast, and gradation. In an environment with external and stray light, since the screen reflects that light back along its own incident direction, the projector's image light rarely overlaps with external and stray light, so bright, vivid images can still be obtained.
  • a screen with a wide viewing angle and low gain should be selected when there are many viewers or when viewing from wide horizontal angles; a screen with a narrow viewing angle and high gain suits viewing in a narrow space; choosing a screen with appropriate gain helps improve contrast, increase image gray levels, brighten colors, and increase visibility. Rooms with good shading and light absorption can use diffuse-reflection or regression screens, and regression screens suit family living rooms; a desktop-placed projector can use any screen, while a ceiling-mounted projector should use a diffuse-reflection screen or a screen with a large half-value angle.
  • the screen corresponding to the projection device has a dark edge line at its periphery. Because the edge line usually has a certain width, the dark edge line can also be called an edge band.
  • the projection device provided by this application, and its projection display control method for automatically entering the screen area, utilize the edge-band characteristics of the screen to identify the screen in the environment stably, efficiently, and accurately, realizing rapid automatic screen entry after the projection device is moved.
  • the screen is shown in Figure 6A.
  • the projection device provided by the present application is equipped with a camera. After the user moves the projection device, the controller will control the camera to take pictures of the environment where the projection device is located so that the laser component can accurately project the playback content to the projection area of the screen again.
  • the position of the screen projection area can then be determined; that is, the controller controls the camera of the projection device to obtain the first image of the area where the screen corresponding to the projection device is located, which can reduce the amount of calculation needed by the subsequent screen-identification algorithm.
  • the first image will contain a variety of environmental objects, such as the curtain, TV cabinet, walls, ceiling, coffee table, etc.; the curtain used with the projection device provided by this application has a dark curtain edge band, as shown in FIG. 6B;
  • the controller of the projection device analyzes and processes the first image, uses an algorithm to identify the screen among the above-mentioned environmental elements contained in the first image, and controls the projection orientation of the laser component so as to accurately project the playback content onto the projection area of the above-mentioned screen.
  • in order to more easily and accurately identify the curtain projection area among the environmental elements in the first image, the controller will obtain the most suitable binarization threshold with which to binarize the first image into the corresponding second image; with this threshold, the contour of each environmental element in the second image can be extracted as clearly as possible, facilitating the subsequent algorithm's extraction of closed contours.
  • the controller generates the corresponding grayscale distribution, that is, a grayscale image, based on the acquired first image; as shown in FIG. 6C, the right image is the grayscale image corresponding to the first image;
  • the controller will determine the grayscale value corresponding to the maximum brightness ratio in the grayscale image of the first image.
  • a 0-255 grayscale image can reflect the proportion of the brightest part of the first image within the entire image; in the example of FIG. 6C, the brightest part of the grayscale image is assumed to lie at gray level 130.
  • the controller centers on the gray-scale value obtained above, selects a preset number of gray-scale values within a preset range above and below that value as thresholds, and repeatedly binarizes the first image until the extraction of the curtain's typical features from the resulting image meets the preset conditions; that image is taken as the second image. It can be understood that the gray-scale value in effect when the second image is obtained is the binarization threshold that should be selected.
  • the starting point of the binarization threshold for the first image can thus be tentatively set at 130; gray levels within a preset range around it, for example 120, 122, ..., 136, 138, and 140, are used as binarization thresholds to binarize the first image respectively, obtaining a plurality of binarized images;
  • the binarized image in which the typical curtain features are best extracted is identified as the second image; the curtain feature is the combination of the dark curtain edge band and the white curtain projection area it surrounds.
  • the resulting binarized image is shown in FIG. 6D.
  • in some embodiments, the binarization threshold can be selected as a fixed value; however, this sometimes performs poorly in scenes where the camera of the projection device photographs and extracts the curtain area, because the shooting environment has a huge impact on the final image: choosing a high threshold at night, for example, may cause most areas to be binarized into edges.
  • the most accurate way to obtain the second image by binarizing the first image is to traverse all thresholds, that is, to let the threshold traverse 0-255 and perform edge-image analysis on each resulting binarized image to find the one with the best edge-image effect.
  • however, binarization by complete traversal leads to a large amount of calculation; therefore, the projection display control method provided by this application, which automatically fits the image into the screen area, uses brightness analysis of the grayscale image to establish an optimal threshold interval and performs traversal detection only within this interval to obtain the optimal binarized image, as sketched below.
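  • a minimal OpenCV-based sketch of this "threshold optimal interval" idea: center the threshold search on the dominant bright gray level instead of traversing all 256 values. The scoring heuristic and the half_range/step values are illustrative assumptions, not the patent's implementation.

```python
import cv2
import numpy as np

def curtain_score(binary: np.ndarray) -> int:
    # Assumed heuristic: a good threshold exposes closed contours that contain
    # child contours (a dark edge band enclosing a white projection area).
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return 0
    return sum(1 for h in hierarchy[0] if h[2] != -1)  # parents having a child

def binarize_first_image(gray: np.ndarray, half_range: int = 10, step: int = 2):
    # Gray level holding the largest pixel share approximates the document's
    # "maximum brightness ratio" value (an interpretation, not a quote).
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    center = int(hist.argmax())
    best = None
    for t in range(max(center - half_range, 0),
                   min(center + half_range, 255) + 1, step):
        _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
        score = curtain_score(binary)
        if best is None or score > best[0]:
            best = (score, binary, t)
    return best[1], best[2]  # the second image and the chosen threshold
```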
  • the controller will identify and extract the closed contour contained in the second image.
  • when a closed contour also contains a secondary closed contour, the combination can be provisionally flagged, since such a combination of closed contours matches the characteristics of the curtain to a certain extent.
  • the outer edge line of the curtain edge band will be recognized as a closed contour with a larger extent, which can also be called the first-level closed contour; the inner edge line of the curtain edge band will be identified as a closed contour with a smaller extent, which can also be called the second-level closed contour. That is, the controller will determine the first-level closed contour contained in the second image, and the first-level closed contour contains a second-level closed contour.
  • the controller analyzes and recognizes the closed contours corresponding to each environmental element in the image and takes closed contours with the above-mentioned hierarchical relationship as candidate curtain projection areas; that is, the controller first obtains all first-level closed contours in the second image that contain second-level closed contours, and eliminates single-level closed contours that contain no secondary closed contour.
  • the first-level closed contour can also be called the parent closed contour
  • the second-level closed contour contained in it can also be called the child closed contour; that is, the first-level and second-level closed contours have a parent-child relationship. It can be understood that, among the multiple parent closed contours in the second image that contain child closed contours, there must be one corresponding to the curtain; that is, only an area containing parent and child closed contours can serve as a candidate area for the curtain projection area.
  • for example, A is the first-level closed contour, that is, the parent closed contour;
  • B, C, and D are all second-level closed contours contained in A, that is, child closed contours;
  • the controller takes the second-level closed contours B, C, and D as candidate areas for the screen projection area and continues the algorithmic identification, as sketched below.
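  • the parent/child filtering step can be sketched with OpenCV's contour hierarchy as follows (the library choice is an assumption; the patent does not name one):

```python
import cv2

def candidate_secondary_contours(binary):
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    if hierarchy is None:
        return candidates
    # hierarchy[0][i] = [next_sibling, prev_sibling, first_child, parent]
    for i, h in enumerate(hierarchy[0]):
        child = h[2]
        if child == -1:
            continue            # single-level contour: eliminated
        # collect every second-level (child) contour of this parent
        while child != -1:
            candidates.append(contours[child])
            child = hierarchy[0][child][0]   # next sibling of the child
    return candidates
```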
  • the controller performs convex-quadrilateral identification on the secondary closed contours obtained in the above candidate areas; if a secondary closed contour is a convex quadrilateral, the controller identifies it as the projection area corresponding to the curtain, that is, the projection area surrounded by the dark edge band of the screen, and controls the laser component of the projection device to project onto this secondary closed contour, so that the playback content accurately covers the projection area corresponding to the screen.
  • the controller binarizes the first image to obtain the second image, and then extracts the various closed contours contained in it based on the second image, in order to better reflect the environmental elements to which the closed contours correspond.
  • the furniture, home appliances, objects, walls and other environmental elements in the living room in the first image can be well recognized in the binarized image as long as there is a contrast between their color and the surrounding environment.
  • the area of the edge of the curtain marked in the figure, the area of the white curtain inside the edge of the curtain, the area of the TV cabinet, the sofa, the coffee table, etc. can be accurately identified by the controller.
  • the controller will perform polygon fitting on the secondary closed contours obtained in areas conforming to the above hierarchical relationship; because the projection area of the screen is a standard rectangle, the controller determines the secondary closed contours whose fitting result is a quadrilateral as candidate areas for the curtain projection area. In this way, closed contours fitted as triangles, circles, pentagons, or other irregular shapes are eliminated, so that the subsequent algorithm can continue to identify the rectangular closed contour corresponding to the curtain projection area.
  • after identifying multiple quadrilateral secondary closed contours, the controller will determine the concavity or convexity of these candidate areas, that is, of the multiple quadrilateral secondary closed contours, and determine the candidate area that is a convex quadrilateral as corresponding to the curtain projection area.
  • a concave quadrilateral differs from a convex quadrilateral in that exactly one interior angle is greater than 180° but less than 360°; of the remaining three angles, the two adjacent to the largest angle must be acute, while the angle opposite the largest angle can be acute, right, or obtuse; the exterior angle at the largest angle equals the sum of the other three interior angles. A convex quadrilateral is a quadrilateral with no interior angle greater than 180°: the line containing any side does not pass through the other sides, that is, the other three sides lie on the same side of the line containing the fourth side, and the sum of any three sides is greater than the fourth side.
  • the controller will perform polygon fitting on the three secondary closed contours B, C, and D.
  • B is recognized as a quadrilateral closed contour, C as a pentagonal closed contour, and D as a nearly circular closed contour; therefore, the quadrilateral closed contour B is retained as a candidate for the secondary closed contour corresponding to the curtain projection area, while the closed contours C and D are eliminated from the candidate areas.
  • the controller then judges the concavity or convexity of the obtained quadrilateral closed contour B; because a real screen can only be a convex quadrilateral, the controller can make this judgment through a concave-convex quadrilateral judgment algorithm, which can be implemented, for example, as follows:
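  • the patent introduces an example implementation at this point, which is not reproduced in this extract; a hedged sketch of a quadrilateral fit plus a cross-product convexity test might look like this (eps_ratio is an assumed fitting tolerance):

```python
import cv2
import numpy as np

def is_convex_quadrilateral(contour: np.ndarray, eps_ratio: float = 0.02) -> bool:
    # Polygon fitting: approximate the closed contour by a polygon whose
    # vertices deviate from it by at most eps_ratio * perimeter.
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, eps_ratio * peri, True)
    if len(approx) != 4:
        return False  # triangle, pentagon, near-circle, ...: eliminated
    # Convexity: cross products of consecutive edges must share one sign;
    # a concave quadrilateral flips sign at its reflex (>180 degree) vertex.
    pts = approx.reshape(4, 2).astype(np.int64)
    signs = set()
    for i in range(4):
        a, b, c = pts[i], pts[(i + 1) % 4], pts[(i + 2) % 4]
        cross = (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) == 1
```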
  • in this way the projection device improves the accuracy of screen projection area recognition and can recognize the screen whether or not it is showing a solid color; even while the projection device is playing arbitrary content, the screen projection area can still be extracted.
  • a third-level closed contour corresponds to the image generated by the playback content; when the controller detects that the second-level closed contour contains a third-level closed contour, it will not extract or analyze the third-level closed contour. This ensures that even if the projection device is moved while working, it can still automatically enter the screen and its playback content can be accurately placed into the projection area of the screen.
  • the dark edge band of the screen corresponds to the first-level closed contour identified by the controller, and the white screen area inside the dark edge band corresponds to the second-level convex quadrilateral closed contour identified by the controller.
  • in some embodiments, the controller takes the closed contour with the largest area in the first image as the identification condition for the screen projection area.
  • after the projection device obtains the first image of the environment, it can obtain the second image by selecting a fixed binarization threshold, such as binarizing the first image at 20% brightness.
  • if the threshold is not chosen well, some areas do not form closed contours, which leads to obvious errors in the subsequent closed-contour extraction; the method then finds the closed contour with the largest area among all closed contours and determines whether its internal color is consistent; if a closed contour is determined to have consistent internal color and the largest area, it is identified as the projection area of the curtain, as shown in FIG. 6F.
  • in real time, the projection device can extract accurate closed contours this way, but when the first image contains a large solid-color closed-contour area, the final result of this algorithm may be biased toward it, such as the large solid-color wall area shown in FIG. 6F.
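  • for contrast, the fixed-threshold, largest-area approach just described can be sketched as follows (OpenCV usage, the 20%-of-255 threshold rounding, and the standard-deviation test for "consistent internal color" are assumptions); note how it returns the largest consistent-color contour, which is exactly what lets a large solid-color wall win over the real curtain:

```python
import cv2
import numpy as np

def naive_screen_area(gray: np.ndarray, fixed_thresh: int = 51):  # ~20% of 255
    _, binary = cv2.threshold(gray, fixed_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    # "Consistent internal color": low intensity spread inside the contour.
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [largest], -1, 255, thickness=-1)
    inside = gray[mask == 255]
    if inside.size == 0:
        return None
    return largest if inside.std() < 10 else None  # 10 is an assumed bound
```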
  • the present application also provides a projection display control method for automatically projecting the playing content into the curtain area.
  • the method includes: based on a brightness analysis of the grayscale image of the acquired first image, binarizing the first image to obtain a second image, the first image being an environment image; determining a first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour; and, when the second-level closed contour is determined to be a convex quadrilateral, projecting to the second-level closed contour so as to cover the projection area corresponding to the curtain; wherein the curtain contains a curtain edge band, the projection area is surrounded by the curtain edge band, and the curtain is used to display the projection of the playing content.
  • binarizing the first image to obtain the second image based on the brightness analysis of the grayscale image of the first image includes: determining the gray value corresponding to the maximum brightness ratio in the grayscale image of the first image; and, centering on that gray value, selecting a preset number of gray values as thresholds to repeatedly binarize the first image, taking as the second image the binarized image in which the typical characteristics of the curtain are extracted and meet the preset conditions. A sketch of this threshold sweep follows below.
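  • The following sketch illustrates the threshold sweep, assuming OpenCV is available; the "typical characteristics of the curtain" condition is approximated here by the presence of a quadrilateral closed contour, and the sweep width and step are illustrative assumptions:

```python
import cv2
import numpy as np

def binarize_around_peak(first_image_bgr, n_thresholds=7, step=8):
    """Binarize repeatedly around the dominant gray level of the scene.

    The gray value with the highest histogram ratio is taken as the
    center; n_thresholds thresholds spaced `step` gray levels apart are
    tried around it, and the first binarized image in which a
    quadrilateral closed contour can be extracted is returned as the
    second image.
    """
    gray = cv2.cvtColor(first_image_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    peak = int(np.argmax(hist))            # gray value with max brightness ratio
    offsets = (np.arange(n_thresholds) - n_thresholds // 2) * step
    for thr in np.clip(peak + offsets, 1, 254):
        _, binary = cv2.threshold(gray, int(thr), 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
            if len(approx) == 4:           # curtain-like quadrilateral found
                return binary
    return None
```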
  • determining that the second image contains the second-level closed contour corresponding to the projection area of the curtain specifically includes: acquiring all first-level closed contours in the second image that contain a second-level closed contour; performing polygon fitting on those contours, and determining the second-level closed contours whose fitting result is a quadrilateral contour as candidate areas for the curtain's projection area; and judging the concavity and convexity of the candidate areas, determining the convex quadrilateral candidate area as the projection area corresponding to the curtain.
  • the method further includes: in the case where the second-level closed contour also contains a third-level closed contour generated by the playing content, performing no extraction analysis on the third-level closed contour.
  • acquiring the first image of the environment where the projection device is located specifically includes the controller acquiring the first image of the area where the screen is located.
  • the laser light emitted by the projection device is reflected by the nanoscale micromirrors of the digital micromirror device (DMD: Digital Micromirror Device) chip; the optical lens is also a precision element, and when the image plane and the object plane are not parallel, the image projected onto the screen suffers geometric distortion.
  • FIG. 7A is a schematic diagram of a system framework for realizing display control by a projection device according to an embodiment of the present application.
  • the projection device provided by the present application has the characteristics of telephoto micro-projection.
  • the projection device includes a controller, and the controller can control the optical engine's image display through preset algorithms, so as to realize automatic keystone correction of the displayed image, automatic screen entry, automatic obstacle avoidance, automatic focus, and anti-eye functions.
  • through the geometric-correction-based display control method provided by this application, the projection device can be moved flexibly in telephoto micro-projection scenarios; when display abnormalities occur, the controller can control the projection device to perform the automatic display correction function, so that it automatically returns to normal display.
  • the geometric correction-based display control system provided by the present application includes an application program service layer (APK Service: Android application package Service), a service layer, and an underlying algorithm library.
  • the application program service layer is used to realize the interaction between the projection device and the user; based on the display of the user interface, the user can configure various parameters of the projection device and the display image, and the controller, by coordinating and calling the algorithm services corresponding to the various functions, can realize the function of automatically correcting the display image of the projection device when the display is abnormal.
  • the service layer can include a correction service, a camera service, a time-of-flight (TOF: Time of Flight) service, etc.; these services connect upward to the application program service layer (APK Service) to realize the specific functions corresponding to the different service configurations of the projection device, and connect downward to the algorithm library, camera, time-of-flight sensor, and other data acquisition services, realizing the function of encapsulating the complex underlying logic and transmitting business data to the corresponding service layer.
  • the underlying algorithm library can provide correction services and control algorithms for various functions of the projection device.
  • the algorithm library can complete various mathematical operations based on OpenCV to provide basic capabilities for correction services.
  • OpenCV is a cross-platform computer vision and machine learning software library released based on BSD license (open source), which can run on operating systems such as Linux, Windows, Android and Mac OS.
  • the projection device is also equipped with a gyroscope sensor; during the movement of the projection device, the gyroscope sensor can sense the position movement and actively collect movement data, and then send the collected data to the application service layer through the system framework layer , to support the application data required in the process of user interface interaction and application program interaction, and the collected data can also be used for data calling by the controller in the implementation of algorithm services.
  • the time-of-flight service continuously sends the data collected by the time-of-flight sensor to the application program service layer of the projection device through the process communication framework (HSP Core); the data is used for controller data calls and for user interface and application program interaction.
  • the projection device is configured with a camera for collecting images, and the camera can be implemented as a binocular camera, a depth camera, etc.; the collected data is sent to the camera service, and the camera service then sends the image data collected by the binocular camera to the process communication framework (HSP Core) and/or the projection device correction service for the realization of projection device functions.
  • the projection device calibration service can receive the camera acquisition data sent by the camera service, and the controller can call corresponding control algorithms in the algorithm library for different functions that need to be implemented.
  • data interaction with the application program service can be performed through the process communication framework, after which the calculation result is fed back to the correction service through the process communication framework; the correction service sends the obtained calculation result to the projection device operating system to generate corresponding control signaling, and sends the control signaling to the optical engine control driver to control the working conditions of the optical engine and realize automatic correction of the display effect.
  • FIG. 7B is a schematic diagram of a signaling interaction sequence of a projection device implementing the anti-eye (eye protection) function according to another embodiment of the present application.
  • the projection device provided by the present application can realize the anti-eye function.
  • the controller can control the user interface to display corresponding prompt information to remind the user to leave the current area; the controller can also control the user interface to reduce the display brightness, so as to prevent the laser from damaging the user's eyesight.
  • when the projection device is configured in children's viewing mode, the controller automatically turns on the anti-eye switch.
  • the controller controls the projection device to turn on the anti-eye switch.
  • when data collected by the time-of-flight (TOF) sensor, camera, or other devices triggers any preset threshold condition, the controller controls the user interface to reduce the display brightness and display prompt information, and reduces the optical engine's emission power, brightness, and intensity, in order to protect the user's eyesight.
  • the projection device controller can control the calibration service to send signaling to the time-of-flight sensor to query the current device status of the projection device, and then the controller receives data feedback from the time-of-flight sensor.
  • the correction service can send, to the process communication framework (HSP Core), signaling notifying the algorithm service to start the anti-eye process;
  • the process communication framework (HSP Core) calls the corresponding algorithm services from the service capabilities of the algorithm library, which may include, for example, a photographing detection algorithm, a screenshot algorithm, and a foreign object detection algorithm;
  • based on the above algorithm services, the process communication framework returns the foreign object detection result to the correction service; if the returned result reaches the preset threshold condition, the controller controls the user interface to display prompt information and reduce the display brightness. The signaling sequence is shown in FIG. 7B.
  • when the anti-eye switch of the projection device is turned on and the user enters the preset specific area, the projection device automatically reduces the intensity of the laser emitted by the optical engine, reduces the display brightness of the user interface, and displays safety prompt information.
  • the control of the projection device on the above-mentioned anti-eye function can be realized by the following methods:
  • based on the projection image acquired by the camera, the controller uses an edge detection algorithm to identify the projection area of the projection device; when the projection area is displayed as a rectangle or near-rectangle, the controller obtains the coordinate values of the four vertices of the rectangular projection area through a preset algorithm;
  • a perspective transformation method can then be used to correct the projection area into a rectangle, and the difference between that rectangle and the projection screenshot can be calculated to determine whether there are foreign objects in the display area; if the judgment result is that foreign objects exist, the projection device automatically triggers the anti-eye function.
  • the difference between the camera content of the current frame and the camera content of the previous frame can also be used to determine whether foreign objects have entered the area outside the projection area; if it is judged that a foreign object has entered, the projection device automatically triggers the anti-eye function.
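  • A rough sketch combining the perspective rectification and frame differencing just described; the canonical rectangle size, thresholds, and function name are illustrative assumptions rather than the patent's actual parameters:

```python
import cv2
import numpy as np

def foreign_object_entered(prev_frame, curr_frame, corners,
                           diff_threshold=25, pixel_ratio=0.01):
    """Frame-difference check for objects entering the projection area.

    corners: the projection area's four vertex coordinates in the camera
    image, found beforehand by edge detection.  Both frames are warped
    to a canonical rectangle with a perspective transform, then compared
    pixel-wise; a large ratio of changed pixels suggests a foreign
    object and would trigger the anti-eye function.
    """
    w, h = 640, 360                                    # canonical rectangle
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    m = cv2.getPerspectiveTransform(np.float32(corners), dst)
    a = cv2.warpPerspective(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY),
                            m, (w, h))
    b = cv2.warpPerspective(cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY),
                            m, (w, h))
    changed = cv2.absdiff(a, b) > diff_threshold
    return changed.mean() > pixel_ratio
```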
  • the projection device can also use a time-of-flight (ToF) camera or a time-of-flight sensor to detect real-time depth changes in a specific area; if the depth value changes beyond a preset threshold, the projection device will automatically trigger the anti-eye function.
  • the projection device judges whether to enable the anti-eye function based on the collected time-of-flight data, screenshot data, and camera data analysis.
  • the controller performs depth difference analysis; if the depth difference is greater than a preset threshold X (for example, when X is implemented as 0), it can be determined that there is a foreign object in the specific area of the projection device. If a user is located in that specific area, their eyesight is at risk of being damaged by the laser, so the projection device will automatically activate the anti-eye function to reduce the intensity of the laser emitted by the optical engine, reduce the display brightness of the user interface, and display safety prompt information.
  • the projection device performs color addition mode (RGB) difference analysis based on the captured screenshot data; if the color addition mode difference is greater than the preset threshold Y, it can be determined that there is a foreign object in the specific area of the projection device. If a user is in that specific area, their eyesight is at risk of being damaged by the laser, so the projection device will automatically activate the anti-eye function, reduce the intensity of the emitted laser light, reduce the display brightness of the user interface, and display the corresponding safety prompt information.
  • the projection device obtains the projection coordinates from the collected camera data, determines the projection area of the projection device from those coordinates, and then analyzes the color addition mode (RGB) difference within the projection area; if the color addition mode difference is greater than the preset threshold Y, it can be determined that there is a foreign object in the specific area of the projection device. If a user is in that specific area, their vision may be damaged by the laser, so the projection device will automatically activate the anti-eye function to reduce the intensity of the emitted laser light, reduce the display brightness of the user interface, and display the corresponding safety prompt information.
  • the controller can still perform color addition mode (RGB) difference analysis in the extended area; if the color addition mode difference is greater than the preset threshold Y, it can be determined that there is a foreign object in the extended area of the projection device; if a user is in that area, their vision may be damaged by the laser light emitted by the projection device.
  • the projection device will automatically activate the anti-eye function, reduce the intensity of the emitted laser light, reduce the brightness of the user interface display, and display the corresponding security information.
  • the prompt information is as shown in FIG. 7G.
  • FIG. 7C is a schematic diagram of a signaling interaction sequence of a projection device implementing a display image correction function according to another embodiment of the present application.
  • the projection device can monitor the movement of the device through a gyroscope or a gyroscope sensor.
  • the correction service sends a signaling to the gyroscope to query the status of the device, and receives a signaling from the gyroscope to determine whether the device is moving.
  • the display correction strategy of the projection device can be configured such that, when the gyroscope and the time-of-flight sensor change simultaneously, the projection device triggers keystone correction first; after the gyroscope data has been stable for a preset length of time, the controller starts to trigger keystone correction; the controller can also configure the projection device not to respond to commands sent by remote control buttons while keystone correction is in progress; and, to cooperate with the realization of keystone correction, the projection device displays a pure white image card.
  • the keystone correction algorithm can construct, based on the binocular cameras, the transformation matrix between the projection surface and the optical engine coordinate system in the world coordinate system; combined with the optical engine's intrinsic parameters, the homography between the projection image and the played chart is calculated, and this homography is used to realize arbitrary shape conversion between the projected image and the played chart; a minimal sketch follows below.
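  • A minimal sketch of the homography step under these assumptions; all corner values are placeholders for illustration, and cv2.findHomography / cv2.perspectiveTransform are standard OpenCV calls:

```python
import cv2
import numpy as np

# Corner positions of the played chart in the optical engine's pixel
# coordinate system (assumed known from the chart layout) ...
chart_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
# ... and the same corners measured on the projection surface, in the
# world coordinate system of that surface (from the binocular
# reconstruction described above; placeholder values).
surface_pts = np.float32([[120, 80], [1870, 40], [1900, 1060], [90, 1030]])

# Homography relating the optical engine image to the projection surface.
H, _ = cv2.findHomography(chart_pts, surface_pts)

# Any target rectangle chosen on the projection surface can then be
# mapped back into optics coordinates to pre-distort the picture.
target = np.float32([[200, 150], [1700, 150], [1700, 950], [200, 950]])
optics_region = cv2.perspectiveTransform(target.reshape(-1, 1, 2),
                                         np.linalg.inv(H))
```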
  • the correction service sends, to the process communication framework (HSP Core), signaling informing the algorithm service to start the keystone correction process, and the process communication framework further sends service capability call signaling to the algorithm service to obtain the algorithm corresponding to that capability;
  • the algorithm service obtains and executes the photographing and chart processing algorithm service and the obstacle avoidance algorithm service, and sends the results to the process communication framework in the form of signaling; in some embodiments, the process communication framework executes the above algorithms and feeds the execution results back to the correction service; the execution results may include successful photographing and successful obstacle avoidance.
  • if an error is returned, the user interface is controlled to display an error prompt, and is controlled to display the keystone correction and autofocus charts again.
  • the projection device can identify the screen; and use the projection changes to correct the projection screen to be displayed inside the screen, so as to achieve the effect of aligning with the edge of the screen.
  • the projection device can use the time-of-flight (ToF) sensor to obtain the distance between the optical machine and the projection surface, based on the distance, find the best image distance in the preset mapping table, and use the image algorithm to evaluate the clarity of the projection screen. Based on this, the image distance can be fine-tuned.
  • the automatic keystone correction signaling sent by the correction service to the process communication framework may include other function configuration instructions, for example control instructions such as whether to implement synchronous obstacle avoidance, whether to enter the screen, and so on.
  • the process communication framework sends service capability call signaling to the algorithm service, so that the algorithm service acquires and executes the autofocus algorithm to adjust the viewing distance between the device and the screen; in some embodiments, after the autofocus algorithm realizes the corresponding function, the algorithm service may also obtain and execute an automatic screen-entry algorithm, which may include a keystone correction algorithm.
  • when the projection device automatically enters the screen, the algorithm service can set the eight position coordinates between the projection device and the screen; then, through the autofocus algorithm again, the viewing distance between the projection device and the screen is adjusted; finally, the correction result is fed back to the correction service, and the user interface is controlled to display the correction result, as shown in FIG. 7C.
  • the autofocus algorithm of the projection device uses its configured laser ranging to obtain the current object distance and calculate the initial focal length and search range; the projection device then drives the camera to take pictures and uses a corresponding algorithm to perform clarity evaluation.
  • the projection device searches for the best possible focal length based on the search algorithm, then repeats the above steps of photographing and sharpness evaluation, and finally finds the optimal focal length through sharpness comparison to complete autofocus.
  • in step 7D01, the projection device is started; in step 7D02, the user moves the device, and the projection device automatically completes correction and refocuses; in step 7D03, the controller detects whether the autofocus function is enabled; when the autofocus function is not enabled, the controller ends the autofocus service; in step 7D04, when the autofocus function is turned on, the projection device obtains the detection distance of the time-of-flight (TOF) sensor through the middleware for calculation;
  • Step 7D05 the controller queries the preset mapping table according to the obtained distance to obtain the approximate focal length of the projection device;
  • step 7D06 the middleware sets the obtained focal length to the optical machine of the projection device;
  • in step 7D07, after the optical engine emits laser light at the above focal length, the camera executes a photographing instruction; in step 7D08, the controller judges, according to the obtained photograph and an evaluation function, whether the projection device is in focus; if the judgment result meets the preset completion condition, the autofocus process is ended; in step 7D09, if the judgment result does not meet the preset completion condition, the middleware fine-tunes the focal length parameters of the projection device's optical engine, for example gradually fine-tuning the focal length by a preset step, and sets the adjusted focal length parameter to the optical engine again, thereby repeating the photographing and sharpness evaluation steps until the optimal focal length is found through sharpness comparison and autofocus is completed, as shown in FIG. 7D.
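  • A compact sketch of this coarse-to-fine flow; focus_table, set_focus, and capture are hypothetical driver-side callbacks (not the patent's actual interfaces), and the variance of the Laplacian stands in for the unspecified evaluation function:

```python
import cv2

def autofocus(tof_distance_mm, focus_table, set_focus, capture,
              step=2, max_iters=20):
    """Coarse-to-fine autofocus following steps 7D04-7D09.

    focus_table: preset mapping from measured distance (mm) to an
    approximate focal position; set_focus/capture drive the optical
    engine and camera.  Sharpness is scored with the variance of the
    Laplacian, one common clarity evaluation function.
    """
    def sharpness(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    # Coarse focal position from the preset distance -> focus table.
    focus = focus_table[min(focus_table, key=lambda d: abs(d - tof_distance_mm))]
    set_focus(focus)
    best_focus, best_score = focus, sharpness(capture())
    for _ in range(max_iters):             # fine-tune by the preset step
        improved = False
        for candidate in (best_focus - step, best_focus + step):
            set_focus(candidate)
            score = sharpness(capture())
            if score > best_score:
                best_focus, best_score, improved = candidate, score, True
        if not improved:                   # preset completion condition
            break
    set_focus(best_focus)
    return best_focus
```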
  • the projection device provided by the present application can implement a display correction function through a keystone correction algorithm.
  • two sets of external parameters, between the two cameras and between the camera and the optical engine, can be obtained, that is, the rotation and translation matrices; a specific checkerboard chart is then played through the optical engine of the projection device, and the depth values of the projected checkerboard corner points are calculated, for example solving the xyz coordinate values through the translation relationship between the binocular cameras and the principle of similar triangles; the projection surface is then fitted based on these xyz values, and its rotation and translation relationships with the camera coordinate system are obtained, which may specifically include the pitch relationship (Pitch) and the yaw relationship (Yaw).
  • the Roll parameter value can be obtained through the gyroscope configured on the projection device to assemble the complete rotation matrix, and finally the external parameters from the projection surface to the optical engine coordinate system in the world coordinate system are calculated.
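  • A sketch of the projection-surface fitting from the reconstructed corner points, assuming a least-squares plane via SVD; the pitch/yaw extraction below uses one possible axis convention and is illustrative only:

```python
import numpy as np

def fit_projection_plane(points_xyz):
    """Least-squares plane fit to the reconstructed checkerboard corners.

    points_xyz: Nx3 array of corner coordinates in the camera coordinate
    system.  Returns the unit normal of the fitted plane, from which the
    pitch and yaw of the projection surface relative to the camera can
    be read off; roll is supplied separately by the gyroscope.
    """
    pts = np.asarray(points_xyz, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1] / np.linalg.norm(vt[-1])
    pitch = np.degrees(np.arctan2(normal[1], normal[2]))
    yaw = np.degrees(np.arctan2(normal[0], normal[2]))
    return normal, pitch, yaw
```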
  • Step 7E01 the projection device controller obtains the depth value of the point corresponding to the pixel point of the photo, or the coordinates of the projection point in the camera coordinate system;
  • Step 7E02 through the depth value, the middleware obtains the relationship between the optical machine coordinate system and the camera coordinate system;
  • Step 7E03 the controller calculates the coordinate value of the projected point in the optical machine coordinate system
  • Step 7E04 obtaining the angle between the projection surface and the optical machine based on the coordinate value fitting plane
  • Step 7E05 obtain the corresponding coordinates of the projection point in the world coordinate system of the projection surface according to the angle relationship;
  • in step 7E06, a homography matrix can be calculated according to the coordinates of the chart in the optical engine coordinate system and the coordinates of the corresponding points on the projection surface;
  • Step 7E07 the controller judges whether an obstacle exists based on the above acquired data
  • Step 7E08 when obstacles exist, randomly select rectangular coordinates on the projection surface in the world coordinate system, and calculate the area to be projected by the optical machine according to the homography relationship;
  • Step 7E09: when an obstacle exists, the controller can obtain, for example, the feature points of the two-dimensional code;
  • Step 7E10 obtaining the coordinates of the two-dimensional code on the prefabricated map card
  • Step 7E11 obtaining the homography relationship between the camera photo and the drawing card
  • Step 7E12: transforming the acquired obstacle coordinates into the chart, thereby obtaining the coordinates of the chart area occluded by the obstacle;
  • Step 7E13: according to the coordinates of the obstacle-occluded chart area in the optical engine coordinate system, the coordinates of the occluded area on the projection surface are obtained through homography matrix transformation;
  • step 7E14 randomly select rectangular coordinates on the projection surface in the world coordinate system, avoid obstacles at the same time, and calculate the area to be projected by the optical machine according to the homography relationship.
  • the obstacle avoidance algorithm uses the algorithm (OpenCV) library to complete the contour extraction of foreign objects in the rectangle selection step of the keystone correction algorithm flow, and avoids the obstacles when selecting the rectangle, realizing the projection obstacle avoidance function.
  • Step 7F01 the middleware obtains the QR code image card captured by the camera
  • Step 7F02 identifying the feature points of the two-dimensional code, and obtaining the coordinates in the camera coordinate system
  • step 7F03 the controller further acquires the coordinates of the preset image card in the optical-mechanical coordinate system
  • Step 7F04 solving the homography relationship between the camera plane and the optical-mechanical plane
  • step 7F05 the controller identifies the coordinates of the four vertices of the curtain captured by the camera based on the above-mentioned homography;
  • in step 7F06, according to the homography matrix, the chart range to be projected by the optical engine for screen entry is obtained.
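  • Steps 7F02-7F06 can be sketched with OpenCV as follows; all coordinate values are placeholders for illustration:

```python
import cv2
import numpy as np

# Feature points of the two-dimensional code chart: as detected in the
# camera photo (7F02) and as laid out in the optical engine chart (7F03);
# both sets are placeholder values.
camera_pts = np.float32([[512, 300], [1400, 320], [1380, 820], [530, 800]])
chart_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

# Homography from the camera plane to the optical engine plane (7F04).
H_cam_to_optics, _ = cv2.findHomography(camera_pts, chart_pts)

# The four curtain vertices found in the camera photo (7F05) map through
# the homography to the chart range the optical engine should fill (7F06).
screen_pts = np.float32([[600, 350], [1300, 360], [1290, 760], [610, 750]])
chart_range = cv2.perspectiveTransform(screen_pts.reshape(-1, 1, 2),
                                       H_cam_to_optics)
```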
  • the screen-entry algorithm is based on the algorithm library (OpenCV); it can identify and extract the largest black closed rectangular contour and judge whether it has a 16:9 aspect ratio; a specific chart is projected and photographed with the camera, and finer corner points are extracted from the photos;
  • these corner points are used to calculate the homography between the projection surface (curtain) and the optical engine's display chart; through this homography, the four vertices of the curtain are converted into the optical engine's pixel coordinate system, and the optical engine's chart is thereby mapped to the four vertices of the curtain.
  • telephoto micro-projection equipment can be moved flexibly, and the projected image may be distorted after each displacement; for the above problems, the geometric-correction-based projection device provided by this application can automatically complete correction, including automatic keystone correction, automatic screen entry, automatic obstacle avoidance, automatic focus, anti-eye, and other functions.
  • the beneficial effects of the embodiments of the present application are as follows: by creating the first image, an environment image corresponding to the screen where the projection device is located can be acquired; by creating the grayscale image of the first image, the projection device can improve the accuracy of recognizing the closed contours corresponding to environmental elements; by constructing the first-level and second-level closed contours, the screening range of candidate areas for the screen projection area can be narrowed; and by judging that the second-level closed contour is a convex quadrilateral, the screen projection area contained in the candidate areas can be identified, improving the recognition accuracy of the screen projection area, sparing users from manually fine-tuning the projection angle, and ensuring that after a projection device used with a screen is moved, its playing content can be automatically projected into the screen projection area.
  • FIG. 8 is a schematic diagram of the lens structure of the projection device 2 in some embodiments.
  • the lens 300 of the projection device 2 may further include an optical assembly 310 and a driving motor 320 .
  • the optical component 310 is a lens group composed of one or more lenses, which can refract the light emitted by the optical machine 200, so that the light emitted by the optical machine 200 can be transmitted to the projection surface to form a transmitted content image.
  • the optical assembly 310 may include a lens barrel and a plurality of lenses disposed in the lens barrel. According to whether their position can be moved, the lenses in the optical assembly 310 can be divided into a movable lens 311 and a fixed lens 312; by changing the position of the movable lens 311 and adjusting the distance between the movable lens 311 and the fixed lens 312, the overall focal length of the optical assembly 310 is changed. Therefore, the driving motor 320, connected to the movable lens 311 in the optical assembly 310, can drive the movable lens 311 to move, realizing the autofocus function.
  • the focusing process described in some embodiments of the present application refers to changing the position of the movable lens 311 through the driving motor 320, thereby adjusting the distance between the movable lens 311 and the fixed lens 312, that is, adjusting the image plane position; according to the imaging principle of the lens combination in the optical assembly 310, adjusting the focal length is actually adjusting the image distance, but in terms of the overall structure of the optical assembly 310, adjusting the position of the movable lens 311 is equivalent to adjusting the overall focal length of the optical assembly 310.
  • the lens of the projection device 2 needs to be adjusted to different focal lengths so as to transmit a clear image on the projection surface.
  • different placement positions chosen by the user result in different distances between the projection device 2 and the projection surface, requiring different focal lengths. Therefore, to adapt to different usage scenarios, the projection device 2 needs to adjust the focal length of the optical assembly 310.
  • Fig. 9 is a schematic structural diagram of a distance sensor and a camera in some embodiments.
  • the projection device 2 may also have a built-in or external camera 700 , and the camera 700 may take images of images projected by the projection device 2 to obtain projection content images.
  • the projection device 2 checks the definition of the projected content image to determine whether the current lens focal length is appropriate, and adjusts the focal length if it is not appropriate.
  • the projection device 2 can continuously adjust the lens position and take pictures, and find the focus position by comparing the clarity of the pictures at the front and rear positions, so as to adjust the movable lens 311 in the optical assembly to a suitable position.
  • the controller 500 may first control the driving motor 320 to gradually move the moving lens 311 from the focus start position to the focus end position, and continuously obtain projected content images through the camera 700 during this period. Then, by performing definition detection on multiple projected content images, the position with the highest definition is determined, and finally the driving motor 320 is controlled to adjust the moving lens 311 from the focusing terminal to the position with the highest definition, and automatic focusing is completed.
  • FIG. 10 is a schematic flowchart of obstacle avoidance projection performed by a projection device in an embodiment of the present application.
  • the controller in the projection device 2 is configured as:
  • before projecting an image into the projection area of the projection surface, the projection device 2 can automatically detect obstacles in the projection area, and project the image after the obstacle detection result determines that there is no obstacle in the projection area, thereby realizing the automatic obstacle avoidance function. That is to say, if there is an obstacle in the projection area, the projection area of the projection device 2 before the obstacle avoidance process differs from the projection area after the obstacle avoidance process is completed. Specifically, it can be set that the projection device 2 receives a projection instruction and, in response to the received projection instruction, starts the automatic obstacle avoidance function.
  • the projection instruction refers to a control instruction used to trigger the projection device 2 to automatically perform an obstacle avoidance process.
  • the projection instruction may be an instruction actively input by the user. For example, after the projection device 2 is powered on, it can project an image onto the projection area of the projection surface. At this time, the user can press the preset automatic obstacle avoidance switch on the projection device 2, or the automatic obstacle avoidance button on the remote control of the projection device 2, so that the projection device 2 turns on the automatic obstacle avoidance function and automatically performs obstacle detection on the projection area.
  • the controller controls the optical machine 200 to project the white image card to the projection area on the projection surface in response to the projection instruction.
  • the camera 700 is controlled to capture an image of the projection surface. Because the image area of the projection surface image captured by the camera 700 is larger than that of the projection area, in order to obtain the image of the projection area, that is, the projection content image, the controller is configured to: obtain the projection surface image captured by the camera 700, and calculate the coordinate values of the four corner points and four edge midpoints of the projection area in the coordinate system of the optical engine 200; the angle relationship between the projection surface and the optical engine 200 is then obtained by fitting a plane based on these coordinate values.
  • the corresponding coordinates of the four corner points and the midpoints of the four edges in the world coordinate system of the projection surface are obtained according to the angle relationship.
  • the coordinate values of the four corner points and the midpoints of the four edges of the projection area in the optical machine coordinate system are converted into corresponding coordinate values in the camera coordinate system through the homography matrix.
  • the position and area of the projection area in the projection surface image are determined according to the coordinate values of the four corner points and the midpoints of the four edges in the camera coordinate system.
  • the controller uses an image contour detection algorithm based on the projected content image to obtain multi-contour area information.
  • the multi-contour area information includes the obstacle contour coordinate set.
  • the set of obstacle outline coordinates is used to represent a collection of multiple obstacle outline coordinates.
  • the contour level corresponding to the obstacle may be represented by contour parameters.
  • contour parameters include the index numbers of the next contour, the previous contour, the parent contour, and the child contour. If some relationship has no corresponding contour, its index number in the contour parameters of the obstacle is assigned a negative value (for example, represented by -1).
  • for example, if contour A contains contour B, contour C, and contour D, then contour A is the parent contour, and contour B, contour C, and contour D are all child contours of contour A; if the position of contour C is above contour B, then contour C is the previous contour of contour B, and similarly contour B is the next contour of contour C.
  • FIG. 11 is a schematic diagram of obstacle sets and outline levels in the embodiment of the present application.
  • the obstacle set includes five closed contours: contour1, contour2, contour2a, contour3 and contour4.
  • contour1 and contour2 are the outermost contours, that is, they are in the same hierarchical relationship, and they are set to level 0.
  • Contour contour2a is a sub-contour of contour2, that is, contour2a is one level down, set to level 1.
  • Contour3 and contour4 are sub-contours of contour2a, that is, contour3 and contour4 are at the same level, which is set to level 2. Therefore, the contour parameters for contour contour2 are characterized as [-1, 1, 2a, -1].
  • the controller is configured to: screen the obstacle set according to the contour level to obtain a first target set, wherein the first target set includes at least one obstacle whose contour level is the outermost layer. That is to say, if enclosing or embedding relationships exist among the contours of multiple obstacles, only the obstacle corresponding to the outermost contour needs to be extracted; the purpose is that, in the process of implementing the obstacle avoidance function, if the obstacle corresponding to the outermost contour is avoided, any obstacle corresponding to an inner contour within that outermost contour is avoided as well. Exemplarily, continuing to refer to FIG. 11, the contours with level 0, that is, the outermost contours, are selected from the five closed contours contour1, contour2, contour2a, contour3, and contour4, and a first target set is generated according to the outermost contours; the first target set includes contour1 and contour2.
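  • With OpenCV, this level-0 screening can be sketched as follows; note that OpenCV orders each hierarchy entry as [next, previous, first_child, parent], slightly different from the patent's wording, with -1 again meaning "none":

```python
import cv2

def first_target_set(binary_image):
    """Keep only the outermost obstacle contours (level 0).

    cv2.findContours with RETR_TREE returns one hierarchy entry per
    contour; a contour whose parent index is -1 has no enclosing
    contour, i.e. it is outermost, matching the negative-index
    convention described above.
    """
    contours, hierarchy = cv2.findContours(binary_image, cv2.RETR_TREE,
                                           cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return []
    return [c for c, h in zip(contours, hierarchy[0]) if h[3] == -1]
```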
  • the controller is configured to: update the first target set according to the image area of the projected content image to obtain a second target set, so as to determine the non-obstacle area according to the second target set.
  • specifically, the controller acquires the center coordinates, width, and height corresponding to each obstacle in the first target set, calculates the obstacle area corresponding to each obstacle based on the center coordinates, width, and height, and calculates the first ratio of the obstacle area to the image area; if the first ratio is smaller than the first ratio threshold, the obstacle is deleted from the first target set, so that the second target set is generated according to the first target set after deletion.
  • for example, the first target set includes contour1 and contour2; the area of contour1 occupies 5 pixels, the area of contour2 occupies 30 pixels, the image area occupies 100 pixels, and the first ratio threshold is 1/4. The first ratio corresponding to contour1 is then 0.05, and the first ratio corresponding to contour2 is 0.3. It can be seen that the first ratio corresponding to contour1 is smaller than the first ratio threshold, so contour1 is deleted from the first target set, completing the update of the first target set.
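  • A sketch of this first-ratio screening, assuming the contours come from OpenCV and using the threshold from the example above:

```python
import cv2

def update_first_target_set(first_target_set, image_area,
                            first_ratio_threshold=0.25):
    """Delete obstacles whose area ratio is below the first ratio threshold.

    cv2.boundingRect yields each contour's top-left corner, width, and
    height (from which the center coordinates follow); width * height
    is used as the obstacle area, and contours covering too small a
    fraction of the projected content image -- like contour1 above with
    ratio 0.05 < 1/4 -- are removed.
    """
    second_target_set = []
    for contour in first_target_set:
        x, y, w, h = cv2.boundingRect(contour)
        if (w * h) / float(image_area) >= first_ratio_threshold:
            second_target_set.append(contour)
    return second_target_set
```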
  • before the step of performing obstacle contour detection on the projected content image, the controller is configured to: perform grayscale processing on the projected content image to obtain a grayscale image.
  • the edge image in the gray image is extracted by edge detection algorithm.
  • the noise removal process is performed on the edge image to obtain a noise-removed image.
  • a threshold binarization algorithm is used to segment the noise-removed image to obtain a foreground image and a background image, so as to perform obstacle contour detection based on the foreground image and the background image.
  • the edge image is first subjected to a dilation operation. That is, the pixel coordinates in the edge image are read in sequence, and a structuring element and a convolution kernel threshold are set, where the structuring element is, for example, a 3×3 convolution kernel. Convolution is performed between all pixel coordinates and the convolution kernel to obtain a first convolution result; if the first convolution result is greater than the convolution threshold, the pixel is set to 1, otherwise 0. In this way, when the convolution kernel traverses the pixels of the image in sequence, if a value of 1 appears within the convolution kernel, the pixel at the position of the kernel's origin in the edge image is assigned 1, otherwise 0. Slender gaps in the image edges can thus be closed by the dilation operation.
  • the structuring elements may be structure diagrams of different sizes and ratios, such as 3×3 or 5×5. The present application only takes a 3×3 structure diagram and assigning 0 or 1 to a pixel as an example; the structuring element and the pixel assignment can be set according to the specific calculation logic and algorithm parameters.
  • an erosion operation is then performed on the dilated image. Specifically, the pixel coordinates in the dilated image are read in sequence and convolved with the convolution kernel to obtain a second convolution result; when the pixels in the second convolution result are all 1, the corresponding pixel in the dilated image is set to 1, otherwise 0. This removes noise speckles from the dilated image; at the same time, objects can be separated at thin edge coordinate points, and the boundaries of larger objects can be smoothed without changing their area.
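  • The preprocessing chain can be sketched as follows; the patent does not name its edge detector, so Canny is assumed here, and cv2.dilate/cv2.erode stand in for the convolution-based dilation and erosion just described:

```python
import cv2
import numpy as np

def denoised_edge_image(projected_content_image):
    """Grayscale -> edges -> dilation -> erosion (a morphological closing).

    Dilating with a 3x3 structuring element closes slender gaps in the
    detected edges; the following erosion removes small noise speckles
    while smoothing larger boundaries without changing their area.
    """
    gray = cv2.cvtColor(projected_content_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)          # assumed edge detector
    kernel = np.ones((3, 3), np.uint8)        # 3x3 structuring element
    dilated = cv2.dilate(edges, kernel, iterations=1)
    return cv2.erode(dilated, kernel, iterations=1)
```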
  • the present application utilizes a threshold binarization algorithm to segment the noise-removed image.
  • the controller divides the noise-removed image into multiple image regions composed of adjacent pixels, computes the mean and variance of the pixel values of each image region, determines the pixel threshold for the pixels in that region based on the mean and variance, and iterates through the pixels of the region: if a pixel's value is greater than the pixel threshold, a foreground image is generated based on the area where the pixel is located; if the pixel's value is smaller than the pixel threshold, a background image is generated based on that area.
  • the noise-removed image is divided into R ⁇ R image blocks to obtain m ⁇ n image blocks.
  • each image block corresponds to an image area.
  • the pixels of the current image block are traversed: if a pixel is greater than the pixel threshold, the current image block is set as the foreground image; if a pixel is smaller than the pixel threshold, the current image block is set as the background image.
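  • A sketch of this block-wise binarization; the rule threshold = mean + k·std is an assumed concrete form, since the patent only states that the threshold is determined from the block's mean and variance:

```python
import numpy as np

def blockwise_binarize(denoised, block=16, k=1.0):
    """Segment the image into blocks and threshold each one locally.

    Each block's pixel threshold is derived from its own mean and
    standard deviation; pixels above the threshold become foreground
    (255), the rest background (0).  Dividing an image into R x R
    blocks yields the m x n image blocks described above.
    """
    img = denoised.astype(np.float32)
    out = np.zeros(img.shape, dtype=np.uint8)
    h, w = img.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = img[y:y + block, x:x + block]
            thr = region.mean() + k * region.std()
            out[y:y + block, x:x + block] = np.where(region > thr, 255, 0)
    return out
```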
  • the controller may perform obstacle contour detection according to the foreground image and the background image; alternatively, the controller can perform obstacle contour detection based only on the foreground image, or based on the projected content image.
  • the controller acquires contour coordinates corresponding to each obstacle in the second target set.
  • the contour coordinates are removed from the projected content image, so as to determine the non-obstacle area according to the projected content image after the contour coordinates are removed.
  • the controller can remove all obstacles in the projected content image by acquiring the contour coordinates corresponding to all obstacles and removing the contour coordinates corresponding to all obstacles in the projected content image.
  • the area corresponding to the projected content image after removing all obstacles is determined as a non-obstacle area.
  • the non-obstacle area is a polygonal area.
  • FIG. 12 is a schematic diagram of the rectangular grid and the non-obstacle area in the embodiment of the present application.
  • the controller acquires corner coordinates of the projected image, where the corner coordinates are coordinates corresponding to four vertices and/or midpoints of four sides of the projected image.
  • a rectangular grid is constructed based on the corner point coordinates, and the rectangular grid includes M×N grid cells. All grid cells are then traversed, and the inclusion relationship between each grid cell and the non-obstacle area is judged; if a grid cell lies in the non-obstacle area, its grid identifier is assigned 1.
  • the controller can search the rectangular grid for a rectangular area formed by the grid whose grid identifier is 1, and determine the rectangular area as the pre-projection area. Furthermore, according to the shooting parameters of the camera 700, the pre-projection area in the projection image is converted to the projection area on the projection surface, and the optical machine 200 is controlled to project the playback content into the projection area to realize the automatic obstacle avoidance function.
  • the controller searches the rectangular grid for the largest rectangular area formed by grid cells whose grid identifier is 1, that is, obtains the largest rectangular area in the non-obstacle area.
  • all rectangular areas formed by grids whose grid identifier is 1 are traversed to obtain the number of pixels in each rectangular area. Extract the rectangular area with the largest number of pixels, and determine this rectangular area as the largest rectangular area in the non-obstacle area.
  • the controller calculates the second ratio of the area of the rectangular area to the image area of the projected image, and sets a second ratio threshold. If the second ratio is greater than the second ratio threshold, the area of the rectangular area satisfies the area condition, and the rectangular area is determined as the pre-projection area. A sketch of the grid search appears below.
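  • The grid search can be sketched as the classic maximal-rectangle scan below; the grid flags are assumed to come from the inclusion test above, and the returned cell rectangle would then be mapped back to pixel coordinates and checked against the second ratio threshold:

```python
import numpy as np

def largest_free_rectangle(grid):
    """Find the largest all-ones rectangle in an MxN grid of 0/1 flags.

    grid[i][j] == 1 marks a cell lying wholly inside the non-obstacle
    area.  Every row is treated as the base of a histogram of
    consecutive free cells, and the best rectangle over each histogram
    is found with a monotonic stack.
    Returns (top_row, left_col, height, width) in grid cells.
    """
    g = np.asarray(grid, dtype=int)
    m, n = g.shape
    heights = np.zeros(n, dtype=int)
    best = (0, 0, 0, 0)
    for i in range(m):
        heights = np.where(g[i] == 1, heights + 1, 0)
        stack = []                        # bar indices with rising heights
        for j in range(n + 1):
            cur = heights[j] if j < n else 0   # sentinel flushes the stack
            while stack and heights[stack[-1]] >= cur:
                top = stack.pop()
                height = heights[top]
                left = stack[-1] + 1 if stack else 0
                width = j - left
                if height * width > best[2] * best[3]:
                    best = (i - height + 1, left, height, width)
            stack.append(j)
    return best
```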
  • when the controller determines the pre-projection area, if multiple largest rectangular areas are found, the rectangular area whose extension baseline passes through the center point of the projected image is extracted from them, so that the second ratio is calculated according to the extracted rectangular area.
  • otherwise, the controller executes the process of updating the non-obstacle area, and extracts the pre-projection area again in the updated non-obstacle area, so as to determine the projection area on the projection surface according to the pre-projection area.
  • the controller is configured to: calculate the distance between the centroid coordinates of each obstacle in the second target set and the center coordinates of the projected content image; extract the first obstacle and the second obstacle in the second target set, wherein the first obstacle is the obstacle with the smallest distance and the second obstacle is the obstacle with the largest obstacle area; calculate the third ratio of the obstacle area of the first obstacle to the obstacle area of the second obstacle; and, if the third ratio is smaller than the third ratio threshold, delete the first obstacle from the second target set, so as to generate a third target set according to the second target set after deletion. The non-obstacle area is then updated through the third target set. The step of updating the non-obstacle area is the same as the above step of determining the non-obstacle area based on the second target set, and is not repeated here.
  • before calculating the distance between the centroid coordinates of the obstacles and the center coordinates of the projected content image, the controller is further configured to sort the obstacles in the second target set by area, obtaining the sorted second target set [C1, C2, ..., Cn].
  • the process of updating the non-obstacle area is the process of updating the second target set.
  • the controller is configured to: calculate a credibility parameter, and update the second target set according to the credibility parameter.
  • the credibility parameter is used to characterize the distance between the obstacle and the center of the projected image.
  • the value range of the reliability parameter is [0, 1]. If the value of the reliability parameter corresponding to the obstacle is larger, it means that the reliability of the obstacle is higher and the distance from the center of the projected image is smaller. Conversely, if the value of the reliability parameter corresponding to the obstacle is smaller, it means that the reliability of the obstacle is lower and the distance from the center of the projected image is larger.
  • the controller executes the image geometric moments algorithm to obtain the contour centroid corresponding to each obstacle in the second target set; the credibility parameter corresponding to each obstacle can then be obtained by calculating the Euclidean distance between each obstacle's contour centroid and the center of the projected image.
  • for example, the sorted second target set is [C1, C2, ..., Cn], and the obstacle with the largest credibility parameter is obstacle Cn-1.
  • the obstacle Cn-1 with the largest credibility parameter value and the obstacle C1 with the largest obstacle area in the second target set are extracted, and the third ratio of the obstacle area of Cn-1 to the obstacle area of C1 is calculated; if the third ratio is smaller than the third ratio threshold, it means that the area corresponding to Cn-1 is small relative to the obstacle area of C1, so the area corresponding to Cn-1 may not be treated as an obstacle. The controller deletes obstacle Cn-1 from the second target set, thereby completing the update of the second target set.
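  • A sketch of this credibility-based update, assuming OpenCV moments for the centroids; the third ratio threshold value is a placeholder:

```python
import cv2
import numpy as np

def update_second_target_set(second_target_set, image_shape,
                             third_ratio_threshold=0.2):
    """Drop a small obstacle lying closest to the projected-image center.

    Each contour's centroid comes from its image geometric moments;
    credibility is higher the closer the centroid lies to the image
    center.  Following the steps above, the most credible obstacle is
    removed when its area is much smaller than the largest obstacle's.
    """
    h, w = image_shape[:2]
    center = np.array([w / 2.0, h / 2.0])

    def centroid(c):
        m = cv2.moments(c)
        return np.array([m["m10"] / (m["m00"] + 1e-9),
                         m["m01"] / (m["m00"] + 1e-9)])

    dists = [np.linalg.norm(centroid(c) - center) for c in second_target_set]
    first = int(np.argmin(dists))           # largest credibility parameter
    areas = [cv2.contourArea(c) for c in second_target_set]
    second = int(np.argmax(areas))          # largest obstacle area
    if areas[first] / (areas[second] + 1e-9) < third_ratio_threshold:
        return [c for k, c in enumerate(second_target_set) if k != first]
    return second_target_set
```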
  • the controller performs the process of updating the second target set again until the pre-projection area can be extracted from the non-obstacle area.
  • the present application only takes selecting the obstacle with the largest credibility parameter as an example; obstacles whose credibility parameter value falls within the range [0.5, 1] may also be selected.
  • the present application provides a projection device.
  • the controller in the projection device obtains the projection content image, performs obstacle contour detection on the projection content image, and obtains a set of obstacle contour coordinates.
  • the non-obstacle area in the projected content image is determined according to the obstacle outline coordinate set.
  • a pre-projection area is extracted in the non-obstacle area, and the pre-projection area is a rectangular area within the non-obstacle area. Calculate the projection area on the projection surface according to the pre-projection area and the shooting parameters of the camera 700 , and control the optical machine 200 to project the playback content to the projection area.
  • this avoids situations in which the projection device fails to detect obstacles, or the projection area remaining after obstacle detection is too small, both of which reduce the user experience.
  • the present application proposes an obstacle avoidance projection method, which is applied to a projection device.
  • the projection device includes an optical engine, a camera, and a controller; the obstacle avoidance projection method includes: obtaining a projection instruction input by a user; acquiring a projected content image in response to the projection instruction; performing obstacle contour detection on the projected content image to obtain an obstacle contour coordinate set; determining a non-obstacle area in the projected content image according to the obstacle contour coordinate set; extracting a pre-projection area in the non-obstacle area, the pre-projection area being a rectangular area within the non-obstacle area; and calculating the projection area on the projection surface according to the pre-projection area and the shooting parameters of the camera, and controlling the optical engine to project the playing content to the projection area.
  • the method further includes: obtaining an obstacle set according to the obstacle contour coordinate set, the obstacle set including at least one obstacle and its corresponding contour level, the contour level being used to characterize the enclosing or embedding relationships between obstacles; screening the obstacle set according to the contour level to obtain a first target set, wherein the first target set includes at least one obstacle whose contour level is the outermost layer; and updating the first target set according to the image area of the projected content image to obtain a second target set, so as to determine the non-obstacle area according to the second target set.
  • the method further includes: in the step of updating the first target set according to the image area of the projected content image: acquiring the center coordinates, width, and height corresponding to each obstacle in the first target set; calculating the obstacle area corresponding to the obstacle according to the center coordinates, width, and height; calculating the first ratio of the obstacle area to the image area; and, if the first ratio is smaller than the first ratio threshold, deleting the obstacle from the first target set, so as to generate the second target set according to the first target set after deletion.
  • the method further includes: in the step of determining the non-obstacle area: acquiring the contour coordinates corresponding to each obstacle in the second target set; and removing the contour coordinates from the projected content image, so as to determine the non-obstacle area according to the projected content image after the contour coordinates are removed.
  • the method further includes: in the step of extracting a pre-projection area in the non-obstacle area: acquiring the rectangular areas in the non-obstacle area; traversing the rectangular areas to obtain the number of pixels in each; extracting the rectangular area with the largest number of pixels and calculating the second ratio of the area of that rectangular area to the image area; and, if the second ratio is greater than the second ratio threshold, determining the rectangular area as the pre-projection area.
  • the method further includes: calculating the distance between the centroid coordinates of the obstacles in the second target set and the center coordinates of the projected content image; extracting the first obstacle and the second obstacle from the second target set, wherein the first obstacle is the obstacle with the smallest distance and the second obstacle is the obstacle with the largest obstacle area; calculating the third ratio of the obstacle area of the first obstacle to the obstacle area of the second obstacle; and, if the third ratio is smaller than the third ratio threshold, deleting the first obstacle from the second target set, so as to generate a third target set according to the second target set after deletion.
  • the method further includes: if the second ratio is smaller than the second ratio threshold, updating the non-obstacle area according to the third target set, and extracting the pre-projection area again in the updated non-obstacle area, so as to determine the projection area on the projection surface according to the pre-projection area.
  • the method further includes: before the step of performing obstacle contour detection on the projected content image, performing grayscale processing on the projected content image to obtain a grayscale image; using an edge detection algorithm to extract the The edge image in the grayscale image; the edge image is denoised to obtain an image after denoising; the threshold binarization algorithm is used to segment the denoised image to obtain a foreground image and a background image, to obtain a foreground image and a background image according to the Contour detection of obstacles is performed using the foreground and background images described above.
  • the method further includes: in the step of segmenting the noise-removed image using a threshold binarization algorithm, segmenting the noise-removed image into a plurality of image regions composed of adjacent pixel points ; calculate the pixel value mean and variance of the image area; determine the pixel threshold of the pixel in the image area based on the mean and variance; traverse the pixels in the image area; if the pixel If the pixel value is greater than the pixel threshold, the foreground image is generated based on the area where the pixel is located; if the pixel value of the pixel is less than the pixel threshold, then the foreground image is generated based on the area where the pixel is located Describe the background image.
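Purely as an illustration of the pipeline summarized in these clauses, the following Python/OpenCV sketch derives a non-obstacle mask from a projection content image. The Canny thresholds, the 3×3 kernel and the area cutoff are assumed tuning values, not figures taken from this disclosure:

```python
import cv2
import numpy as np

def non_obstacle_mask(projection_image, min_area_ratio=0.01):
    """Return a binary mask (255 = non-obstacle area) for one image.
    min_area_ratio plays the role of the 'first ratio threshold':
    outermost contours below this share of the image are ignored."""
    gray = cv2.cvtColor(projection_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # edge detection
    kernel = np.ones((3, 3), np.uint8)
    edges = cv2.erode(cv2.dilate(edges, kernel), kernel)  # denoise: close gaps, strip specks
    contours, hierarchy = cv2.findContours(
        edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    h, w = gray.shape
    mask = np.full((h, w), 255, np.uint8)
    if hierarchy is None:
        return mask                                       # nothing detected
    for i, contour in enumerate(contours):
        if hierarchy[0][i][3] != -1:
            continue                                      # keep outermost level only
        if cv2.contourArea(contour) < min_area_ratio * h * w:
            continue                                      # first-ratio screening
        cv2.drawContours(mask, [contour], -1, 0, cv2.FILLED)
    return mask
```

The rectangular pre-projection area would then be searched inside this mask, for instance with the grid scan sketched later in the description.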

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Projection Apparatus (AREA)
  • Automatic Focus Adjustment (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application relates to the technical field of display devices, and in particular to a projection device and a display control method. It addresses, to some extent, the problems that the projection angle must be manually fine-tuned after the projection device is moved, and that the projector may misidentify a large solid-colored area such as a wall as the screen, so that the playback content cannot be projected accurately onto the screen's projection area. The projection device includes: a projection assembly; and a controller configured to: binarize a first image into a second image based on a brightness analysis of the first image's grey-scale histogram; determine a first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour; and, when the second-level closed contour is determined to be a convex quadrilateral, project the playback content onto the second-level closed contour, the second-level closed contour corresponding to the projection area of the screen; wherein the screen includes a screen edge band corresponding to the first-level closed contour, and the projection area is surrounded by the screen edge band.

Description

Projection device and display control method
Cross-reference to related applications
This application claims priority to the Chinese patent applications No. 202111355866.0 filed on November 16, 2021, No. 202210006233.7 filed on January 5, 2022, and No. 202210583357.1 filed on May 25, 2022, the entire contents of which are incorporated herein by reference.
Technical field
The present application relates to the technical field of display devices, and in particular to a projection device and a display control method.
Background
A projection device is a device that can project images or video onto a screen for display. It can be connected, through different interfaces, to computers, VCD, DVD and BD players, game consoles, DV cameras, broadcast signal sources, video signal sources and the like to play the corresponding video signal.
In some display control implementations that project playback content onto the projection area of a screen, the projection device first captures an image of the screen region; the captured image is then binarized so that object outlines in the image are displayed more clearly; finally, based on the binarized image, the projection device extracts all closed contours contained in it and identifies the closed contour with the largest area and uniform internal color as the screen's projection area. However, when a large solid-colored wall surrounds the target screen and the wall's edges form a closed contour, the projection device may recognize the wall as the screen, so that the playback content is projected onto the wall instead of the designated screen.
Summary
In a first aspect, the present application provides a projection device, including: a projection assembly for projecting playback content onto the screen corresponding to the projection device; a camera for capturing images; and a controller configured to: binarize a first image captured by the camera into a second image based on a brightness analysis of the first image's grey-scale histogram; determine a first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour; and, when the second-level closed contour is determined to be a convex quadrilateral, control the projection assembly to project the playback content onto the second-level closed contour, the second-level closed contour corresponding to the projection area of the screen; wherein the screen includes a screen edge band corresponding to the first-level closed contour, and the projection area is surrounded by the screen edge band.
In a second aspect, the present application further provides a projection display control method for a projection device, the method including: binarizing a first image captured by the projection device's camera into a second image based on a brightness analysis of the first image's grey-scale histogram, the first image being an environment image, the projection device including a projection assembly for projecting playback content onto the screen corresponding to the projection device; determining a first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour; and, when the second-level closed contour is determined to be a convex quadrilateral, projecting the playback content onto the second-level closed contour, the second-level closed contour corresponding to the projection area of the screen; wherein the screen includes a screen edge band corresponding to the first-level closed contour, and the projection area is surrounded by the screen edge band.
Brief description of the drawings
FIG. 1A is a schematic diagram of the placement of a projection device according to an embodiment of the present application;
FIG. 1B is a schematic diagram of the optical path of a projection device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the circuit architecture of a projection device according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a projection device according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a projection device according to another embodiment of the present application;
FIG. 5 is a schematic circuit diagram of a projection device according to an embodiment of the present application;
FIG. 6A is a schematic diagram of the screen corresponding to a projection device according to an embodiment of the present application;
FIG. 6B is a schematic diagram of a first image of the environment of a projection device according to another embodiment of the present application;
FIG. 6C is a schematic diagram of a first image and its grey-scale histogram according to another embodiment of the present application;
FIG. 6D is a schematic diagram of the second image obtained by binarizing the environment image of a projection device according to an embodiment of the present application;
FIG. 6E is a binarized schematic diagram of the closed contours corresponding to a screen according to an embodiment of the present application;
FIG. 6F is a schematic diagram of a projection device misidentifying a large solid-colored wall as the screen's projection area according to an embodiment of the present application;
FIG. 6G is a schematic diagram of concave and convex quadrilaterals according to an embodiment of the present application;
FIG. 7A is a schematic diagram of the system framework with which a projection device implements display control according to an embodiment of the present application;
FIG. 7B is a signaling sequence diagram of a projection device implementing the eye-protection function according to another embodiment of the present application;
FIG. 7C is a signaling sequence diagram of a projection device implementing the display correction function according to another embodiment of the present application;
FIG. 7D is a flow diagram of a projection device implementing the autofocus algorithm according to another embodiment of the present application;
FIG. 7E is a flow diagram of a projection device implementing the keystone correction and obstacle-avoidance algorithms according to another embodiment of the present application;
FIG. 7F is a flow diagram of a projection device implementing the screen-entry algorithm according to another embodiment of the present application;
FIG. 7G is a flow diagram of a projection device implementing the eye-protection algorithm according to another embodiment of the present application;
FIG. 8 is a schematic diagram of the lens structure of a projection device in an embodiment of the present application;
FIG. 9 is a schematic diagram of the distance sensor and camera structure of a projection device in an embodiment of the present application;
FIG. 10 is a flow diagram of a projection device performing obstacle-avoidance projection in an embodiment of the present application;
FIG. 11 is a schematic diagram of an obstacle set and contour levels in an embodiment of the present application;
FIG. 12 is a schematic diagram of the rectangular grid and the non-obstacle area in an embodiment of the present application;
FIG. 13 is a schematic diagram of a projection device generating the second target set in an embodiment of the present application;
FIG. 14 is a schematic diagram of a projection device generating the third target set in an embodiment of the present application.
Detailed description
To make the purpose and implementations of the present application clearer, exemplary implementations of the present application are described below clearly and completely with reference to the accompanying drawings of the exemplary embodiments. Obviously, the described exemplary embodiments are only some, not all, of the embodiments of the present application.
FIG. 1A is a schematic diagram of the placement of a projection device according to an embodiment of the present application.
In some embodiments, the projection device provided by the present application includes a projection screen 1 and a projecting apparatus 2. The projection screen 1 is fixed at a first position, and the projecting apparatus 2 is placed at a second position so that the projected picture coincides with the projection screen 1. This step is performed by professional after-sales technicians; that is, the second position is the optimal placement position of the projecting apparatus 2.
FIG. 1B is a schematic diagram of the optical path of a projection device according to an embodiment of the present application.
The light-emitting component of the projection device may be implemented as a laser, an LED or another light source. Below, a laser-type projection device is taken as an example to describe the projection device and the projection display control scheme for automatically entering the screen area.
In some embodiments, the projection device may include a laser light source 100, an optical engine 200, a lens 300 and a projection medium 400. The laser light source 100 provides illumination for the optical engine 200; the optical engine 200 modulates the source beam and outputs it to the lens 300 for imaging, and the beam is projected onto the projection medium 400 to form the projected picture.
In some embodiments, the laser light source of the projection device includes a projection assembly and an optical lens assembly. In the laser-type projection device provided by the present application, the projection assembly is specifically implemented as a laser assembly, which is not repeated below.
The beam emitted by the laser assembly can pass through the optical lens assembly to illuminate the optical engine. The optical lens assembly, for example, requires a higher grade of environmental cleanliness and hermetic sealing, while the chamber housing the laser assembly can be sealed to a lower, dust-proof grade to reduce sealing cost.
In some embodiments, the optical engine 200 of the projection device may be implemented to include a blue engine, a green engine and a red engine, and may further include a cooling system, a circuit control system and the like. It should be noted that, in some embodiments, the light-emitting component of the projection device may also be implemented with an LED light source.
In some embodiments, the present application provides a projection device including a three-color optical engine and a controller. The three-color engine, which integrates a blue engine, a green engine and a red engine, modulates the laser that generates the pixels of the user interface. The controller is configured to: obtain the average grey value of the user interface; and, when the average grey value is greater than a first threshold and persists longer than a time threshold, reduce the working current of the red engine by a preset gradient to reduce the heat generated by the three-color engine. It can be seen that by lowering the working current of the red engine integrated in the three-color engine, overheating of the red engine, and thus of the three-color engine and the projection device, can be controlled.
The optical engine 200 may be implemented as a three-color engine that integrates a blue engine, a green engine and a red engine.
Below, the implementations provided by the present application are described taking the optical engine 200 as including a blue engine, a green engine and a red engine as an example.
In some embodiments, the optical system of the projection device consists of a light source part and an optical engine part. The light source part illuminates the optical engine, the engine part modulates the illumination beam provided by the light source, and the beam finally exits through the lens to form the projected picture.
In some embodiments, the light source part specifically includes a housing, a laser assembly and an optical lens assembly; the beam emitted by the laser assembly is shaped and combined by the optical lens assembly to illuminate the optical engine. The laser assembly includes light-emitting chips, collimating lenses, wires and other components, but is usually a pre-packaged module. Compared with the laser assembly used as a module, the optical lenses, being precision components as well, demand higher environmental cleanliness: if dust accumulates on a lens surface, it degrades the lens's handling of light and attenuates the output brightness, ultimately impairing the image the projection device projects through the lens; moreover, dust absorbs the high-energy laser beam, heats up, and can easily damage the lens.
In some embodiments, the optical lens assembly includes at least a convex lens, which is part of a telescope system. A telescope system usually consists of one convex lens and one concave lens and is used to narrow a large-area laser beam into a smaller one. The convex lens usually has a large surface and is placed near the laser output, where it can receive a large-area beam and also serve as a large window for the beam to pass through, reducing optical loss.
The optical lens assembly may further include concave lenses, beam-combining mirrors, homogenizing components or despeckle components, used to reshape and recombine the laser beam to meet the needs of the illumination system.
In some embodiments, the laser assembly includes a red laser module, a green laser module and a blue laser module; each laser module is dust-proof sealed to its corresponding mounting port with a gasket (fluororubber or another sealing material).
FIG. 2 is a schematic diagram of the circuit architecture of a projection device according to an embodiment of the present application.
In some embodiments, the projection device provided by the present disclosure includes multiple lasers. A brightness sensor is arranged in the output light path of the laser light source; the sensor can detect a first brightness value of the laser light source and send it to the display control circuit.
The display control circuit can obtain a second brightness value corresponding to each laser's drive current and, on determining that the difference between a laser's second brightness value and its first brightness value exceeds a difference threshold, determine that the laser has suffered a COD (catastrophic optical damage) fault. The display control circuit then adjusts the current control signal of the laser's corresponding drive assembly until the difference is less than or equal to the threshold, thereby clearing the laser's COD fault. The projection device can thus clear COD faults promptly, reducing the laser damage rate and ensuring the device's image display quality.
In some embodiments, the projection device may include a display control circuit 10, a laser light source 20, at least one laser drive assembly 30 and at least one brightness sensor 40; the laser light source 20 may include at least one laser in one-to-one correspondence with the at least one laser drive assembly 30. Here "at least one" means one or more, and "multiple" means two or more.
In some embodiments, the projection device includes a laser drive assembly 30 and one brightness sensor 40; correspondingly, the laser light source 20 includes three lasers in one-to-one correspondence with the laser drive assembly 30, which may be a blue laser 201 for emitting blue laser light, a red laser 202 for emitting red laser light and a green laser 203 for emitting green laser light. In some embodiments, the laser drive assembly 30 may be implemented as several sub-assemblies, each corresponding to a laser of a different color.
The display control circuit 10 outputs primary-color enable signals and primary-color current control signals to the laser drive assembly 30 to drive the lasers. Specifically, as shown in FIG. 2, the display control circuit 10 is connected to the laser drive assembly 30 and outputs at least one enable signal in one-to-one correspondence with the three primary colors of each frame of the display image, transmitting each to the corresponding drive assembly 30, and likewise outputs at least one current control signal in one-to-one correspondence with each frame's three primary colors, transmitting each to the corresponding drive assembly 30. By way of example, the display control circuit 10 may be a microcontroller unit (MCU), and the current control signal may be a pulse-width modulation (PWM) signal.
In some embodiments, the display control circuit 10 may output a blue PWM signal B_PWM for the blue laser 201 based on the blue component of the image to be displayed, a red PWM signal R_PWM for the red laser 202 based on the red component, and a green PWM signal G_PWM for the green laser 203 based on the green component. It may output the enable signal B_EN for the blue laser 201 based on the blue laser's lighting duration within the drive period, and likewise the enable signals R_EN and G_EN for the red laser 202 and green laser 203 based on their lighting durations.
The laser drive assembly 30 is connected to its corresponding laser and, in response to the received enable and current control signals, supplies the corresponding drive current to the connected laser; each laser emits light when driven by the current supplied by the drive assembly 30.
In some embodiments, the blue laser 201, red laser 202 and green laser 203 are each connected to the laser drive assembly 30. The drive assembly 30 can respond to the blue PWM signal B_PWM and enable signal B_EN sent by the display control circuit 10 by supplying the corresponding drive current to the blue laser 201, which emits light under that drive current.
The brightness sensor is arranged in the output light path of the laser light source, usually to one side of the path so as not to block it. As shown in FIG. 2, at least one brightness sensor 40 is arranged in the output light path of the laser light source 20; each sensor is connected to the display control circuit 10, detects a laser's first brightness value and sends it to the display control circuit 10.
In some embodiments, the display control circuit 10 also obtains the second brightness value corresponding to each laser's drive current. If the difference between a laser's second brightness value and its first brightness value exceeds the difference threshold, the laser has suffered a COD fault, and the display control circuit 10 adjusts the current control signal of the drive assembly 30 until the difference is less than or equal to the threshold; that is, it clears the COD fault by lowering the laser's drive current. Specifically, both the first and second brightness values are expressed as optical output power; the second brightness value may be pre-stored, or may be the value returned by the brightness sensor during normal emission. When a COD fault occurs, the laser's optical output power usually drops sharply, and the first brightness value returned by the sensor will be less than half the normal second brightness value. On confirming the fault, the display control circuit reduces the current control signal of the laser's corresponding drive assembly and keeps sampling and comparing the brightness returned by the sensor.
In some embodiments, if the difference between the detected second brightness value and first brightness value of a laser is less than or equal to the difference threshold, the laser has no COD fault, and the display control circuit 10 need not adjust the current control signal of the laser's drive assembly 30.
The display control circuit 10 may store a correspondence between current and brightness. The brightness value for each current in this correspondence is the initial brightness the laser can emit when working normally under that current (i.e. without a COD fault), for example its initial brightness when first lit under that current.
In some embodiments, the display control circuit 10 can look up the second brightness value for each laser's drive current from this correspondence; the drive current is the laser's current actual working current, and the corresponding second brightness value is the brightness the laser can emit when working normally under that current. The difference threshold may be a fixed value pre-stored in the display control circuit 10.
In some embodiments, when adjusting the current control signal of a laser's drive assembly 30, the display control circuit 10 may reduce the duty cycle of that signal, thereby reducing the laser's drive current.
In some embodiments, the brightness sensor 40 can detect the first brightness value of the blue laser 201 and send it to the display control circuit 10, which can obtain the blue laser's drive current and look up the corresponding second brightness value in the current-brightness correspondence. It then checks whether the difference between the second and first brightness values exceeds the threshold; if so, the blue laser 201 has a COD fault, and the circuit lowers the current control signal of the blue laser's drive assembly 30. The display control circuit 10 can then obtain the blue laser's first brightness value and the second brightness value for its drive current again and, if the difference still exceeds the threshold, lower the current control signal again, looping until the difference is less than or equal to the threshold. The blue laser 201's COD fault is thereby cleared by lowering its drive current.
In some embodiments, using the first brightness values of each laser obtained by the at least one brightness sensor 40 and the second brightness values for each laser's drive current, the display control circuit 10 can monitor each laser for COD faults in real time and, on determining that any laser has a COD fault, clear it promptly, shortening the fault's duration, reducing damage to the laser and ensuring the projection device's image display quality. (A sketch of this monitoring loop follows.)
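As a minimal sketch of the monitoring loop just described: the hooks (`expected`, `sensor`) are hypothetical stand-ins for the stored current-brightness correspondence and the brightness sensor 40, and the numeric values are assumptions rather than figures from this disclosure:

```python
def clear_cod_fault(expected, sensor, duty, diff_threshold=0.5,
                    duty_step=0.05, max_iters=50):
    """expected(duty) -> brightness the laser should emit at this duty cycle
    sensor()          -> first brightness value read back from the light path
    Lowers the PWM duty cycle until the measured brightness is back within
    diff_threshold of the expected value, then returns the final duty."""
    for _ in range(max_iters):
        if expected(duty) - sensor() <= diff_threshold:
            break                              # difference within threshold: fault cleared
        duty = max(0.0, duty - duty_step)      # reduce drive current via duty cycle
    return duty
```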
FIG. 3 is a schematic structural diagram of a projection device according to an embodiment of the present application.
In some embodiments, the laser light source 20 in the projection device may include an independently arranged blue laser 201, red laser 202 and green laser 203, in which case the device may also be called a three-color projection device. The blue laser 201, red laser 202 and green laser 203 are all MCL-packaged lasers, whose small size favors a compact optical layout.
In some embodiments, referring to FIG. 3, the at least one brightness sensor 40 may include a first brightness sensor 401, a second brightness sensor 402 and a third brightness sensor 403, where the first brightness sensor 401 is a blue-light or white-light brightness sensor, the second brightness sensor 402 a red-light or white-light brightness sensor, and the third brightness sensor 403 a green-light or white-light brightness sensor.
The first brightness sensor 401 is arranged in the output light path of the blue laser 201, specifically to one side of the blue laser 201's collimated beam. Likewise, the second brightness sensor 402 is arranged in the output path of the red laser 202, specifically to one side of the red laser 202's collimated beam, and the third brightness sensor 403 is arranged in the output path of the green laser 203, specifically to one side of the green laser 203's collimated beam. Since the emitted laser light is not attenuated along its output path, placing the brightness sensors in the lasers' output paths improves the accuracy with which the sensors detect each laser's first brightness value.
The display control circuit 10 is also configured to read the value detected by the first brightness sensor 401 while controlling the blue laser 201 to emit, and to stop reading it when the blue laser 201 is switched off.
The display control circuit 10 is also configured to read the value detected by the second brightness sensor 402 while controlling the red laser 202 to emit, and to stop reading it when the red laser 202 is switched off.
The display control circuit 10 is also configured to read the value detected by the third brightness sensor 403 while controlling the green laser 203 to emit, and to stop reading it when the green laser 203 is switched off.
It should be noted that a single brightness sensor may instead be arranged in the combined light path of the three-color laser.
FIG. 4 is a schematic structural diagram of a projection device according to another embodiment of the present application.
In some embodiments, the projection device may further include a light pipe 110, which serves as a light-collecting optical component that receives and homogenizes the combined three-color laser output.
In some embodiments, the brightness sensor 40 may include a fourth brightness sensor 404, which may be a white-light brightness sensor. The fourth brightness sensor 404 is arranged in the output path of the light pipe 110, for example on its output side near the output face.
The display control circuit 10 is also configured to read the value detected by the fourth brightness sensor 404 while controlling the blue laser 201, red laser 202 and green laser 203 to switch on in a time-shared manner, ensuring that the fourth brightness sensor 404 can detect the first brightness values of the blue laser 201, the red laser 202 and the green laser 203, and to stop reading when all three lasers are switched off.
In some embodiments, the fourth brightness sensor 404 remains on throughout the projection of images by the device.
In some embodiments, referring to FIGS. 3 and 4, the projection device may further include a fourth dichroic plate 604, a fifth dichroic plate 605, a fifth mirror 904, a second lens assembly 90, a diffusion wheel 150, a TIR lens 120, a DMD 130 and a projection lens 140. The second lens assembly 90 includes a first lens 901, a second lens 902 and a third lens 903. The fourth dichroic plate 604 transmits blue laser light and reflects green laser light; the fifth dichroic plate 605 transmits red laser light and reflects green and blue laser light.
The blue laser light emitted by the blue laser 201 passes through the fourth dichroic plate 604, is reflected by the fifth dichroic plate 605 and enters the first lens 901 to be converged. The red laser light emitted by the red laser 202 passes through the fifth dichroic plate 605 directly into the first lens 901. The green laser light emitted by the green laser 203 is reflected by the fifth mirror 904 and then, after reflection by the fourth dichroic plate 604 and the fifth dichroic plate 605 in turn, enters the first lens 901 to be converged. The converged blue, red and green beams pass, time-shared, through the rotating diffusion wheel 150 for despeckling, are projected into the light pipe 110 for homogenization, shaped by the second lens 902 and third lens 903, totally reflected within the TIR lens 120, reflected by the DMD 130 back through the TIR lens 120, and finally projected by the projection lens 140 onto the display screen to form the image to be displayed.
FIG. 5 is a schematic circuit diagram of a projection device according to an embodiment of the present application.
In some embodiments, the laser drive assembly 30 may include a drive circuit 301, a switch circuit 302 and an amplifier circuit 303. The drive circuit 301 may be a driver chip; the switch circuit 302 may be a metal-oxide-semiconductor (MOS) transistor.
The drive circuit 301 is connected to the switch circuit 302, to the amplifier circuit 303 and to the corresponding laser included in the laser light source 20. Based on the current control signal sent by the display control circuit 10, the drive circuit 301 outputs the drive current to the corresponding laser in the laser light source 20 through its VOUT terminal and passes the received enable signal to the switch circuit 302 through its ENOUT terminal. The laser may include n sub-lasers connected in series, namely sub-lasers LD1 to LDn, where n is a positive integer greater than 0.
The switch circuit 302 is connected in series in the laser's current path and closes the path when the received enable signal is at its active level.
The amplifier circuit 303 is connected to the detection node E in the current path of the laser light source 20 and to the display control circuit 10; it converts the detected drive current of the laser into a drive voltage, amplifies the drive voltage, and transmits the amplified voltage to the display control circuit 10.
The display control circuit 10 is also configured to determine the laser's drive current from the amplified drive voltage and to obtain the second brightness value corresponding to that drive current.
In some embodiments, the amplifier circuit 303 may include a first operational amplifier A1, a first resistor R1 (also called the sampling power resistor), a second resistor R2, a third resistor R3 and a fourth resistor R4.
The non-inverting (positive) input of the first operational amplifier A1 is connected to one end of R2; the inverting (negative) input is connected to one end of R3 and one end of R4; the output of A1 is connected to the other end of R4 and to the processing sub-circuit 3022. One end of R1 is connected to the detection node E and the other end to the reference supply terminal. The other end of R2 is connected to the detection node E, and the other end of R3 to the reference supply terminal, which here is the ground terminal.
In some embodiments, A1 may further include two supply terminals, one connected to the supply terminal VCC and the other to the reference supply terminal.
The relatively large drive current of the laser included in the laser light source 20 produces a voltage drop across R1; the voltage Vi at one end of R1 (the detection node E) is fed through R2 to the non-inverting input of A1 and output amplified N times, where N, a positive number, is A1's gain. The gain N can make the numeric value of the output voltage Vfb an integer multiple of the numeric value of the laser's drive current; for example, Vfb may be numerically equal to the drive current, making it convenient for the display control circuit 10 to take the amplified drive voltage as the laser's drive current.
In some embodiments, the display control circuit 10, the drive circuit 301, the switch circuit 302 and the amplifier circuit 303 form a closed loop for feedback regulation of the laser's drive current, so that the display control circuit 10 can adjust the drive current, that is, the laser's actual output brightness, promptly from the difference between the laser's second and first brightness values, preventing prolonged COD faults and improving the accuracy of laser emission control.
It should be noted that, referring to FIGS. 3 and 4, if the laser light source 20 includes one blue laser 201, one red laser 202 and one green laser 203, the blue laser 201 may be placed at position L1, the red laser 202 at position L2 and the green laser 203 at position L3.
Referring to FIGS. 3 and 4, the laser light at position L1 is transmitted once through the fourth dichroic plate 604 and then reflected once by the fifth dichroic plate 605 before entering the first lens 901. The optical efficiency at position L1 is therefore P1 = Pt × Pf, where Pt is the transmittance of a dichroic plate and Pf is the reflectance of a dichroic plate or of the fifth mirror 904.
In some embodiments, among the three positions L1, L2 and L3, the optical efficiency of the laser light is highest at L3 and lowest at L1. The maximum optical power output by the blue laser 201 is Pb = 4.5 watts (W), by the red laser 202 Pr = 2.5 W, and by the green laser 203 Pg = 1.5 W; that is, the blue laser outputs the largest maximum power, the red laser the next largest, and the green laser the smallest. The green laser 203 is therefore placed at L3, the red laser 202 at L2 and the blue laser 201 at L1; in other words, the green laser 203 is placed in the most efficient light path, ensuring that the projection device achieves the highest overall optical efficiency.
In some embodiments, the display control circuit 10 is further configured, when the difference between a laser's second brightness value and its first brightness value is less than or equal to the difference threshold, to restore the current control signal of the laser's drive assembly to its initial value, the initial value being the magnitude of the PWM current control signal for the laser in the normal state. Thus, when a laser suffers a COD fault, the fault can be identified quickly and the drive current lowered promptly, mitigating the laser's continued self-damage and helping it recover, all without disassembly or human intervention, which improves the reliability of the laser light source and safeguards the projection display quality of the laser projection device.
Embodiments of the present application can be applied to various types of projection devices. The projection device of these embodiments is a device that can project images or video onto a screen; it can be connected, through different interfaces, to computers, broadcast networks, the Internet, Video Compact Disc (VCD) players, Digital Versatile Disc (DVD) players, game consoles, DV cameras and the like to play the corresponding video signal. Projection devices are widely used in homes, offices, schools and entertainment venues.
FIG. 6A is a schematic diagram of the screen corresponding to a projection device according to an embodiment of the present application.
In some embodiments, projection screens are tools used in cinemas, offices, home theaters, large conferences and similar settings to display images and video files, and can be made in different sizes as required. In some embodiments, to better match users' viewing habits, the aspect ratio of the screen corresponding to the projection device is usually set to 16:9, and the laser assembly can project the playback content onto the screen corresponding to the projection device, as shown in FIG. 6A.
Most screens look white, but different screen fabrics reflect light differently, and different colors of light have different reflectances. Common screens include diffuse-reflection screens and retro-reflective screens.
A white plastic screen is a typical diffuse screen: it scatters the incident light of the projection device uniformly in all directions, so the same image is visible from every angle. Diffuse screens offer a very wide viewing range and soft images, but attention must be paid to ambient and stray light: in an environment with ambient or stray light, that light is scattered and reflected together with the image light and overlaps it, degrading image quality. A typical diffuse screen performs best when used in a dedicated projection room free of ambient and stray light.
It should be noted that walls also have diffuse-scattering properties, but since they lack color correction and light-absorption treatment, images displayed on a wall used as a screen suffer inaccurate color, dispersion, haloing in dark areas, and insufficient brightness and contrast; a wall is therefore not a good choice of screen.
A glass-bead screen is a typical retro-reflective screen: the glass beads on its surface reflect light back centered on the direction of incidence, so bright, vivid images are visible from the usual viewing positions. Because the screen's diffuse scattering is suppressed to a degree, the image brightness seen near the front of the screen differs from that seen at large angles; near the front, the image has good brightness, contrast and tonal gradation. Under ambient or stray light, the screen reflects that light back along its own direction of incidence, so the projection device's image light rarely overlaps the ambient and stray light and a vivid image is obtained.
Understandably, for large audiences or wide lateral viewing, a wide-angle, low-gain screen is preferable; for viewing in a long, narrow space, a narrow-angle, high-gain screen. A screen with appropriate gain helps raise contrast, enriches the image's grey levels and color, and improves watchability. Diffuse and retro-reflective screens can be used in venues with good shading and light absorption; a retro-reflective screen suits a home living room. A table-mounted projector can use any screen, while a ceiling-mounted projector should use a diffuse screen or one with a large half-gain angle.
In some embodiments, the screen corresponding to the projection device has a dark edge line around its periphery; since this edge line usually has a certain width, it may also be called an edge band. The projection device and the projection display control method for automatically entering the screen area provided by the present application exploit this edge-band feature of the screen to identify the screen in the environment stably, efficiently and accurately, enabling fast automatic screen entry after the device is moved. The screen is shown in FIG. 6A.
In some embodiments, the projection device provided by the present application is equipped with a camera. After the user moves the device, so that the laser assembly can again project the playback content accurately onto the screen's projection area, the controller controls the camera to photograph the environment of the projection device to obtain a first image; by analyzing the first image, the position of the screen's projection area can be determined. That is, the controller controls the camera to capture a first image of the region where the screen corresponding to the projection device is located, which reduces the computation of the subsequent screen-recognition algorithm.
The first image will contain all kinds of environmental objects, such as the screen, a TV cabinet, walls, the ceiling and a coffee table; among them, the screen corresponding to the projection device of the present application includes the dark screen edge band, as shown in FIG. 6B.
The controller of the projection device analyzes the first image, algorithmically picks the screen out from the environmental elements contained in the first image, and controls the projection direction of the laser assembly so that the playback content is projected accurately onto the screen's projection area.
In some embodiments, to identify the screen's projection area among the environmental elements of the first image more easily and accurately, the controller performs a brightness analysis of the grey-scale histogram of the first image at the moment of capture to obtain the most suitable binarization threshold, so as to binarize the first image into the corresponding second image. With this most suitable threshold, the contours of the environmental elements are preserved as clearly as possible in the second image, facilitating the extraction of closed contours in the subsequent algorithm.
First, the controller generates the grey-scale distribution, that is, the grey-scale histogram, of the captured first image; the right-hand image of FIG. 6C is the histogram corresponding to the first image.
Then the controller determines the grey level corresponding to the largest brightness proportion in the first image's histogram; this 256-level histogram reflects the share of the brightest region in the whole image. For instance, suppose the histogram's brightest peak in FIG. 6C lies at 130.
Finally, centered on the obtained grey level, the controller selects a preset number of grey levels within a preset range above and below it as thresholds and binarizes the first image repeatedly, until the extraction of the screen's typical features in the result meets the preset condition, giving the second image. Understandably, the grey level at which the second image is obtained is the binarization threshold that should be selected.
For example, with the black region in FIG. 6C fixed, the starting point for the binarization threshold of the first image may be provisionally set to 130; the controller then takes, for example, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138 and 140 in turn as binarization thresholds, binarizing the first image into multiple binary images; it then analyzes these binary images and identifies the one containing the screen features as the second image. The screen feature is the combination of the dark screen edge band and the white screen projection area, which appears in a binary image specifically as a first-level closed contour containing a second-level closed contour, as shown in FIG. 6D.
In some embodiments, a fixed value could be chosen as the binarization threshold for the first image; however, this sometimes performs poorly for the scenario of extracting the screen region from a camera photograph, because the shooting environment strongly affects the final image: choosing a high threshold at night, for instance, may cause most of the image to be binarized as edges.
Understandably, the most precise way to binarize the first image into the second image would be to traverse all thresholds, from 0 to 255, analyzing the edge images of the binary results to select the one with the best edges; but full traversal makes the computation expensive. The projection display control method for automatically entering the screen area provided by the present application therefore uses the grey-scale histogram brightness analysis to create a preferred threshold interval and traverses only within that interval to obtain the optimal binary image. (A sketch of this search follows.)
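The following sketch illustrates this preferred-interval search with OpenCV; the ±10 span, the step of 2 and the nested-contour test for the "screen feature" are assumptions consistent with the example above:

```python
import cv2

def binarize_first_image(gray, span=10, step=2):
    """Pick the histogram-peak grey level, then sweep thresholds around it
    and return the first binary image showing a nested closed contour
    (the edge-band-around-projection-area signature), or None."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    peak = int(hist.argmax())                       # grey level with the largest share
    for t in range(max(0, peak - span), min(255, peak + span) + 1, step):
        _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
        contours, hierarchy = cv2.findContours(
            binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
        if hierarchy is not None and (hierarchy[0][:, 3] >= 0).any():
            return binary                           # some contour has a parent
    return None
```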
In some embodiments, after the controller completes the binarization of the first image, it identifies and extracts the closed contours contained in the second image; when a closed contour itself contains a lower-level closed contour, the combination can be provisionally taken to match, to a degree, the color features of the screen.
Understandably, corresponding to the screen's color and structural features, during closed-contour recognition the outer edge line of the screen edge band will be recognized as the larger closed contour, which may also be called the first-level closed contour, and the inner edge line of the edge band as the smaller closed contour, which may also be called the second-level closed contour. That is, the controller determines the first-level closed contours contained in the second image, each first-level contour containing a second-level contour.
For example, in a second image containing images of multiple environmental elements, the controller analyzes and identifies the closed contours corresponding to each element, looking for contours with this hierarchical relationship as candidate screen projection areas: the controller first obtains all first-level closed contours in the second image that contain second-level closed contours and discards single-level closed contours that contain no sub-contour.
A first-level closed contour may also be called a parent contour, and the second-level contours it contains child contours; that is, first- and second-level closed contours have a parent-child relationship. Understandably, among the multiple parent contours with children identified in the second image, one closed contour must correspond to the screen; only a region containing exactly the parent and child levels of closed contours can be a candidate for the screen's projection area.
For example, in the closed-contour diagram of FIG. 6E, four closed contours A, B, C and D are identified in total: A is the first-level (parent) contour, while B, C and D are all second-level (child) contours contained in contour A. The controller takes the second-level contours B, C and D as candidate screen projection areas and continues the algorithmic identification. (A sketch of this parent/child screening follows.)
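A minimal sketch of this parent/child screening, assuming OpenCV's contour hierarchy convention (each row of `hierarchy[0]` is [next, previous, first child, parent]):

```python
import cv2

def second_level_candidates(binary):
    """Return contours whose parent is a top-level contour: these are the
    second-level closed contours, i.e. the candidate projection areas."""
    contours, hierarchy = cv2.findContours(
        binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return []
    h = hierarchy[0]
    return [c for i, c in enumerate(contours)
            if h[i][3] != -1              # has a parent, so it is nested ...
            and h[h[i][3]][3] == -1]      # ... and that parent is top-level
```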
In some embodiments, among these candidates the controller tests each obtained second-level closed contour for being a convex quadrilateral. If a second-level contour is a convex quadrilateral, the controller takes it to be the projection area corresponding to the screen, that is, the projection area enclosed by the screen's dark edge band, and controls the laser assembly of the projection device to project onto that second-level contour, so that the playback content is projected accurately and covers the screen's projection area.
First, after the camera captures the first image of the projection device's environment, the controller binarizes the first image into the second image so that the closed contours in the image can be extracted more accurately, and then extracts the various closed contours contained in the second image, the better to reflect the environmental elements those contours correspond to.
It can be seen that the environmental elements in the first image, the living room's furniture, appliances, objects, walls and so on, are recognized well in the binary image as long as their color forms a dark-versus-light contrast with the surroundings: the screen edge band region marked in the figure, the white screen region inside the edge band, the TV cabinet region, the sofa and the coffee table can all be identified accurately by the controller.
Then the controller performs polygon fitting on the second-level closed contours in the regions satisfying the above hierarchy; since the screen's projection area is a standard rectangle, the controller determines the second-level contours whose fitting result is a quadrilateral to be candidates for the screen projection area. Contours fitted as triangles, circles, pentagons or other irregular closed shapes are thereby eliminated, so that the subsequent algorithm can go on to identify the rectangular closed contour corresponding to the screen projection area.
After identifying multiple quadrilateral second-level contours, the controller determines the convexity of the candidates, that is, the convexity of the quadrilateral second-level contours, and determines the candidate that turns out to be a convex quadrilateral to be the projection area corresponding to the screen.
In a concave quadrilateral, when certain sides are extended in both directions, the remaining sides do not all lie on the same side of the resulting line, as shown on the left of FIG. 6G; in a convex quadrilateral, whichever side is extended in both directions, all other sides lie on the same side of the resulting line, as shown on the right of FIG. 6G.
A concave quadrilateral, unlike a convex one, has exactly one interior angle greater than 180° but less than 360°; of the remaining three angles, the two adjacent to the largest angle must be acute, while the angle opposite the largest may be acute, right or obtuse; the exterior angle at the largest vertex equals the sum of the other three interior angles. A convex quadrilateral is one with no interior angle greater than 180°: the line through any one side does not cross the other segments, that is, the other three sides lie on one side of that line, and the sum of any three sides exceeds the fourth. For example, assuming the identified second-level closed contours are those shown in FIG. 6E, the controller performs polygon fitting on the three second-level contours B, C and D.
B is recognized as a quadrilateral closed contour, C as a pentagonal closed contour and D as a nearly circular closed contour; the quadrilateral contour B therefore remains a second-level candidate for the screen projection area, while contours C and D are removed from the candidates.
Given the screen's structural shape, the controller determines the convexity of the obtained quadrilateral contour B; since a real screen can only be a convex quadrilateral, the controller can decide convexity through a concave/convex quadrilateral test algorithm, which may be implemented, for example, as follows:
For a convex quadrilateral, connecting any two non-adjacent vertices yields two triangles whose areas sum to the area of the original quadrilateral; for a concave quadrilateral, only the two triangles obtained by connecting through the concave (reflex) vertex have areas summing to the original quadrilateral's area. By applying this convexity test, the images of other irrelevant environmental elements contained in first-level closed contours can be eliminated, so that the convex quadrilateral second-level contour corresponding to the screen projection area is obtained accurately. (A sketch of this area-based test follows.)
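A sketch of this area test: a quadrilateral is convex exactly when both diagonal splits produce triangle pairs whose areas sum to the same value (the quadrilateral's area), so comparing the two sums suffices. The tolerance is an assumption:

```python
import numpy as np

def tri_area(a, b, c):
    # twice the signed area via the 2-D cross product, then halve and take abs
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) / 2.0

def is_convex_quad(pts):
    """pts: 4x2 vertices in contour order."""
    p = np.asarray(pts, dtype=float)
    split1 = tri_area(p[0], p[1], p[2]) + tri_area(p[0], p[2], p[3])  # diagonal 0-2
    split2 = tri_area(p[1], p[2], p[3]) + tri_area(p[1], p[3], p[0])  # diagonal 1-3
    return abs(split1 - split2) < 1e-6 * max(split1, split2, 1.0)
```

For a concave quadrilateral, the split along the diagonal that passes outside the shape overestimates the area, so the two sums disagree and the test returns False.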
Understandably, the above algorithm steps raise the accuracy with which the projection device identifies the screen projection area, and they work whether or not the screen is a solid color; the screen projection area can be extracted while the projection device is playing any picture.
That is, when a second-level closed contour identified by the controller itself contains a third-level closed contour, the third-level contour corresponds to the image generated by the playback content; on detecting a third-level contour inside a second-level one, the controller does not extract or analyze it. This ensures that if the projection device is moved while working, automatic screen entry is still achieved and the projected content lands accurately in the screen projection area.
Understandably, in the screen-identification algorithm above, the screen's dark edge band corresponds to the first-level closed contour identified by the controller, and the white screen region inside the dark edge band corresponds to the convex quadrilateral second-level closed contour identified by the controller.
In some embodiments, the controller uses the largest-area closed contour in the first image as the criterion for the screen projection area.
After capturing the first image of its environment, the projection device may pick a fixed binarization threshold, for example 20% brightness, to binarize the first image into the second image; some region of the screen, or its lower-right corner, may then fail to form a closed contour, causing obvious errors in the subsequent contour extraction. The algorithm then searches all closed contours for the one with the largest area and checks whether its interior color is uniform; if a closed contour has uniform interior color and the largest area, it is taken to be the screen projection area, as shown in FIG. 6F.
However, even if the projection device can extract accurate closed contours, when the photographed first image contains a large solid-colored closed-contour region, the final result obtained by this algorithm may be biased, such as the large solid-colored wall region shown in FIG. 6F.
Based on the above display control scheme by which the display device automatically projects the playback content into the screen projection area, and the accompanying drawings, the present application further provides a projection display control method for automatically entering the screen area, the method including: binarizing a first image into a second image based on a brightness analysis of the grey-scale histogram of the first image at the moment of capture, the first image being an environment image; determining the first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour; and, when the second-level contour is determined to be a convex quadrilateral, projecting onto the second-level contour so as to cover the projection area corresponding to the screen; wherein the screen includes a screen edge band corresponding to the first-level contour, the projection area is surrounded by the edge band, and the screen is used to display the projection of the playback content.
In some embodiments, binarizing the first image into the second image based on the grey-scale-histogram brightness analysis at the moment of capture specifically includes: determining the grey level corresponding to the largest brightness proportion in the first image's histogram; and selecting a preset number of grey levels centered on that grey level as thresholds, binarizing the first image repeatedly, and taking as the second image the binary image in which the extraction of the screen's typical features meets the preset condition.
In some embodiments, determining in the second image that a contained second-level closed contour is the projection area corresponding to the screen specifically includes: obtaining all first-level closed contours in the second image that contain second-level contours; performing polygon fitting on the second-level contours and determining those fitted as quadrilaterals to be screen-projection-area candidates; and determining the convexity of the candidates, taking the candidate that is a convex quadrilateral to be the projection area corresponding to the screen.
In some embodiments, during the determination in the second image that a contained second-level contour is the screen's projection area, the method further includes: when a second-level contour also contains a third-level closed contour generated by the playback content, not extracting or analyzing the third-level contour.
In some embodiments, capturing the first image of the projection device's environment specifically includes the controller capturing a first image of the region where the screen is located.
The method has been described in detail above and is not repeated here.
In some embodiments, the laser emitted by the projection device is reflected by the nanoscale mirrors of a digital micromirror device (DMD: Digital Micromirror Device) chip; the optical lens is also a precision component. When the image plane and the object plane are not parallel, the image projected onto the screen suffers geometric distortion.
FIG. 7A is a schematic diagram of the system framework with which a projection device implements display control according to an embodiment of the present application.
In some embodiments, the projection device provided by the present application has the characteristics of a long-throw micro projector; the device includes a controller which, through preset algorithms, performs display control of the optical-engine picture to implement automatic keystone correction of the displayed picture, automatic screen entry, automatic obstacle avoidance, autofocus, eye protection and other functions.
Understandably, with the geometry-correction-based display control method provided by the present application, the projection device can be moved flexibly in long-throw micro-projection scenarios; and on each move, for problems that may arise such as distortion of the projected picture, foreign objects blocking the projection surface, or the projected picture straying from the screen, the controller can control the projection device to perform automatic display correction so that normal display is restored automatically.
In some embodiments, the geometry-correction-based display control system provided by the present application includes an application service layer (APK Service: Android application package Service), a service layer and an underlying algorithm library.
The application service layer implements the interaction between the projection device and the user: based on the display of the user interface, the user can configure the device's parameters and the displayed picture, and the controller, by coordinating and invoking the algorithm services corresponding to the various functions, implements the function of automatically correcting the displayed picture when the display is abnormal.
The service layer may include the correction service, camera service, time-of-flight (TOF: Time of Flight) service and other content; upward, these services interface with the application service layer (APK Service) to implement the specific functions of the device's different service configurations; downward, the service layer connects to the algorithm library, camera, time-of-flight sensor and other data-acquisition services, encapsulating the underlying complex logic and passing business data up to the corresponding service layer.
The underlying algorithm library provides the correction service and the control algorithms with which the projection device implements its various functions; the library may, for example, perform the various mathematical operations on OpenCV to supply the basic capabilities of the correction service. OpenCV is a cross-platform computer vision and machine learning software library released under the BSD (open-source) license, which runs on Linux, Windows, Android, Mac OS and other operating systems.
In some embodiments, the projection device is further equipped with a gyroscope sensor. While the device moves, the gyroscope sensor senses the position movement and actively collects movement data, which is then sent through the system framework layer to the application service layer to support the application data needed in user-interface and application interaction; the collected data can also be called on by the controller in implementing the algorithm services.
In some embodiments, the projection device is equipped with a time-of-flight (TOF: Time of Flight) sensor; after the time-of-flight sensor collects the corresponding data, the data is sent to the time-of-flight service in the service layer;
the time-of-flight service then forwards the data collected by the sensor through the inter-process communication framework (HSP Core) to the application service layer of the projection device, where the data is available for the controller's data calls and for user-interface and application interaction.
In some embodiments, the projection device is equipped with a camera for capturing images, which may be implemented, for example, as a binocular camera or a depth camera; the data it captures is sent to the camera service, which then sends the image data captured by the binocular camera to the inter-process communication framework (HSP Core) and/or the projection device's correction service, for implementing the device's functions.
In some embodiments, the correction service of the projection device can receive the camera data sent by the camera service, and the controller can invoke, from the algorithm library, the control algorithm corresponding to whichever function is to be implemented.
In some embodiments, data can be exchanged with the application service through the inter-process communication framework; the computation results are then fed back through the framework to the correction service, which sends the obtained results to the projection device's operating system to generate the corresponding control commands, which are sent to the optical-engine control driver to control the engine's operation and achieve automatic correction of the display.
FIG. 7B is a signaling sequence diagram of a projection device implementing the eye-protection function according to another embodiment of the present application.
In some embodiments, the projection device provided by the present application can implement an eye-protection function. To prevent the risk of eyesight damage caused by a user accidentally entering the path of the laser emitted by the projection device, when a user enters the preset specific non-safe region where the device is located, the controller can control the user interface to display the corresponding prompt reminding the user to leave the current region, and the controller can also control the user interface to lower the display brightness, to prevent the laser from harming the user's eyesight.
In some embodiments, when the projection device is configured in children's viewing mode, the controller automatically turns on the eye-protection switch.
In some embodiments, after the controller receives position-movement data sent by the gyroscope sensor, or foreign-object-intrusion data collected by other sensors, the controller turns on the projection device's eye-protection switch.
In some embodiments, when the data collected by the time-of-flight (TOF) sensor, the camera or other devices triggers any preset threshold condition, the controller controls the user interface to lower its display brightness and display the prompt, and reduces the emission power, brightness and intensity of the optical engine, so as to protect the user's eyesight.
In some embodiments, the controller can control the correction service to send signaling to the time-of-flight sensor to query the current device state, and the controller then receives the data fed back by the time-of-flight sensor.
The correction service can send the inter-process communication framework (HSP Core) signaling notifying the algorithm service to start the eye-protection procedure; the framework then makes service-capability calls into the algorithm library to fetch the corresponding algorithm services, which may include, for example, a photograph-detection algorithm, a screenshot algorithm and a foreign-object detection algorithm;
the inter-process communication framework (HSP Core) returns the foreign-object detection result to the correction service based on these algorithm services; if the returned result reaches the preset threshold condition, the controller controls the user interface to display the prompt and lower the display brightness; the signaling sequence is shown in FIG. 7B.
In some embodiments, with the eye-protection switch of the projection device turned on, when a user enters the preset specific region the device automatically lowers the intensity of the laser emitted by the optical engine, lowers the display brightness of the user interface, and displays a safety prompt. The projection device's control of this eye-protection function can be implemented as follows:
Based on the projected picture captured by the camera, the controller identifies the projection region of the device with an edge-detection algorithm; when the projection region appears as a rectangle or nearly a rectangle, the controller obtains the coordinates of the four vertices of the rectangular projection region with a preset algorithm;
for foreign-object detection inside the projection region, a perspective-transform method can rectify the projection region to a rectangle, and the difference between the rectangle and the projection screenshot is computed to decide whether a foreign object is present in the display region; if the decision is that one is, the projection device automatically triggers the eye-protection function.
For foreign-object detection in a certain region outside the projection region, the difference between the camera content of the current frame and that of the previous frame can be computed to decide whether a foreign object has entered the region outside the projection region; if one is judged to have entered, the device automatically triggers the eye-protection function.
At the same time, the projection device can also use a time-of-flight (ToF) camera or time-of-flight sensor to detect real-time depth changes in the specific region; if the depth change exceeds the preset threshold, the device automatically triggers the eye-protection function.
In some embodiments, the projection device analyzes the collected time-of-flight data, screenshot data and camera data to decide whether the eye-protection function needs to be enabled.
For example, from the collected time-of-flight data the controller performs a depth-difference analysis; if the depth difference is greater than a preset threshold X, where the preset threshold X may be implemented as 0, it can be determined that a foreign object is already within the device's specific region. If a user is in that region, their eyesight is at risk from the laser, and the device automatically starts the eye-protection function, lowering the intensity of the laser emitted by the optical engine, lowering the display brightness of the user interface, and displaying the safety prompt.
As another example, the projection device performs an additive-color-model (RGB) difference analysis on the collected screenshot data; if the RGB difference is greater than a preset threshold Y, it can be determined that a foreign object is within the device's specific region; if a user is in that region, their eyesight is at risk from the laser, and the device automatically starts the eye-protection function, lowering the emitted laser intensity, lowering the interface brightness and displaying the corresponding safety prompt.
As another example, the projection device obtains the projection coordinates from the collected camera data, determines the device's projection region from those coordinates, and then performs the RGB difference analysis inside the projection region; if the difference exceeds the preset threshold Y, a foreign object is within the specific region, and if a user is there, their eyesight is at risk from the laser; the device automatically starts the eye-protection function, lowering the laser intensity, lowering the interface brightness and displaying the corresponding safety prompt.
If the obtained projection coordinates fall in the extended region, the controller can still perform the RGB difference analysis in that extended region; if the difference exceeds the preset threshold Y, a foreign object is within the specific region, and if a user is there, their eyesight is at risk from the laser emitted by the device; the device automatically starts the eye-protection function, lowering the laser intensity, lowering the interface brightness and displaying the corresponding safety prompt, as shown in FIG. 7G. (A sketch of this trigger logic follows.)
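A compact sketch of this fused trigger logic. The region-of-interest convention follows the description above; threshold X is taken as 0 per the example, while Y's value here is an assumption:

```python
import cv2
import numpy as np

DEPTH_X = 0.0   # threshold X: any depth change flags an intrusion
RGB_Y = 25.0    # threshold Y: assumed mean per-pixel RGB difference

def eye_protection_triggered(depth_prev, depth_cur, frame_prev, frame_cur, roi):
    """roi = (x, y, w, h) over the projection region or its surroundings.
    Returns True when either the ToF depth or the RGB content of the
    region changes beyond its threshold."""
    x, y, w, h = roi
    depth_diff = np.abs(depth_cur[y:y+h, x:x+w].astype(np.float32)
                        - depth_prev[y:y+h, x:x+w].astype(np.float32))
    if depth_diff.mean() > DEPTH_X:
        return True                                    # ToF branch
    rgb_diff = cv2.absdiff(frame_cur[y:y+h, x:x+w], frame_prev[y:y+h, x:x+w])
    return float(rgb_diff.mean()) > RGB_Y              # RGB difference branch
```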
FIG. 7C is a signaling sequence diagram of a projection device implementing the display correction function according to another embodiment of the present application.
In some embodiments, the projection device ordinarily monitors device movement through a gyroscope or gyroscope sensor. The correction service sends the gyroscope signaling for querying the device state and receives the gyroscope's feedback signaling used to decide whether the device has moved.
In some embodiments, the display correction strategy of the projection device can be configured so that when the gyroscope and the time-of-flight sensor change simultaneously, the device triggers keystone correction first; the controller starts the keystone correction after the gyroscope data has been stable for a preset length of time; the controller can also configure the device not to respond to remote-control key commands while keystone correction is in progress; and to support the keystone correction, the device projects a pure-white test card.
The keystone correction algorithm can build, with the binocular camera, the transformation matrix between the projection surface in the world coordinate system and the optical-engine coordinate system; it further combines the engine's intrinsic parameters to compute the homography between the projected picture and the played test card, and uses this homography to convert arbitrarily between the projected picture and the played card.
In some embodiments, the correction service sends signaling notifying the algorithm service to start the keystone correction procedure to the inter-process communication framework (HSP CORE), which further sends a service-capability call to the algorithm service to fetch the algorithm corresponding to the capability;
the algorithm service fetches and runs the photograph-and-picture processing service and the obstacle-avoidance algorithm service and returns them, carried in signaling, to the framework; in some embodiments, the framework executes these algorithms and feeds the execution results back to the correction service, the results possibly including photograph success and avoidance success.
In some embodiments, if an error occurs while the projection device runs these algorithms or transfers data, the correction service controls the user interface to display an error-return prompt and to project the keystone-correction and autofocus test card again.
Through the automatic obstacle-avoidance algorithm, the projection device can identify the screen and, using the projection transformation, correct the projected picture so that it displays inside the screen, aligned with the screen edges.
Through the autofocus algorithm, the device can use the time-of-flight (ToF) sensor to obtain the distance between the optical engine and the projection surface, look up the optimal image distance in a preset mapping table based on that distance, and evaluate the sharpness of the projected picture with an image algorithm, fine-tuning the image distance on that basis.
In some embodiments, the automatic keystone correction signaling that the correction service sends to the framework can carry other function-configuration instructions, for example control instructions on whether to perform synchronized obstacle avoidance and whether to enter the screen.
The framework sends a service-capability call to the algorithm service, which fetches and runs the autofocus algorithm to adjust the viewing distance between the device and the screen; in some embodiments, after applying the autofocus algorithm to implement the corresponding function, the algorithm service can also fetch and run the automatic screen-entry algorithm, a process that can include the keystone correction algorithm.
In some embodiments, by performing automatic screen entry, the algorithm service can set the eight position coordinates between the projection device and the screen; the viewing distance between device and screen is then adjusted again through the autofocus algorithm; finally the correction result is fed back to the correction service, and the user interface is controlled to display the correction result, as shown in FIG. 7C.
In some embodiments, through the autofocus algorithm, the projection device uses its laser ranging to obtain the current object distance, from which the initial focal length and the search range are computed; the device then drives the camera to take photographs and evaluates sharpness with the corresponding algorithm.
Within the search range, the device looks for the possible optimal focal length with a search algorithm, repeating the photograph and sharpness-evaluation steps above, and finally finds the optimal focal length by sharpness comparison, completing autofocus.
For example: in step 7D01 the projection device starts; in step 7D02 the user moves the device, and the device refocuses after automatically completing correction; in step 7D03 the controller checks whether the autofocus function is enabled, and if not, ends the autofocus service; in step 7D04, with autofocus enabled, the device obtains the detection distance of the time-of-flight (TOF) sensor through the middleware for the computation;
in step 7D05 the controller looks up the preset mapping table with the obtained distance to get the device's approximate focal length; in step 7D06 the middleware sets the obtained focal length on the device's optical engine;
in step 7D07, after the engine emits laser light at that focal length, the camera executes the photograph command; in step 7D08 the controller judges from the photograph and the evaluation function whether the device's focusing is complete; if the result meets the preset completion condition, the autofocus flow is ended; in step 7D09, if the result does not meet the completion condition, the middleware fine-tunes the engine's focal-length parameter, for example stepping it gradually by a preset step size, and sets the adjusted parameter on the engine again, thereby repeating the photograph and sharpness-evaluation steps until the optimal focal length is found by sharpness comparison and autofocus is completed, as shown in FIG. 7D. (A sketch of this search loop follows.)
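A sketch of this coarse-then-fine loop. `set_focus` and `capture` are hypothetical middleware hooks to the optical engine and camera, and the Laplacian-variance metric is an assumed sharpness evaluation function (the disclosure does not name one):

```python
import cv2
import numpy as np

def sharpness(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()     # higher = sharper

def best_position(set_focus, capture, positions):
    scores = []
    for f in positions:
        set_focus(f)                                 # drive the focus motor
        scores.append((sharpness(capture()), f))     # photograph and evaluate
    return max(scores)[1]

def autofocus(set_focus, capture, start, stop, coarse=10.0, fine=1.0):
    """Coarse sweep over [start, stop], then a fine sweep around the best
    coarse position, mirroring steps 7D04-7D09."""
    best = best_position(set_focus, capture, np.arange(start, stop, coarse))
    return best_position(set_focus, capture,
                         np.arange(best - coarse, best + coarse, fine))
```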
In some embodiments, the projection device provided by the present application can implement the display correction function through the keystone correction algorithm.
First, through a calibration algorithm, two sets of extrinsic parameters, the rotation and translation matrices between the two cameras and between the camera and the optical engine, are obtained; then a specific chessboard test card is played through the device's optical engine, and the depth values of the projected chessboard corners are computed, solving the xyz coordinate values, for example, through the translation between the binocular cameras and the principle of similar triangles; the projection surface is then fitted from these xyz values, and its rotation and translation relative to the camera coordinate system are solved, specifically including the pitch relationship (Pitch) and the yaw relationship (Yaw).
The roll (Roll) parameter value can be obtained through the gyroscope the device is equipped with, composing the complete rotation matrix, and the extrinsics from the projection surface in the world coordinate system to the optical-engine coordinate system are finally computed.
Combining the camera-engine R and T values computed in the above steps yields the transformation between the projection-surface world coordinate system and the engine coordinate system; combined with the engine intrinsics, this composes the homography matrix from points on the projection surface to points on the engine's test card.
Finally a rectangle is selected on the projection surface, and the corresponding test-card coordinates are solved inversely through the homography; these are the correction coordinates, and setting them on the optical engine accomplishes keystone correction. (A sketch of this last step follows.)
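The final inverse-solving step can be sketched with OpenCV as follows, under the assumption that the corresponding point sets have already been produced by the calibration steps above (surface points expressed in the fitted plane's own 2-D coordinates):

```python
import cv2
import numpy as np

def card_coordinates(surface_pts, card_pts, target_rect):
    """surface_pts: chessboard corners on the fitted projection surface;
    card_pts: the same corners on the engine's test card (pixels);
    target_rect: the 4 corners of the rectangle chosen on the surface.
    Returns the card coordinates to set on the optical engine."""
    H, _ = cv2.findHomography(np.float32(surface_pts), np.float32(card_pts))
    rect = np.float32(target_rect).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(rect, H).reshape(-1, 2)
```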
For example, the flow is shown in FIG. 7E:
Step 7E01: the projection device's controller obtains the depth value of the point corresponding to each photo pixel, that is, the coordinates of the projected point in the camera coordinate system;
Step 7E02: from the depth values, the middleware obtains the relationship between the optical-engine coordinate system and the camera coordinate system;
Step 7E03: the controller computes the coordinates of the projected points in the engine coordinate system;
Step 7E04: a plane is fitted from the coordinate values to obtain the angle between the projection surface and the engine;
Step 7E05: from the angle relationship, the corresponding coordinates of the projected points in the projection surface's world coordinate system are obtained;
Step 7E06: from the test card's coordinates in the engine coordinate system and the coordinates of the corresponding points on the projection surface, the homography matrix can be computed;
Step 7E07: the controller decides from the data obtained above whether an obstacle exists;
Step 7E08: if an obstacle exists, rectangle coordinates are chosen arbitrarily on the projection surface in the world coordinate system, and the region the engine must project is computed from the homography relationship;
Step 7E09: if no obstacle exists, the controller may, for example, obtain the QR-code feature points;
Step 7E10: the QR code's coordinates in the prefabricated test card are obtained;
Step 7E11: the homography between the camera photo and the test card is obtained;
Step 7E12: the obtained obstacle coordinates are converted into the test card, giving the coordinates of the card occluded by the obstacle;
Step 7E13: from the coordinates, in the engine coordinate system, of the test-card region occluded by the obstacle, the coordinates of the occluded region on the projection surface are obtained through the homography matrix;
Step 7E14: rectangle coordinates are chosen on the projection surface in the world coordinate system while avoiding the obstacle, and the region the engine must project is solved from the homography relationship.
Understandably, at the rectangle-selection step of the keystone correction flow, the obstacle-avoidance algorithm uses the algorithm (OpenCV) library to extract the foreign object's contour and selects the rectangle so as to avoid that obstacle, achieving the obstacle-avoidance projection function.
In some embodiments, as shown in FIG. 7F:
Step 7F01: the middleware obtains the QR-code test card photographed by the camera;
Step 7F02: the QR-code feature points are identified and their coordinates in the camera coordinate system obtained;
Step 7F03: the controller further obtains the coordinates of the preset test card in the optical-engine coordinate system;
Step 7F04: the homography between the camera plane and the engine plane is solved;
Step 7F05: based on this homography, the controller identifies the coordinates of the four screen vertices photographed by the camera;
Step 7F06: the range of the test card the engine must project so that it lands on the screen is obtained from the homography matrix.
Understandably, in some embodiments the screen-entry algorithm, based on the algorithm library (OpenCV), can identify and extract the largest black closed rectangular contour and check whether it has a 16:9 size; a specific test card is projected and photographed with the camera, multiple corner points are extracted from the photo to compute the homography between the projection surface (the screen) and the card played by the engine, the four screen vertices are converted through the homography into the engine's pixel coordinate system, and converting the engine's card to the four screen vertices completes the computation and comparison.
A long-throw micro projector can be moved flexibly, and the projected picture may be distorted after each move; in addition, when a foreign object occludes the projection surface or the projected picture strays from the screen, the projection device and the geometry-correction-based display control method provided by the present application can complete correction automatically for these problems, implementing automatic keystone correction, automatic screen entry, automatic obstacle avoidance, autofocus, eye protection and other functions.
The beneficial effects of the embodiments of the present application are that capturing the first image obtains an image of the environment of the screen corresponding to the projection device; creating the first image's grey-scale histogram further improves the accuracy with which the closed contours of environmental elements are identified; constructing the first- and second-level closed contours further narrows the candidate range for the screen projection area; and determining that a second-level contour is a convex quadrilateral identifies the screen projection area contained among the candidates, raising the accuracy of identifying the screen projection area, sparing the user manual fine-tuning of the projection angle, and enabling a projection device used with a screen to project its playback content automatically into the screen projection area after being moved.
FIG. 8 is a schematic diagram of the lens structure of the projection device 2 in some embodiments. To support the device's autofocus process, as shown in FIG. 8, the lens 300 of the projection device 2 may further include an optical assembly 310 and a drive motor 320. The optical assembly 310 is a lens group of one or more lenses; it can refract the light emitted by the optical engine 200 so that the light passes through to the projection surface and forms the transmitted content image.
The optical assembly 310 may include a lens barrel and several lenses arranged inside it. Depending on whether their positions can move, the lenses in the optical assembly 310 can be divided into a moving lens 311 and a fixed lens 312; by changing the position of the moving lens 311 and adjusting the distance between the moving lens 311 and the fixed lens 312, the overall focal length of the optical assembly 310 is changed. The drive motor 320, by connecting to the moving lens 311, can therefore move it to implement the autofocus function.
It should be noted that the focusing process described in some embodiments of this application means changing the position of the moving lens 311 through the drive motor 320, thereby adjusting its distance from the fixed lens 312, that is, adjusting the image-plane position. Given the imaging principle of the lens combination in the optical assembly 310, the so-called focal-length adjustment is in fact an image-distance adjustment; but in terms of the overall structure of the optical assembly 310, adjusting the position of the moving lens 311 is equivalent to adjusting the assembly's overall focal length.
When the projection device 2 is at different distances from the projection surface, the device's lens must adjust to different focal lengths to project a clear image onto the surface. During projection, the separation between the device 2 and the projection surface varies with where the user places it, requiring different focal lengths; so to suit different usage scenarios, the projection device 2 must adjust the focal length of the optical assembly 310.
FIG. 9 is a schematic diagram of the distance sensor and camera structure in some embodiments. As shown in FIG. 9, the projection device 2 may also have a built-in or external camera 700, which can photograph the picture projected by the device to obtain the projection content image. The device 2 then checks the sharpness of the projected content image to determine whether the current lens focal length is suitable, and adjusts the focal length if not. In camera-based autofocus using images captured by the camera 700, the device 2 can keep adjusting the lens position and photographing, finding the in-focus position by comparing the sharpness of the images at successive positions, and thereby moving the moving lens 311 in the optical assembly to a suitable position. For example, the controller 500 can first drive the motor 320 to move the moving lens 311 gradually from the focus start position to the focus end position, continuously capturing projection content images with the camera 700 during this period; it then finds the position of highest sharpness by sharpness detection over the multiple images, and finally drives the motor 320 to move the moving lens 311 from the focus end to the position of highest sharpness, completing autofocus.
In some embodiments, FIG. 10 is a flow diagram of the projection device performing obstacle-avoidance projection in an embodiment of the present application. Referring to FIG. 10, the controller in the projection device 2 is configured as follows:
S1. Obtain the projection instruction input by the user; in response to the projection instruction, acquire the projection content image; perform obstacle contour detection on the projection content image to obtain the obstacle contour coordinate set.
Before the projection device 2 projects the projection image into the projection region of the projection surface, it can automatically run obstacle detection on the projection region and project the image only after the detection result shows the region is free of obstacles, implementing the automatic obstacle-avoidance function. That is, if an obstacle exists in the projection region, the projection region the device 2 uses before performing the avoidance process differs from the region after completing it. Specifically, the device 2 can be set to receive the projection instruction and, in response to it, enable the automatic obstacle-avoidance function; the projection instruction is the control instruction used to trigger the device's automatic avoidance process.
In some embodiments, the projection instruction may be an instruction actively input by the user. For example, after the device 2 is powered on, it can project an image onto the projection region of the projection surface; the user can then press the automatic-avoidance switch preset on the device 2, or the automatic-avoidance key on its companion remote control, to have the device enable automatic avoidance and detect obstacles in the projection region automatically.
In some embodiments, in response to the projection instruction the controller controls the optical engine 200 to project a white test card onto the projection region of the projection surface, and after projecting the card, controls the camera 700 to photograph the projection-surface image. Since the image region of the projection-surface image photographed by the camera 700 is larger than the image region of the projection region, to obtain the image of the projection region, that is, the projection content image, the controller is configured to: from the projection-surface image photographed by the camera 700, compute the coordinate values of the projection region's four corner points and four edge midpoints in the coordinate system of the optical engine 200; fit a plane from these coordinate values to obtain the angle relationship between the projection surface and the engine 200; from the angle relationship, obtain the corresponding coordinates of the four corners and four edge midpoints in the projection surface's world coordinate system; obtain the white card's coordinates in the engine coordinate system and the coordinates of the corresponding points on the projection surface, from which the homography matrix can be computed; and finally, through the homography matrix, convert the coordinate values of the four corners and four edge midpoints from the engine coordinate system into the camera coordinate system, so that the position and area of the projection region within the projection-surface image are determined from those points' camera-coordinate values.
In some embodiments, in performing obstacle contour detection on the projection content image, the controller applies an image contour-detection algorithm to the image to obtain multi-contour region information, which includes the obstacle contour coordinate set; the set represents the collection of the contour coordinates of the multiple obstacles. From the set, the obstacle set is obtained, which includes at least one obstacle and its corresponding contour level; the contour level characterizes the enclosing or nesting relationship between obstacles. It should be noted that, before performing obstacle contour detection, the controller must remove the four-edge coordinates of the projection-surface image, to keep them from affecting the contour detection.
In some embodiments, an obstacle's contour level can be expressed with contour parameters, for example the index numbers of the next contour, the previous contour, the parent contour and the child contour. If an obstacle's contour parameters have no corresponding index number, the index is assigned a negative value (e.g. -1).
The contour parameters are explained below by example.
If contour A contains contours B, C and D, then contour A is the parent contour, and contours B, C and D are all child contours of A. If contour C's position is above contour B, then C is B's previous contour and, likewise, B is C's next contour.
FIG. 11 is a schematic diagram of an obstacle set and contour levels in an embodiment of the present application. Referring to FIG. 11, illustratively, the obstacle set includes five closed contours: contour1, contour2, contour2a, contour3 and contour4. Contour1 and contour2 are the outermost contours and are of the same rank, set to level 0. Contour2a is a child of contour2, one rank down, set to level 1. Contour3 and contour4 are children of contour2a, at the same rank, set to level 2. The contour parameters of contour2 are therefore expressed as [-1, 1, 2a, -1].
In some embodiments, the controller is configured to screen the obstacle set by contour level to obtain the first target set, which includes at least one obstacle whose contour level is the outermost layer. In other words, where the contour relationships among multiple obstacles involve enclosing or nesting, only the obstacle corresponding to the outermost contour need be extracted. The purpose is that, in performing the avoidance function, once the outermost obstacle is avoided, any obstacle whose contour is nested inside it is avoided as well. Illustratively, still referring to FIG. 11, the level-0 contours, that is the outermost contours, are selected from the five closed contours contour1, contour2, contour2a, contour3 and contour4, and the first target set is generated from the outermost contours; the first target set includes contour1 and contour2.
In some embodiments, referring to FIG. 13, the controller is configured to update the first target set according to the image area of the projection content image to obtain the second target set, so that the non-obstacle area is determined from the second target set. In the step of updating the first target set according to the image area, it is specifically arranged that: the controller obtains the center coordinates, width and height of each obstacle in the first target set; computes each obstacle's obstacle area from its center coordinates, width and height; and computes the first ratio of the obstacle area to the image area. If the first ratio is less than the first ratio threshold, the obstacle is deleted from the first target set, so that the second target set is generated from the pruned first target set.
Illustratively, the first target set includes contour1 and contour2. From the center coordinates, widths and heights corresponding to contour1 and contour2, the area of the contour1 region and the area of the contour2 region are computed. For example: the contour1 region occupies 5 pixels, the contour2 region occupies 30 pixels, the image area occupies 100 pixels, and the first ratio threshold is one quarter. The first ratio for contour1 is then 0.05 and for contour2 0.3. Evidently the contour1 region's area is small relative to the image area; contour1 is therefore deleted from the first target set, completing the update of the first target set.
In some embodiments, before the step of performing obstacle contour detection on the projection content image, the controller is configured to: convert the projection content image to grayscale to obtain a grayscale image; extract the edge image from the grayscale image with an edge-detection algorithm; denoise the edge image to obtain a denoised image; and segment the denoised image with a threshold binarization algorithm into a foreground image and a background image, performing obstacle contour detection on the foreground and background images.
The denoising process is explained in detail below.
In denoising, a dilation operation is first applied to the edge image. The pixel coordinates of the edge image are read in turn, and a structuring element and a kernel threshold are set, the structuring element being a 3×3 element, that is, the convolution kernel. All pixel coordinates are convolved with the kernel to give the first convolution result; if the first result exceeds the kernel threshold, the pixel is set to 1, otherwise to 0. Thus, as the kernel traverses the image's pixels in turn, whenever a value of 1 appears under the kernel, the pixel at the kernel's origin position in the edge image is assigned 1, otherwise 0; dilation can thereby close thin breaks along the image edges.
It should be noted that the structuring element can be a 3×3, 5×5 or other size; this application takes only the 3×3 element and the 0/1 pixel assignment as an example; the structuring element and the pixel assignments can be set according to the specific computation logic and algorithm parameters.
Next, an erosion operation is applied to the dilated image. Specifically: the pixel coordinates of the dilated image are read in turn and convolved with the kernel to give the second convolution result; only when the pixels in the second result are all 1 is the corresponding pixel of the dilated image set to 1, otherwise 0. This removes the noise specks from the dilated image; at the same time, objects can be separated at the edge coordinate points of thin edges, and the boundaries of larger objects are smoothed without noticeably changing their area. (A sketch of this closing operation follows.)
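The dilation-then-erosion pair is the standard morphological closing; a minimal OpenCV equivalent of the operation described (3×3 element, with the binary 0/1 convention carried as 0/255 images):

```python
import cv2
import numpy as np

def close_edges(edge_image):
    """Seal thin breaks in the edges, then strip speckle noise and
    restore object size."""
    kernel = np.ones((3, 3), np.uint8)              # the 3x3 structuring element
    dilated = cv2.dilate(edge_image, kernel, iterations=1)
    return cv2.erode(dilated, kernel, iterations=1)
```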
To extract the foreground and background images, in some embodiments the denoised image is segmented with a threshold binarization algorithm. Specifically: the controller divides the denoised image into multiple image regions composed of adjacent pixels; computes the mean and variance of each region's pixel values; determines the pixel threshold for the pixels of the region from the mean and variance; and traverses the region's pixels. If a pixel's value exceeds the pixel threshold, the foreground image is generated from the area where that pixel lies; if a pixel's value is below the threshold, the background image is generated from the area where that pixel lies.
Illustratively, the denoised image is divided into R×R image blocks, giving m×n blocks, each block corresponding to one image region. The mean and variance of the pixels in each block are computed and used as input parameters. The pixels of the current block are traversed; if a pixel exceeds the pixel threshold, the current block is set as foreground, and if below it, as background. (A sketch of this block-wise rule follows.)
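A sketch of the block-wise rule. The disclosure says only that the threshold is derived from the block's mean and variance, so the mean + k·std form and the values of R and k here are assumptions:

```python
import numpy as np

def block_binarize(img, R=16, k=0.2):
    """Split img into RxR blocks and threshold each at mean + k*std.
    Returns the foreground mask; the background is its complement."""
    h, w = img.shape
    fg = np.zeros((h, w), np.uint8)
    for y in range(0, h, R):
        for x in range(0, w, R):
            block = img[y:y+R, x:x+R].astype(np.float32)
            t = block.mean() + k * block.std()               # per-block threshold
            fg[y:y+R, x:x+R] = np.where(block > t, 255, 0).astype(np.uint8)
    return fg
```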
To improve the detection precision of the obstacle contour detection, in some implementations, after the edge image is segmented into the foreground and background images, the controller can perform obstacle contour detection on both the foreground and background images. Alternatively, the controller can perform the detection on the foreground image alone, or perform it directly on the projection content image.
S2. Determine the non-obstacle area in the projection content image according to the obstacle contour coordinate set.
The non-obstacle area is the region of the projection content image other than the regions corresponding to the obstacles. In some implementations, the controller obtains the contour coordinates of each obstacle in the second target set and removes them from the projection content image, determining the non-obstacle area from the image after the contour coordinates are removed. In this way, by obtaining the contour coordinates of all obstacles and removing them from the projection content image, the controller removes all obstacles from the image; the region of the projection content image after all obstacles are removed is then determined to be the non-obstacle area. Usually, the non-obstacle area is a polygonal region.
S3. Extract the pre-projection area from the non-obstacle area, the pre-projection area being a rectangular area within the non-obstacle area; compute the projection region of the projection surface from the pre-projection area and the camera's shooting parameters; and control the optical engine to project the playback content into the projection region.
In extracting the pre-projection area, in some implementations, FIG. 12 is a schematic diagram of the rectangular grid and the non-obstacle area in an embodiment of the present application. Referring to FIG. 12, the controller obtains the corner coordinates of the projection image, the corner coordinates being those of its four vertices and/or the midpoints of its four edges. A rectangular grid of M×N cells is constructed from the corner coordinates. All cells are then traversed, and each cell's containment relation to the non-obstacle area is judged: if a cell lies within the non-obstacle area, its cell flag is assigned 1; if not, 0. The controller can then search the grid for the rectangular region made of cells flagged 1 and determine it to be the pre-projection area. The pre-projection area in the projection image is then converted, according to the shooting parameters of the camera 700, into the projection region on the projection surface, and the optical engine 200 is controlled to project the playback content into that region, implementing the automatic obstacle-avoidance function.
So that the user can see as much of the playback content as possible and the experience is improved, the controller, in searching the grid for the rectangle of cells flagged 1, should find the largest such rectangle, that is, obtain the largest rectangular region within the non-obstacle area. In some implementations, all rectangular regions made of cells flagged 1 are traversed, the pixel count of each is obtained, the rectangle with the most pixels is extracted, and that rectangle is determined to be the largest rectangular region in the non-obstacle area. (A sketch of this search follows.)
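One assumed realisation of this grid search is the classic "largest rectangle in a histogram" scan over the flag grid:

```python
import numpy as np

def largest_free_rectangle(grid):
    """grid: MxN array of 0/1 flags (1 = cell inside the non-obstacle area).
    Returns (top, left, height, width) of the largest all-ones rectangle."""
    m, n = grid.shape
    heights = np.zeros(n, dtype=int)
    best, best_area = (0, 0, 0, 0), 0
    for r in range(m):
        heights = np.where(grid[r] > 0, heights + 1, 0)   # per-column run lengths
        stack = []                                        # columns with increasing heights
        for c in range(n + 1):
            cur = heights[c] if c < n else 0              # sentinel flushes the stack
            while stack and heights[stack[-1]] >= cur:
                h = heights[stack.pop()]
                left = stack[-1] + 1 if stack else 0
                if h * (c - left) > best_area:
                    best_area = h * (c - left)
                    best = (r - h + 1, left, h, c - left)
            stack.append(c)
    return best
```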
In some embodiments, to prevent the pre-projection area from being so small that it spoils the user's viewing experience, after the step of obtaining the rectangular region in the non-obstacle area, the controller computes the second ratio of the rectangle's area to the image area of the projection image and sets a second ratio threshold. If the second ratio exceeds the threshold, the rectangle's area satisfies the area condition, and the rectangle is determined to be the pre-projection area.
It should be noted that, to ensure the non-obstacle area suits the user's actual environment and visual mechanics, if several largest rectangular regions are found when determining the pre-projection area, the controller extracts, among them, the rectangle whose expansion baseline is the center point of the projection image, and computes the second ratio from the extracted rectangle.
In some embodiments, if the second ratio is below the second ratio threshold, that is, the largest rectangle in the non-obstacle area is small relative to the image area of the projection image, the controller runs the process of updating the non-obstacle area, extracts the pre-projection area again from the updated non-obstacle area, and determines the projection region on the projection surface from the pre-projection area.
In the process of updating the non-obstacle area, referring to FIG. 14, the controller is configured to: compute the distance between the centroid coordinates of each obstacle in the second target set and the center coordinates of the projection content image; extract the first obstacle and the second obstacle from the second target set, the first obstacle being the obstacle with the smallest distance and the second obstacle the obstacle with the largest obstacle area; and compute the third ratio of the first obstacle's area to the second obstacle's area. If the third ratio is below the third ratio threshold, the first obstacle is deleted from the second target set, so that the third target set is generated from the pruned second target set. The non-obstacle area is then updated through the third target set; the update step mirrors the determination of the non-obstacle area from the second target set above and is not repeated here.
In some embodiments, before computing the distances between the obstacles' center coordinates and the image center, the controller is further configured to sort the obstacles of the second target set by area, giving the sorted set [C1, C2, …, Cn], where obstacle C1 has the largest area and obstacle Cn the smallest.
Understandably, the process of updating the non-obstacle area is the process of updating the second target set. To update the second target set, the controller is configured to compute, from the distance between each obstacle's center coordinates and the center coordinates of the projection content image, a confidence parameter for each obstacle in the set, so that the set is updated from the confidence parameters. The confidence parameter characterizes the distance between the obstacle and the center of the projection image; illustratively, its value ranges over [0, 1]. The larger an obstacle's confidence value, the higher its confidence and the smaller its distance from the image center; conversely, the smaller the value, the lower the confidence and the greater the distance from the image center.
Illustratively, in the step of computing the confidence parameter, the controller runs an image geometric-moments algorithm to obtain the contour centroid of each obstacle in the second target set; computing the Euclidean distance for each centroid then yields the obstacle's confidence parameter.
In some embodiments, in updating the second target set from the confidence parameters, with the sorted set [C1, C2, …, Cn], for example, and C(n-1) the obstacle with the largest confidence parameter: the controller extracts the obstacle with the largest confidence value, namely C(n-1), and the obstacle with the largest area, namely C1, and computes the third ratio of C(n-1)'s obstacle area to C1's. If the third ratio is below the third ratio threshold, the region of C(n-1) is small relative to C1's obstacle area, and the C(n-1) region can therefore be left untreated as an obstacle. Having determined that the third ratio is below the threshold, the controller deletes obstacle C(n-1) from the second target set, completing the update of the second target set. (A sketch of this pruning rule follows.)
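A sketch of this pruning rule. Contour centroids come from image moments, the nearest-to-center obstacle stands in for the "largest confidence" one, and the threshold value is an assumption:

```python
import cv2
import numpy as np

def prune_central_obstacle(obstacles, image_shape, third_ratio_th=0.1):
    """obstacles: contours sorted by area, largest first (C1 ... Cn).
    Drops the obstacle nearest the image centre (largest 'confidence')
    when its area is small relative to the largest obstacle's."""
    h, w = image_shape[:2]
    center = np.array([w / 2.0, h / 2.0])
    def centroid(c):
        m = cv2.moments(c)                       # assumes non-degenerate contours
        return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    nearest = min(obstacles, key=lambda c: np.linalg.norm(centroid(c) - center))
    if cv2.contourArea(nearest) / cv2.contourArea(obstacles[0]) < third_ratio_th:
        obstacles = [c for c in obstacles if c is not nearest]
    return obstacles
```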
It should be noted that if the non-obstacle area determined from the updated second target set still cannot yield a pre-projection area, the controller runs the process of updating the second target set again, until the pre-projection area can be extracted from the non-obstacle area.
In the above process of selecting obstacles by confidence parameter, this application takes selecting the obstacle with the largest confidence parameter only as an example; obstacles whose confidence values lie in the range [0.5, 1] may also be selected.
From the implementations above, the present application provides a projection device whose controller acquires the projection content image, performs obstacle contour detection on it to obtain the obstacle contour coordinate set, determines the non-obstacle area in the projection content image from the set, extracts the pre-projection area, a rectangular area within the non-obstacle area, computes the projection region on the projection surface from the pre-projection area and the shooting parameters of the camera 700, and controls the optical engine 200 to project the playback content into the projection region. This solves the problems of the projection device's obstacle detection failing, or of the projection region being too small after detection, which degrade the user experience.
In some embodiments, the present application proposes an obstacle-avoidance projection method applied to a projection device that includes an optical engine, a camera and a controller; the method includes: obtaining a projection instruction input by the user; in response to the projection instruction, acquiring a projection content image; performing obstacle contour detection on the projection content image to obtain an obstacle contour coordinate set; determining a non-obstacle area in the projection content image according to the set; extracting a pre-projection area from the non-obstacle area, the pre-projection area being a rectangular area within the non-obstacle area; computing the projection region of the projection surface from the pre-projection area and the camera's shooting parameters; and controlling the optical engine to project the playback content into the projection region.
In some embodiments, the method further includes: obtaining an obstacle set from the obstacle contour coordinate set, the obstacle set including at least one obstacle and its corresponding contour level, the contour level characterizing the enclosing or nesting relationship between obstacles; screening the obstacle set by contour level to obtain a first target set, which includes at least one obstacle whose contour level is the outermost layer; and updating the first target set according to the image area of the projection content image to obtain a second target set, from which the non-obstacle area is determined.
In some embodiments, the method further includes: in the step of updating the first target set according to the image area of the projection content image, obtaining the center coordinates, width and height of each obstacle in the first target set; computing each obstacle's obstacle area from its center coordinates, width and height; computing the first ratio of the obstacle area to the image area; and, if the first ratio is below the first ratio threshold, deleting the obstacle from the first target set, so that the second target set is generated from the pruned first target set.
In some embodiments, the method further includes: in the step of determining the non-obstacle area, obtaining the contour coordinates of each obstacle in the second target set, and removing the contour coordinates from the projection content image, so that the non-obstacle area is determined from the projection content image after removal.
In some embodiments, the method further includes: in the step of extracting the pre-projection area from the non-obstacle area, obtaining the rectangular areas in the non-obstacle area; traversing the pixel counts within the rectangular areas; extracting the rectangular area with the most pixels and computing the second ratio of its area to the image area; and, if the second ratio exceeds the second ratio threshold, determining that rectangular area to be the pre-projection area.
In some embodiments, the method further includes: computing the distance between the centroid coordinates of each obstacle in the second target set and the center coordinates of the projection content image; extracting the first and second obstacles from the second target set, the first obstacle being the obstacle with the smallest distance and the second obstacle the obstacle with the largest obstacle area; computing the third ratio of the first obstacle's area to the second obstacle's area; and, if the third ratio is below the third ratio threshold, deleting the first obstacle from the second target set, so that the third target set is generated from the pruned second target set.
In some embodiments, the method further includes: if the second ratio is below the second ratio threshold, updating the non-obstacle area according to the third target set, and extracting the pre-projection area again from the updated non-obstacle area, so that the projection region on the projection surface is determined from the pre-projection area.
In some embodiments, the method further includes: before the step of performing obstacle contour detection on the projection content image, converting the projection content image to grayscale to obtain a grayscale image; extracting the edge image from the grayscale image with an edge-detection algorithm; denoising the edge image to obtain a denoised image; and segmenting the denoised image with a threshold binarization algorithm into a foreground image and a background image, on which the obstacle contour detection is performed.
In some embodiments, the method further includes: in the step of segmenting the denoised image with the threshold binarization algorithm, dividing the denoised image into multiple image regions composed of adjacent pixels; computing the mean and variance of each region's pixel values; determining the pixel threshold of the region's pixels from the mean and variance; and traversing the region's pixels, generating the foreground image from the areas of pixels whose values exceed the threshold and the background image from the areas of pixels whose values fall below it.
It should be understood that the present application is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (19)

  1. A projection device, comprising:
    a projection assembly for projecting playback content onto the screen corresponding to the projection device;
    a camera for capturing images;
    a controller configured to:
    binarize a first image captured by the camera into a second image based on a brightness analysis of the first image's grey-scale histogram;
    determine a first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour;
    when the second-level closed contour is determined to be a convex quadrilateral, control the projection assembly to project the playback content onto the second-level closed contour, the second-level closed contour corresponding to the projection area of the screen; wherein the screen includes a screen edge band corresponding to the first-level closed contour, and the projection area is surrounded by the screen edge band.
  2. The projection device according to claim 1, wherein the controller is further configured to:
    determine the grey level corresponding to the largest brightness proportion in the first image's grey-scale histogram;
    select a preset number of grey levels centered on that grey level as thresholds, binarize the first image, and take the binarized image containing the screen features as the second image.
  3. The projection device according to claim 1, wherein the controller is further configured to:
    obtain all first-level closed contours in the second image that contain the second-level closed contour;
    perform polygon fitting on the second-level closed contours, and determine the second-level contours whose fitting result is a quadrilateral to be screen-projection-area candidates;
    determine the convexity of the candidates, and determine the candidate that is a convex quadrilateral to be the projection area corresponding to the screen.
  4. The projection device according to claim 1, wherein the controller is further configured to:
    when the second-level closed contour also contains a third-level closed contour generated by the projection device's projection, not extract or analyze the third-level closed contour.
  5. The projection device according to claim 1, wherein the controller is further configured to:
    control the camera to capture a first image of the region where the screen corresponding to the projection device is located.
  6. The projection device according to claim 1, wherein the controller is configured to:
    identify the projection region of the device with an edge-detection algorithm based on the projected picture captured by the camera; and, when the identified projection region appears as a rectangle or nearly a rectangle, obtain the coordinates of the four vertices of the rectangular projection region with a preset algorithm.
  7. The projection device according to claim 6, wherein the controller is further configured to:
    rectify the projection region to a rectangle with a perspective-transform method, and compute the difference between the rectangle and the projection screenshot to decide whether a foreign object is present in the display region.
  8. The projection device according to claim 6, wherein the controller is further configured to:
    for foreign-object detection in a certain region outside the projection region, compute the difference between the camera content of the current frame and that of the previous frame to decide whether a foreign object has entered the region outside the projection region; if one is judged to have entered, automatically trigger the eye-protection function.
  9. The projection device according to claim 6, wherein the controller is further configured to:
    detect real-time depth changes in a specific region with a time-of-flight camera or time-of-flight sensor; if the depth change exceeds a preset threshold, automatically trigger the eye-protection function.
  10. The projection device according to claim 6, wherein the controller is further configured to: analyze the collected time-of-flight data, screenshot data and camera data to decide whether the eye-protection function needs to be enabled.
  11. The projection device according to claim 6, wherein the controller is further configured to:
    upon detecting a predetermined object within the specific region, automatically start the eye-protection function to lower the intensity of the laser emitted by the optical engine, lower the display brightness of the user interface, and display a safety prompt.
  12. The projection device according to claim 6, wherein the controller is further configured to:
    monitor device movement through a gyroscope or gyroscope sensor; send the gyroscope signaling for querying the device state, and receive the gyroscope's feedback signaling for deciding whether the device has moved.
  13. The projection device according to claim 6, wherein the controller is further configured to:
    after the gyroscope data has been stable for a preset length of time, control the start of keystone correction, and not respond to remote-control key commands while keystone correction is in progress.
  14. The projection device according to claim 6, wherein the controller is further configured to:
    identify the screen with the automatic obstacle-avoidance algorithm and, using the projection transformation, correct the projected picture so that it displays inside the screen, aligned with the screen edges.
  15. A projection display control method for a projection device, the method comprising:
    binarizing a first image captured by the projection device's camera into a second image based on a brightness analysis of the first image's grey-scale histogram, the first image being an environment image, the projection device including a projection assembly for projecting playback content onto the screen corresponding to the projection device;
    determining a first-level closed contour contained in the second image, the first-level closed contour containing a second-level closed contour;
    when the second-level closed contour is determined to be a convex quadrilateral, projecting the playback content onto the second-level closed contour, the second-level closed contour corresponding to the projection area of the screen; wherein the screen includes a screen edge band corresponding to the first-level closed contour, and the projection area is surrounded by the screen edge band.
  16. The method according to claim 15, wherein binarizing the first image into the second image based on the brightness analysis of the first image's grey-scale histogram comprises:
    determining the grey level corresponding to the largest brightness proportion in the first image's grey-scale histogram;
    selecting a preset number of grey levels centered on that grey level as thresholds, binarizing the first image, and taking the binarized image containing the screen features as the second image.
  17. The method according to claim 15, specifically comprising:
    obtaining all first-level closed contours in the second image that contain the second-level closed contour;
    performing polygon fitting on the second-level closed contours, and determining the second-level contours whose fitting result is a quadrilateral to be screen-projection-area candidates;
    determining the convexity of the candidates, and determining the candidate that is a convex quadrilateral to be the projection area corresponding to the screen.
  18. The method according to claim 15, wherein, in determining in the second image that a contained second-level closed contour is the projection area corresponding to the screen, the method further comprises:
    when the second-level closed contour also contains a third-level closed contour generated by projection, not extracting or analyzing the third-level closed contour.
  19. The method according to claim 15, wherein capturing the first image of the environment specifically comprises capturing a first image of the region where the screen is located.
PCT/CN2022/122810 2021-11-16 2022-09-29 一种投影设备及显示控制方法 WO2023087950A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280063192.3A CN118104230A (zh) 2021-11-16 2022-09-29 一种投影设备及显示控制方法

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202111355866.0 2021-11-16
CN202111355866 2021-11-16
CN202210006233.7 2022-01-05
CN202210006233.7A CN114466173A (zh) 2021-11-16 2022-01-05 投影设备及自动投入幕布区域的投影显示控制方法
CN202210583357.1A CN115022606B (zh) 2021-11-16 2022-05-25 一种投影设备及避障投影方法
CN202210583357.1 2022-05-25

Publications (1)

Publication Number Publication Date
WO2023087950A1 true WO2023087950A1 (zh) 2023-05-25

Family

ID=80658581

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/CN2022/122810 WO2023087950A1 (zh) 2021-11-16 2022-09-29 一种投影设备及显示控制方法
PCT/CN2022/132368 WO2023088329A1 (zh) 2021-11-16 2022-11-16 投影设备及投影图像校正方法
PCT/CN2022/132250 WO2023088304A1 (zh) 2021-11-16 2022-11-16 一种投影设备和投影区域修正方法

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/CN2022/132368 WO2023088329A1 (zh) 2021-11-16 2022-11-16 投影设备及投影图像校正方法
PCT/CN2022/132250 WO2023088304A1 (zh) 2021-11-16 2022-11-16 一种投影设备和投影区域修正方法

Country Status (2)

Country Link
CN (12) CN114466173A (zh)
WO (3) WO2023087950A1 (zh)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023087947A1 (zh) * 2021-11-16 2023-05-25 海信视像科技股份有限公司 一种投影设备和校正方法
WO2023087960A1 (zh) * 2021-11-16 2023-05-25 海信视像科技股份有限公司 投影设备及调焦方法
CN118104229A (zh) * 2021-11-16 2024-05-28 海信视像科技股份有限公司 一种投影设备及投影图像的显示控制方法
CN114760454A (zh) * 2022-05-24 2022-07-15 海信视像科技股份有限公司 一种投影设备及触发校正方法
CN115002432A (zh) * 2022-05-30 2022-09-02 海信视像科技股份有限公司 一种投影设备及避障投影方法
CN114466173A (zh) * 2021-11-16 2022-05-10 海信视像科技股份有限公司 投影设备及自动投入幕布区域的投影显示控制方法
CN114640832A (zh) * 2022-02-11 2022-06-17 厦门聚视智创科技有限公司 一种投影图像的自动校正方法
CN115002429B (zh) * 2022-05-07 2023-03-24 深圳市和天创科技有限公司 一种基于摄像头计算自动校准投影位置的投影仪
CN114885142B (zh) * 2022-05-27 2024-05-17 海信视像科技股份有限公司 一种投影设备及调节投影亮度方法
CN115314689A (zh) * 2022-08-05 2022-11-08 深圳海翼智新科技有限公司 投影校正方法、装置、投影仪和计算机程序产品
CN115314691B (zh) * 2022-08-09 2023-05-09 北京淳中科技股份有限公司 一种图像几何校正方法、装置、电子设备及存储介质
CN115061415B (zh) * 2022-08-18 2023-01-24 赫比(成都)精密塑胶制品有限公司 一种自动流程监控方法、设备以及计算机可读存储介质
CN115474032B (zh) * 2022-09-14 2023-10-03 深圳市火乐科技发展有限公司 投影交互方法、投影设备和存储介质
CN115529445A (zh) * 2022-09-15 2022-12-27 海信视像科技股份有限公司 一种投影设备及投影画质调整方法
WO2024066776A1 (zh) * 2022-09-29 2024-04-04 海信视像科技股份有限公司 投影设备及投影画面处理方法
CN115361540B (zh) * 2022-10-20 2023-01-24 潍坊歌尔电子有限公司 投影图像的异常原因自检方法、装置、投影机及存储介质
CN115760620B (zh) * 2022-11-18 2023-10-20 荣耀终端有限公司 一种文档矫正方法、装置及电子设备
CN116723395A (zh) * 2023-04-21 2023-09-08 深圳市橙子数字科技有限公司 一种基于摄像头的无感对焦方法及装置
CN116993879B (zh) * 2023-07-03 2024-03-12 广州极点三维信息科技有限公司 一种自动避障布光的方法、电子设备和存储介质
CN117278735B (zh) * 2023-09-15 2024-05-17 山东锦霖智能科技集团有限公司 一种沉浸式图像投影设备
CN117830437B (zh) * 2024-03-01 2024-05-14 中国科学院长春光学精密机械与物理研究所 一种大视场远距离多目相机内外参数标定装置及方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584729A (zh) * 2003-08-22 2005-02-23 日本电气株式会社 图像投影方法和设备
US20080266253A1 (en) * 2007-04-25 2008-10-30 Lisa Seeman System and method for tracking a laser spot on a projected computer screen image
CN102236784A (zh) * 2010-05-07 2011-11-09 株式会社理光 屏幕区域检测方法及系统
CN110769214A (zh) * 2018-08-20 2020-02-07 成都极米科技股份有限公司 基于帧差值的自动跟踪投影方法及装置
US20210152795A1 (en) * 2018-04-17 2021-05-20 Sony Corporation Information processing apparatus and method
CN114466173A (zh) * 2021-11-16 2022-05-10 海信视像科技股份有限公司 投影设备及自动投入幕布区域的投影显示控制方法

Family Cites Families (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005031267A (ja) * 2003-07-09 2005-02-03 Sony Corp 画像投射装置及び画像投射方法
JP2006109088A (ja) * 2004-10-05 2006-04-20 Olympus Corp マルチプロジェクションシステムにおける幾何補正方法
JP4831219B2 (ja) * 2008-10-29 2011-12-07 セイコーエプソン株式会社 プロジェクタおよびプロジェクタの制御方法
CN102681312B (zh) * 2011-03-16 2015-06-24 宏瞻科技股份有限公司 激光投影系统的人眼安全保护系统
JP2013033206A (ja) * 2011-07-06 2013-02-14 Ricoh Co Ltd 投影型表示装置、情報処理装置、投影型表示システム、およびプログラム
CN103293836A (zh) * 2012-02-27 2013-09-11 联想(北京)有限公司 一种投影方法及电子设备
CN103002240B (zh) * 2012-12-03 2016-11-23 深圳创维数字技术有限公司 一种设定避开障碍物投影的方法及设备
JP2015128242A (ja) * 2013-12-27 2015-07-09 ソニー株式会社 画像投影装置及びそのキャリブレーション方法
CN103905762B (zh) * 2014-04-14 2017-04-19 上海索广电子有限公司 投影模块的投影画面自动检查方法
CN103942796B (zh) * 2014-04-23 2017-04-12 清华大学 一种高精度的投影仪‑摄像机标定系统及标定方法
JP2016014712A (ja) * 2014-07-01 2016-01-28 キヤノン株式会社 シェーディング補正値算出装置およびシェーディング補正値算出方法
JP6186599B1 (ja) * 2014-12-25 2017-08-30 パナソニックIpマネジメント株式会社 投影装置
CN104536249B (zh) * 2015-01-16 2016-08-24 努比亚技术有限公司 调节投影仪焦距的方法和装置
CN104835143A (zh) * 2015-03-31 2015-08-12 中国航空无线电电子研究所 一种快速投影机系统参数标定方法
JP2016197768A (ja) * 2015-04-02 2016-11-24 キヤノン株式会社 画像投射システム及び投射画像の制御方法
WO2016194191A1 (ja) * 2015-06-04 2016-12-08 日立マクセル株式会社 投射型映像表示装置および映像表示方法
CN105208308B (zh) * 2015-09-25 2018-09-04 广景视睿科技(深圳)有限公司 一种获取投影仪的最佳投影焦点的方法及系统
CN107547881B (zh) * 2016-06-24 2019-10-11 上海顺久电子科技有限公司 一种投影成像的自动校正方法、装置及激光电视
CN106713879A (zh) * 2016-11-25 2017-05-24 重庆杰夫与友文化创意有限公司 避障投影方法及其装置
KR101820905B1 (ko) * 2016-12-16 2018-01-22 씨제이씨지브이 주식회사 촬영장치에 의해 촬영된 이미지 기반의 투사영역 자동보정 방법 및 이를 위한 시스템
CN109215082B (zh) * 2017-06-30 2021-06-22 杭州海康威视数字技术股份有限公司 一种相机参数标定方法、装置、设备及系统
CN109426060A (zh) * 2017-08-21 2019-03-05 深圳光峰科技股份有限公司 投影仪自动调焦方法及投影仪
CN107479168A (zh) * 2017-08-22 2017-12-15 深圳市芯智科技有限公司 一种能实现快速对焦功能的投影机及对焦方法
KR101827221B1 (ko) * 2017-09-07 2018-02-07 주식회사 조이펀 좌표계 자동 보정이 가능한 혼합현실 콘텐츠 제공 장치 및 이를 이용한 좌표계 자동 보정 방법
CN109856902A (zh) * 2017-11-30 2019-06-07 中强光电股份有限公司 投影装置及自动对焦方法
CN110058483B (zh) * 2018-01-18 2022-06-10 深圳光峰科技股份有限公司 自动对焦系统、投影设备、自动对焦方法及存储介质
CN109544643B (zh) * 2018-11-21 2023-08-11 北京佳讯飞鸿电气股份有限公司 一种摄像机图像校正方法及装置
CN109495729B (zh) * 2018-11-26 2023-02-10 青岛海信激光显示股份有限公司 投影画面校正方法和系统
CN110769225B (zh) * 2018-12-29 2021-11-09 成都极米科技股份有限公司 基于幕布的投影区域获取方法及投影装置
CN110769226B (zh) * 2019-02-27 2021-11-09 成都极米科技股份有限公司 超短焦投影机的对焦方法、对焦装置及可读存储介质
CN110769227A (zh) * 2019-02-27 2020-02-07 成都极米科技股份有限公司 超短焦投影机的对焦方法、对焦装置及可读存储介质
CN110336987B (zh) * 2019-04-03 2021-10-08 北京小鸟听听科技有限公司 一种投影仪畸变校正方法、装置和投影仪
CN110636273A (zh) * 2019-10-15 2019-12-31 歌尔股份有限公司 调整投影画面的方法、装置、可读存储介质及投影仪
CN111028297B (zh) * 2019-12-11 2023-04-28 凌云光技术股份有限公司 面结构光三维测量系统的标定方法
CN111050150B (zh) * 2019-12-24 2021-12-31 成都极米科技股份有限公司 焦距调节方法、装置、投影设备及存储介质
CN111050151B (zh) * 2019-12-26 2021-08-17 成都极米科技股份有限公司 投影对焦的方法、装置、投影仪和可读存储介质
CN111311686B (zh) * 2020-01-15 2023-05-02 浙江大学 一种基于边缘感知的投影仪失焦校正方法
CN113554709A (zh) * 2020-04-23 2021-10-26 华东交通大学 一种基于偏振信息的相机-投影仪系统标定方法
CN111429532B (zh) * 2020-04-30 2023-03-31 南京大学 一种利用多平面标定板提高相机标定精确度的方法
CN113301314B (zh) * 2020-06-12 2023-10-24 阿里巴巴集团控股有限公司 对焦方法、投影仪、成像设备和存储介质
CN112050751B (zh) * 2020-07-17 2022-07-22 深圳大学 一种投影仪标定方法、智能终端及存储介质
CN111932571B (zh) * 2020-09-25 2021-01-22 歌尔股份有限公司 图像的边界识别方法、装置以及计算机可读存储介质
CN112584113B (zh) * 2020-12-02 2022-08-30 深圳市当智科技有限公司 基于映射校正的宽屏投影方法、系统及可读存储介质
CN112598589A (zh) * 2020-12-17 2021-04-02 青岛海信激光显示股份有限公司 激光投影系统及图像校正方法
CN112904653A (zh) * 2021-01-26 2021-06-04 四川长虹电器股份有限公司 用于投影设备的调焦方法和调焦装置
CN112995624B (zh) * 2021-02-23 2022-11-08 峰米(北京)科技有限公司 用于投影仪的梯形误差校正方法及装置
CN112995625B (zh) * 2021-02-23 2022-10-11 峰米(北京)科技有限公司 用于投影仪的梯形校正方法及装置
CN112689136B (zh) * 2021-03-19 2021-07-02 深圳市火乐科技发展有限公司 投影图像调整方法、装置、存储介质及电子设备
CN113099198B (zh) * 2021-03-19 2023-01-10 深圳市火乐科技发展有限公司 投影图像调整方法、装置、存储介质及电子设备
CN112804507B (zh) * 2021-03-19 2021-08-31 深圳市火乐科技发展有限公司 投影仪校正方法、系统、存储介质以及电子设备
CN113038105B (zh) * 2021-03-26 2022-10-18 歌尔股份有限公司 投影仪的调整方法和调整设备
CN113160339B (zh) * 2021-05-19 2024-04-16 中国科学院自动化研究所苏州研究院 一种基于沙姆定律的投影仪标定方法
CN113286134A (zh) * 2021-05-25 2021-08-20 青岛海信激光显示股份有限公司 图像校正方法及拍摄设备
CN113473095B (zh) * 2021-05-27 2022-10-21 广景视睿科技(深圳)有限公司 一种避障动向投影的方法和设备
CN113489961B (zh) * 2021-09-08 2022-03-22 深圳市火乐科技发展有限公司 投影校正方法、装置、存储介质和投影设备

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584729A (zh) * 2003-08-22 2005-02-23 日本电气株式会社 图像投影方法和设备
US20080266253A1 (en) * 2007-04-25 2008-10-30 Lisa Seeman System and method for tracking a laser spot on a projected computer screen image
CN102236784A (zh) * 2010-05-07 2011-11-09 株式会社理光 屏幕区域检测方法及系统
US20210152795A1 (en) * 2018-04-17 2021-05-20 Sony Corporation Information processing apparatus and method
CN110769214A (zh) * 2018-08-20 2020-02-07 成都极米科技股份有限公司 基于帧差值的自动跟踪投影方法及装置
CN114466173A (zh) * 2021-11-16 2022-05-10 海信视像科技股份有限公司 投影设备及自动投入幕布区域的投影显示控制方法

Also Published As

Publication number Publication date
CN118077192A (zh) 2024-05-24
CN115174877B (zh) 2024-05-28
CN114466173A (zh) 2022-05-10
CN114885137A (zh) 2022-08-09
CN114885138A (zh) 2022-08-09
WO2023088304A1 (zh) 2023-05-25
CN114885136B (zh) 2024-05-28
CN114885136A (zh) 2022-08-09
CN115022606A (zh) 2022-09-06
CN115022606B (zh) 2024-05-17
WO2023088329A1 (zh) 2023-05-25
CN114827563A (zh) 2022-07-29
CN115174877A (zh) 2022-10-11
CN118104230A (zh) 2024-05-28
CN118104231A (zh) 2024-05-28
CN114401390A (zh) 2022-04-26
CN114727079A (zh) 2022-07-08
CN114205570A (zh) 2022-03-18

Similar Documents

Publication Publication Date Title
WO2023087950A1 (zh) 一种投影设备及显示控制方法
WO2023087947A1 (zh) 一种投影设备和校正方法
WO2019174435A1 (zh) 投射器及其检测方法和装置、图像获取装置、电子设备、可读存储介质
CN105912145A (zh) 一种激光笔鼠标系统及其图像定位方法
US20160216778A1 (en) Interactive projector and operation method thereof for determining depth information of object
JP2013064827A (ja) 電子機器
CN115002433A (zh) 投影设备及roi特征区域选取方法
CN115002432A (zh) 一种投影设备及避障投影方法
CN114866751A (zh) 一种投影设备及触发校正方法
JP2012181264A (ja) 投影装置、投影方法及びプログラム
WO2023088303A1 (zh) 投影设备及避障投影方法
CN100403785C (zh) 投射系统及投射器
CN116320335A (zh) 一种投影设备及调整投影画面尺寸的方法
US11950339B2 (en) Lighting apparatus, and corresponding system, method and computer program product
CN116339049A (zh) Tof传感器和基于tof传感器的投影矫正方法及系统
CN116055696A (zh) 一种投影设备及投影方法
CN114760454A (zh) 一种投影设备及触发校正方法
WO2023087951A1 (zh) 一种投影设备及投影图像的显示控制方法
CN109104597A (zh) 投影装置、投影方法以及记录介质
CN114928728A (zh) 投影设备及异物检测方法
CN114885142B (zh) 一种投影设备及调节投影亮度方法
WO2023087960A1 (zh) 投影设备及调焦方法
WO2024066776A9 (zh) 投影设备及投影画面处理方法
US20230102878A1 (en) Projector and projection method
CN118158367A (zh) 一种投影设备及投影画面入幕方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22894488

Country of ref document: EP

Kind code of ref document: A1