WO2023087947A1 - Projection device and correction method - Google Patents
- Publication number
- WO2023087947A1 (PCT/CN2022/122670)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- projection
- correction
- area
- projection device
- image
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
Definitions
- the present application relates to the technical field of projection devices, and in particular to a projection device and a correction method.
- a projection device is a display device that can project images or video onto a screen.
- the projection device can project the laser light of a specific color onto the screen through the refraction of the optical lens assembly to form a specific image.
- when the projection device is in use, if its position shifts or it is not perpendicular to the projection surface, the projection device cannot completely project the image onto the preset projection area, resulting in a trapezoidal projection picture and a poor user experience.
- with existing projection devices, the user corrects the image by controlling the projection device to adjust the projection angle, thereby controlling the display position and shape of the projected image and realizing image correction.
- however, this method requires the user to manually select and adjust the direction; the process is cumbersome, the projected image cannot be corrected accurately, and the user experience is poor.
- the present application provides a projection device, including: an optical machine configured to project playback content onto a projection surface; a camera configured to capture the images displayed on the projection surface; and a controller configured to: in response to an image correction instruction, control the optical machine to project a calibration chart onto the projection surface and control the camera to photograph the calibration chart to obtain a calibration image, the calibration chart containing calibration feature points; determine the first positions of the calibration feature points in the calibration image and obtain a projection relationship according to the first positions, the projection relationship being the mapping relationship between the playback content projected by the optical machine and the projection surface; determine target position information of the area to be projected on the projection surface according to the projection relationship; and control the optical machine to project the playback content to the area to be projected according to the target position information.
- the present application provides an image correction method for a projection device, the method comprising: in response to an image correction instruction, controlling the optical machine to project a calibration chart onto the projection surface and controlling the camera to photograph the calibration chart to obtain a calibration image, the calibration chart including calibration feature points; determining the first positions of the calibration feature points in the calibration image and obtaining a projection relationship according to the first positions, the projection relationship being the mapping relationship between the playback content projected by the optical machine and the projection surface; determining target position information of the area to be projected on the projection surface according to the projection relationship; and controlling the optical machine to project the playback content to the area to be projected according to the target position information.
- Fig. 1 is a schematic diagram of the placement of a projection device in some embodiments.
- Fig. 2 is a schematic diagram of an optical path of a projection device in some embodiments.
- Fig. 3 is a schematic diagram of a circuit architecture of a projection device in some embodiments.
- Fig. 4 is a schematic structural diagram of a projection device in some embodiments.
- Fig. 5 is a schematic diagram of a circuit structure of a projection device in some embodiments.
- Fig. 6 is a schematic diagram of a change in the position of the projection device in some embodiments.
- Fig. 7 is a schematic diagram of a system framework by which a projection device realizes display control in some embodiments.
- Fig. 8 is an interaction flowchart of components of a projection device in some embodiments.
- Fig. 9 is a schematic diagram of a calibration chart in some embodiments.
- Fig. 10 is a schematic diagram of a calibration chart in some embodiments.
- Fig. 11 is a schematic flowchart of obtaining a projection relationship in some embodiments.
- Fig. 12 is a schematic diagram of a calibration chart in some embodiments.
- Fig. 13 is a schematic diagram before and after image correction in some embodiments.
- Fig. 14 is a schematic diagram of HSV values for each color in some embodiments.
- Fig. 15 is a schematic diagram of gray value curves in some embodiments.
- Fig. 16 is a schematic diagram of gray value curves in some embodiments.
- Fig. 17 is a schematic diagram of gray value curves in some embodiments.
- Fig. 18 is a schematic flowchart of an embodiment of an image correction method.
- Fig. 19A is a schematic diagram of the signaling interaction sequence of the projection device realizing the eye-protection function according to another embodiment of the present application.
- Fig. 19B is a schematic diagram of the signaling interaction sequence of the projection device realizing the display image correction function according to another embodiment of the present application.
- Fig. 19C is a schematic flowchart of a projection device implementing an autofocus algorithm according to another embodiment of the present application.
- Fig. 19D is a schematic flowchart of a projection device implementing keystone correction and obstacle avoidance algorithms according to another embodiment of the present application.
- Fig. 19E is a schematic flowchart of a projection device implementing a screen-entry algorithm according to another embodiment of the present application.
- Fig. 19F is a schematic flowchart of a projection device implementing an eye-protection algorithm according to another embodiment of the present application.
- Fig. 20 is a schematic diagram of the lens structure of the projection device in an embodiment of the present application.
- Fig. 21 is a schematic structural diagram of a distance sensor and a camera of a projection device in an embodiment of the present application.
- Fig. 22 is a schematic diagram of a projection device triggering image correction based on distance data in an embodiment of the present application.
- Fig. 23 is a schematic diagram of a projection device triggering image correction based on acceleration data in an embodiment of the present application.
- a projection device will be taken as an example for illustration.
- a projection device is a device that can project images or videos onto a screen.
- the projection device can be connected through different interfaces to computers, radio and television networks, the Internet, VCD (Video Compact Disc) players, DVD (Digital Versatile Disc) players, game consoles, DV cameras, and the like, to play the corresponding video signals.
- Projection equipment is widely used in homes, offices, schools and entertainment venues, etc.
- Fig. 1 is a schematic diagram of placement of projection equipment in some embodiments.
- a projection device provided by the present application includes a projection screen 1 and a device 2 for projection.
- the projection screen 1 is fixed at a first position, and the projection device 2 is placed at a second position such that the picture it projects coincides with the projection screen 1.
- Fig. 2 is a schematic diagram of an optical path of a projection device in some embodiments.
- An embodiment of the present application provides a projection device, including a laser light source 100 , an optical machine 200 , a lens 300 , and a projection medium 400 .
- the laser light source 100 provides illumination for the optical machine 200.
- the optical machine 200 modulates the light beam and outputs it to the lens 300 for imaging, projecting onto the projection medium 400 to form a projection picture.
- the laser light source 100 of the projection device includes a laser component and an optical lens component, and the light beam emitted by the laser component can pass through the optical lens component to provide illumination for the optical machine.
- the optical lens components require a higher level of environmental cleanliness and hermetic sealing, while the chamber in which the laser component is installed can be sealed to a lower dustproof level to reduce sealing costs.
- the light engine 200 of the projection device may be implemented to include a blue light engine, a green light engine, and a red light engine, and may also include a heat dissipation system, a circuit control system, and the like. It should be noted that, in some embodiments, the light emitting component of the projection device may also be realized by an LED light source.
- the present application provides a projection device, including a three-color light engine and a controller; wherein, the three-color light engine is used to modulate and generate laser light containing pixels in a user interface, including a blue light engine, a green light engine, and a red light engine.
- the controller is configured to: obtain the average gray value of the user interface; and, when it is determined that the average gray value is greater than a first threshold and its duration is greater than a time threshold, lower the operating current of the red light engine according to a preset gradient value to reduce the heating of the three-color light engine. It can be seen that, by reducing the operating current of the red light engine integrated in the three-color light engine, overheating of the red light engine, and thus of the three-color light engine and the projection device, can be controlled.
- the light machine 200 can be implemented as a three-color light machine, and the three-color light machine integrates a blue light machine, a green light machine, and a red light machine.
- the embodiment of the present application will be described by taking the light machine 200 of the projection device as an example including a blue light machine, a green light machine, and a red light machine.
- the optical system of the projection device is composed of a light source part and an optical machine part.
- the function of the light source part is to provide illumination for the optical machine, and the function of the optical machine part is to modulate the illumination beam provided by the light source to finally form the projection picture.
- FIG. 3 is a schematic diagram of a circuit architecture of a projection device in some embodiments.
- the projection device may include a display control circuit 10, a laser light source 20, at least one laser driving component 30, and at least one brightness sensor 40; the laser light source 20 may include at least one laser corresponding one-to-one with the at least one laser driving component 30.
- the at least one refers to one or more, and a plurality refers to two or more.
- the projection device can realize adaptive adjustment. For example, by setting the brightness sensor 40 in the light output path of the laser light source 20 , the brightness sensor 40 can detect the first brightness value of the laser light source and send the first brightness value to the display control circuit 10 .
- the display control circuit 10 can obtain a second brightness value corresponding to the driving current of each laser; when it determines that the difference between a laser's second brightness value and its first brightness value is greater than a difference threshold, it determines that a COD (catastrophic optical damage) fault has occurred in that laser. The display control circuit can then adjust the current control signal of the laser driving component corresponding to that laser until the difference is less than or equal to the difference threshold, thereby eliminating the COD fault. In this way, the projection device can eliminate laser COD failures in time, reduce the laser damage rate, and improve the image display effect of the projection device.
- the laser light source 20 includes three lasers that correspond one-to-one to the laser driving assembly 30 , and the three lasers may be a blue laser 201 , a red laser 202 and a green laser 203 respectively.
- the blue laser 201 is used to emit blue laser
- the red laser 202 is used to emit red laser
- the green laser 203 is used to emit green laser.
- the laser driving component 30 may be implemented to include a plurality of sub-laser driving components corresponding to lasers of different colors.
- the display control circuit 10 is used to output the primary color enable signal and the primary color current control signal to the laser driving component 30 to drive the laser to emit light.
- the display control circuit 10 is connected to the laser driving component 30 and is used to output at least one enable signal corresponding to the three primary colors of each frame of a multi-frame display image and transmit each enable signal to the corresponding laser driving component 30, and to output at least one current control signal corresponding to the three primary colors of each frame of image and transmit each current control signal to the corresponding laser driving component 30.
- the display control circuit 10 may be a microcontroller unit (microcontroller unit, MCU), also known as a single-chip microcomputer.
- the current control signal may be a pulse width modulation (pulse width modulation, PWM) signal.
- the blue laser 201 , the red laser 202 and the green laser 203 are respectively connected to the laser driving assembly 30 .
- the laser driving component 30 can provide corresponding driving current to the blue laser 201 in response to the blue PWM signal and the enable signal sent by the display control circuit 10 .
- the blue laser 201 is used to emit light under the driving current.
- the brightness sensor is arranged in the light output path of the laser light source, usually on one side of the light output path, without blocking the light path.
- at least one brightness sensor 40 is arranged in the light output path of the laser light source 20, and each brightness sensor is connected to the display control circuit 10 for detecting the first brightness value of a laser and sending the first brightness value to the display control circuit 10.
- the display control circuit 10 can obtain the second brightness value corresponding to the driving current of each laser from the stored correspondence; the driving current is the laser's current actual working current, and the second brightness value corresponding to the driving current is the brightness value the laser can emit when working normally at that driving current.
- the difference threshold may be a fixed value pre-stored in the display control circuit 10 .
- when the display control circuit 10 adjusts the current control signal of the laser driving component 30 corresponding to a laser, it can reduce the duty cycle of that current control signal, thereby reducing the driving current of the laser.
- the brightness sensor 40 can detect the first brightness value of the blue laser 201 and send the first brightness value to the display control circuit 10 .
- the display control circuit 10 can obtain the driving current of the blue laser 201 and obtain the second brightness value corresponding to that driving current from the correspondence between current and brightness value. It then detects whether the difference between the second brightness value and the first brightness value is greater than the difference threshold; if so, this indicates that the blue laser 201 has a COD failure, and the display control circuit 10 reduces the current control signal of the laser driving component 30 corresponding to the blue laser 201.
- the display control circuit 10 can then acquire the first brightness value of the blue laser 201 and the second brightness value corresponding to its driving current again; if the difference between the second brightness value and the first brightness value is still greater than the difference threshold, the current control signal of the laser driving component 30 corresponding to the blue laser 201 is lowered again. This loops until the difference is less than or equal to the difference threshold. In this way, by reducing the driving current of the blue laser 201, the COD failure of the blue laser 201 is eliminated.
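To make the feedback loop concrete, the following is a minimal sketch of the brightness-difference adjustment described above. The injected callables stand in for the sensor read, the stored current-to-brightness correspondence, the driving-current read, and the PWM output; their names and the threshold and step values are illustrative assumptions, not details from the present application.

```python
def eliminate_cod_fault(read_measured, expected_for_current, get_current, set_duty,
                        diff_threshold=5.0, duty_step=0.05, duty=1.0):
    """Feedback loop: lower the PWM duty cycle (and hence the driving current)
    until the expected-vs-measured brightness difference is within the threshold."""
    while duty > 0.0:
        first = read_measured()                       # first brightness value (sensor)
        second = expected_for_current(get_current())  # second brightness value (correspondence)
        if second - first <= diff_threshold:
            return duty                               # COD fault considered eliminated
        duty = max(0.0, duty - duty_step)             # reduce duty cycle -> reduce driving current
        set_duty(duty)
    return 0.0
```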
- Fig. 4 is a schematic structural diagram of a projection device in some embodiments.
- the laser light source 20 in the projection device may include an independently arranged blue laser 201, red laser 202, and green laser 203, in which case the projection device may also be called a three-color projection device. The blue laser 201, red laser 202, and green laser 203 are all MCL-packaged lasers, which are small in size and conducive to a compact arrangement of the optical paths.
- the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), random access memory (RAM), read-only memory (ROM), a first through an nth interface for input/output, a communication bus, and the like.
- the projection device may be configured with a camera for cooperating with the projection device to realize adjustment and control of the projection process.
- the camera configured on the projection device may specifically be implemented as a 3D camera or a binocular camera; when implemented as a binocular camera, it specifically includes a left camera and a right camera. The binocular camera can capture the picture presented on the corresponding screen of the projection device, that is, on the projection surface: the image and playback content projected by the optical machine built into the projection device.
- based on the images captured by the camera, the projection device controller can couple the included angle between the optical machine and the projection surface with the correct display of the projected image to realize automatic keystone correction.
- Fig. 5 is a schematic diagram of a circuit structure of a projection device in some embodiments.
- the laser driving component 30 may include a driving circuit 301 , a switching circuit 302 and an amplifying circuit 303 .
- the driving circuit 301 may be a driving chip.
- the switch circuit 302 may be a metal-oxide-semiconductor (MOS) transistor.
- the driving circuit 301 is used to output the driving current to the corresponding laser in the laser light source 20 through the VOUT terminal based on the current control signal sent by the display control circuit 10 , and transmit the received enabling signal to the switch circuit 302 through the ENOUT terminal.
- the laser may include n sub-lasers connected in series, denoted sub-lasers LD1 to LDn, where n is a positive integer.
- the switch circuit 302 is connected in series in the current path of the laser, and is used to control the conduction of the current path when the received enable signal is at an effective potential.
- the amplifying circuit 303 is connected respectively to the detection node E in the current path of the laser light source 20 and to the display control circuit 10, and is used to convert the detected driving current of the laser component into a driving voltage, amplify the driving voltage, and transmit the amplified driving voltage to the display control circuit 10.
- the display control circuit 10 is further configured to determine the amplified driving voltage as the driving current of the laser, and obtain a second brightness value corresponding to the driving current.
- the amplifying circuit 303 may include: a first operational amplifier A1, a first resistor (also known as a sampling power resistor) R1, a second resistor R2, a third resistor R3 and a fourth resistor R4.
- the display control circuit 10, the driving circuit 301, the switching circuit 302, and the amplifying circuit 303 form a closed loop to realize feedback adjustment of the laser driving current, so that the display control circuit 10 can adjust the driving current of the laser in time through the difference between the laser's second brightness value and first brightness value, that is, adjust the actual brightness of the laser in time, avoiding long-term COD failure of the laser and improving the accuracy of laser emission control.
- the laser light source 20 includes a blue laser, a red laser and a green laser.
- the blue laser 201 can be set at the L1 position
- the red laser 202 can be set at the L2 position
- the green laser 203 can be set at the L3 position.
- the laser light at the position L1 is transmitted once through the fourth dichroic film 604 , is reflected once at the fifth dichroic film 605 , and then enters the first lens 901 .
- the light efficiency at the L1 position is P1 = Pt × Pf, where Pt represents the transmittance of the fourth dichroic film 604 and Pf represents the reflectance of the fifth dichroic film 605.
- the light efficiency of the laser light at the position L3 is the highest, and the light efficiency of the laser light at the position L1 is the lowest.
- the maximum optical power Pb output by the blue laser 201 is 4.5 watts (W)
- the maximum optical power Pr output by the red laser 202 is 2.5W
- the maximum optical power Pg output by the green laser 203 is 1.5W. That is, the maximum optical power output by the blue laser 201 is the largest, followed by the maximum optical power output by the red laser 202 , and the maximum optical power output by the green laser 203 is the smallest.
- the green laser 203 is therefore placed at the L3 position, the red laser 202 is placed at the L2 position, and the blue laser 201 is placed at the L1 position. That is, the green laser 203 is arranged in the optical path with the highest light efficiency, so as to ensure that the projection device can obtain the highest light efficiency.
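As a worked illustration of this placement rule (the pairing logic is implied by the embodiment; the per-position efficiency figures below are assumed for illustration only), the highest-power laser can be assigned to the lowest-efficiency position and vice versa:

```python
# Assumed light efficiencies per position (L3 highest, L1 lowest) -- illustrative values.
positions = {"L1": 0.90, "L2": 0.94, "L3": 0.98}
# Maximum optical powers from the embodiment: blue 4.5 W, red 2.5 W, green 1.5 W.
lasers = {"blue": 4.5, "red": 2.5, "green": 1.5}

by_power = sorted(lasers, key=lasers.get, reverse=True)  # blue, red, green
by_efficiency = sorted(positions, key=positions.get)     # L1, L2, L3
assignment = dict(zip(by_power, by_efficiency))
print(assignment)  # {'blue': 'L1', 'red': 'L2', 'green': 'L3'}
```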
- after the projection device is started, it can directly enter the display interface of the signal source selected last time, or a signal source selection interface, where the signal source may be a preset video-on-demand program, an HDMI interface, a live TV interface, and the like; the projection device can display the content obtained from these different signal sources.
- the projection device can project the content preset by the user onto the projection surface, which can be a wall or a curtain, and the projection image can be displayed on the projection surface for the user to watch.
- the projection device and the projection surface may not be perpendicular to each other, causing the projected image to be displayed as a trapezoidal image or other deformed images.
- Fig. 6 is a schematic diagram when the position of the projection device changes in some embodiments.
- the projection device starts at position A and can project onto a suitable projection area; the projection area is rectangular and can usually completely and accurately cover the corresponding rectangular screen. When the position of the projection device changes, deformed projected images, such as trapezoidal images, are often generated, so that the projected image no longer matches the rectangular screen.
- the user can manually control the projection device to adjust the projection angle, so as to control the display position and shape of the projected image and realize image correction.
- this manual correction method is cumbersome and cannot accurately correct the projected image.
- the projection device can also be based on a deep learning neural network, coupling the included angle between the optical machine and the projection surface with the correct display of the projected image to realize automatic keystone correction.
- however, the correction speed of this method is slow, and a large amount of scene data is required for model training to reach a given accuracy, so it is not suitable for the instant, fast correction scenarios that occur on user equipment.
- the projection device provided by the embodiment of the present application can perform image correction quickly and accurately, so as to solve the above problems.
- a projection device may include an optical engine, a camera, and a controller as described above.
- the optical machine is used to project the playback content preset by the user onto the projection surface
- the projection surface can be a wall or a curtain.
- the camera is used to capture the images displayed on the projection surface.
- the camera can include a lens assembly, and a photosensitive element and a lens are arranged in the lens assembly.
- the lens refracts light through a plurality of lens elements, so that the light from the image of the scene is focused onto the photosensitive element.
- depending on the specifications of the camera, the photosensitive element may be based on the detection principle of a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor; it converts the optical signal into an electrical signal through the photosensitive material and outputs the converted electrical signal as image data.
- the image captured by the camera can be used for image correction of the projection device.
- the projection device has an image correction function.
- the projection device can automatically correct the projected image to obtain a regular-shaped image, which can be a rectangular image, thereby realizing image correction.
- the projection device may activate an image correction function to correct the projected image.
- the image correction instruction refers to a control instruction used to trigger the projection device to automatically perform image correction.
- the image correction instruction may be an instruction actively input by the user. For example, after the projection device is powered on, the projection device can project an image on the projection surface. At this time, the user can press the image correction switch preset in the projection device, or the image correction button on the remote control of the projection device, so that the projection device turns on the image correction function, and automatically performs image correction on the projected image.
- the image correction instruction can also be automatically generated according to the built-in control program of the projection device.
- the projection device actively generates the image correction instruction after it is turned on. For example, when the projection device detects that a video signal is input for the first time after being turned on, it may generate an image correction instruction, thereby triggering the image correction function.
- the image correction instruction may also be generated by the projection device itself during operation. Considering that the user may actively move the projection device or accidentally touch it during use, causing changes in its posture or installation position, the projected image may also become a trapezoidal image; at this time, in order to ensure the user's viewing experience, the projection device can automatically perform image correction.
- the projection device can detect its own situation in real time.
- when the projection device detects that its posture or installation position has changed, it can generate an image correction instruction, thereby triggering the image correction function.
- the projection device can monitor its own movement in real time through its configured components and immediately feed the monitoring results back to the controller, so that after the projection device moves, the controller immediately starts the image correction function and automatically corrects the projected image at the first opportunity.
- the controller receives monitoring data from the gyroscope and the TOF sensor to determine whether the projection device has moved.
- when movement is detected, the controller can generate an image correction instruction and enable the image correction function.
- the time-of-flight sensor realizes distance measurement and position movement monitoring by adjusting the frequency of the transmitted pulse. Its measurement accuracy will not decrease with the increase of the measurement distance, and it has strong anti-interference ability.
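A minimal sketch of this movement-triggered correction, assuming hypothetical thresholds and pre-read gyroscope and TOF values (the component interfaces are not specified in the present application):

```python
GYRO_THRESHOLD = 0.5   # rad/s, assumed threshold for angular rate
TOF_THRESHOLD = 0.02   # m, assumed threshold for distance change

def should_trigger_correction(gyro_rate, tof_distance, last_tof_distance):
    """Return True if gyroscope or time-of-flight data indicates the device has moved;
    the controller would then generate an image correction instruction."""
    moved_by_rotation = abs(gyro_rate) > GYRO_THRESHOLD
    moved_by_distance = abs(tof_distance - last_tof_distance) > TOF_THRESHOLD
    return moved_by_rotation or moved_by_distance
```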
- Fig. 7 is a schematic diagram of a system framework for implementing display control by a projection device in some embodiments.
- the projection device provided by the present application has the characteristics of telephoto micro-projection.
- the projection device includes a controller, and the controller can control the image display of the optical machine through preset algorithms to realize automatic keystone correction of the displayed image, as well as automatic screen entry, automatic obstacle avoidance, automatic focus, and eye-protection functions. It can be understood that, through display control methods based on geometric correction, the projection device can be moved flexibly in telephoto micro-projection scenarios; when problems such as an abnormal picture occur, the controller can control the projection device to perform automatic display correction so that the display automatically returns to normal.
- the geometric correction-based display control system includes an application program service layer (APK Service: Android application package Service), a service layer, and an underlying algorithm library.
- the application program service layer is used to realize interaction between the projection device and the user; based on the display of the user interface, the user can configure various parameters of the projection device and the display picture, and the controller, by coordinating and calling the algorithm services corresponding to the various functions, can automatically correct the display picture of the projection device when the display is abnormal.
- the service layer can include a correction service, a camera service, a time-of-flight (TOF) service, etc.
- the service layer corresponds upward to the application program service layer (APK Service) to realize the specific functions of the projection device's differently configured services, and connects downward to the algorithm library and to data acquisition services such as the cameras and time-of-flight sensors, encapsulating the complex underlying logic.
- the underlying algorithm library provides correction services and control algorithms for various functions of projection equipment.
- the algorithm library can complete various mathematical operations based on OpenCV to provide basic computing capabilities for the correction service; OpenCV is an open-source, cross-platform computer vision and machine learning software library released under the BSD license that can run in a variety of existing operating system environments.
- the projection device is configured with a gyroscope sensor; during movement of the device, the gyroscope sensor can sense the position movement and actively collect the movement data. The collected data is then sent to the application program service layer through the system framework layer to support the application data required during user interface interaction and application program interaction, and the collected data can also be used for data calls by the controller when implementing algorithm services.
- the projection device is configured with a time-of-flight (TOF) sensor; after the time-of-flight sensor collects the corresponding data, the data is sent to the time-of-flight service of the service layer. After the time-of-flight service acquires the data, it sends the collected data to the application program service layer through the process communication framework, where the data is used for controller data calls, user interfaces, interactive program applications, and the like.
- the projection device is configured with a camera for collecting images, and the camera may be implemented as a binocular camera, or a depth camera, or a 3D camera, etc.;
- the data collected by the camera is sent to the camera service, which then sends the collected image data to the process communication framework and/or the projection device correction service; the correction service can receive the camera data sent by the camera service, and the controller can call the corresponding control algorithms in the algorithm library for the different functions that need to be implemented.
- data interaction with the application service is performed through the process communication framework, and the calculation results are then fed back to the correction service through the process communication framework; the correction service sends the obtained calculation results to the operating system of the projection device to generate a control signal command, and sends the control signal to the optical machine control driver to control the working conditions of the optical machine and realize automatic correction of the displayed image.
- the projection device may correct the projected image.
- the relationship among the distance, the horizontal angle, and the offset angle can be established in advance. The controller of the projection device then obtains the current distance from the optical machine to the projection surface and, combining this with the established relationship, determines the angle between the optical machine and the projection surface at that moment, so as to correct the projected image.
- the included angle is specifically implemented as the included angle between the central axis of the optical machine and the projection plane.
- the projection device can re-determine the position of the area to be projected, so as to project the playback content to the area to be projected, and realize the correction of the projected image.
- Fig. 8 is a flow diagram of interactions between components of a projection device in some embodiments.
- step 1: obtain the image correction instruction; step 2: start shooting; step 3: correct the image; step 4: start projection; step 5: play the content.
- the projection device may perform keystone correction on the projected image.
- a preset keystone correction algorithm can be used to implement a correction function on the projected image.
- the projection device can project a correction map on the projection surface, and determine the position information of the projection surface according to the correction map.
- the projection relationship between the projection surface and the projection device can be determined.
- the projection relationship refers to the projection relationship in which the projection device projects an image onto the projection surface, specifically, the mapping relationship between the playback content of the projection device's optical-mechanical projection and the projection surface.
- after determining the projection relationship between the projection surface and the projection device, the projection device can determine the position information of the area to be projected, so as to perform projection and realize image correction.
- the transformation matrix between the projection plane in the world coordinate system and the optical-mechanical coordinate system can be constructed based on the binocular camera.
- the homography relationship between them, which is also called the projection relationship, can be used to realize arbitrary shape conversion between the projected image and the playback chart.
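As an illustration of such a homography, OpenCV (on which the algorithm library described above is based) can estimate the mapping from matched calibration feature points; the point coordinates below are placeholders:

```python
import cv2
import numpy as np

# Calibration feature points in the played chart (source pixels) and their
# first positions detected in the captured calibration image (placeholders).
src_pts = np.array([[100, 100], [860, 100], [860, 440], [100, 440]], dtype=np.float32)
dst_pts = np.array([[120, 130], [840, 90], [880, 470], [90, 420]], dtype=np.float32)

# Homography mapping chart coordinates to captured-image coordinates.
H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC)

# Any point of the playback content can now be mapped onto the projection surface image.
chart_point = np.array([[[480.0, 270.0]]], dtype=np.float32)
mapped = cv2.perspectiveTransform(chart_point, H)
```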
- the projection device may first project the correction chart.
- the controller can control the optical machine to project the calibration chart onto the projection surface.
- the controller can also control the camera to shoot the calibration chart displayed on the projection surface to obtain a calibration image.
- the calibration chart can contain multiple calibration feature points, so the calibration image captured by the camera will also contain all the calibration feature points in the calibration chart.
- the location information of the projection surface can be determined from the calibration feature points. It should be noted that, for a plane, once the positions of three non-collinear points in the plane are determined, the position information of the plane is determined. Therefore, in order to determine the position information of the projection plane, the positions of at least three points on the projection plane must be determined; that is, the calibration chart needs to contain at least three calibration feature points.
- the location information of the projection surface can be determined according to the at least three calibration feature points.
- the calibration chart may contain patterns and color features preset by the user.
- the calibration chart can be a checkerboard chart, the checkerboard being set as black and white squares. As shown in Fig. 9, the calibration feature points contained in the checkerboard chart are the corner points of the rectangles.
- the pattern in the calibration chart can also be set as a ring chart containing ring patterns; as shown in Fig. 10, the calibration feature points contained in the ring chart are the corresponding solid points on each ring.
- the calibration chart can also be set as a combination of the above two types of patterns, or can also be set as other patterns with identifiable feature points.
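For a checkerboard chart such as that of Fig. 9, the corner-type calibration feature points can be detected with standard OpenCV calls; this is a sketch, with the inner-corner grid size and file name assumed:

```python
import cv2

img = cv2.imread("calibration_image.png")      # calibration image captured by the camera
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

found, corners = cv2.findChessboardCorners(gray, (9, 6))   # assumed 9x6 inner corners
if found:
    # Refine the corner locations to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    # Each entry in `corners` is a first position of one calibration feature point.
```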
- Fig. 11 is a schematic flowchart of obtaining a projection relationship in some embodiments.
- the controller may control the camera to photograph the calibration chart to obtain a calibration image, so as to obtain the positions of the calibration feature points.
- the camera may be a binocular camera, with one camera arranged on each side of the optical machine.
- the calibration chart is captured by the binocular camera: the left camera captures a first calibration image, and the right camera captures a second calibration image.
- the controller may perform image recognition processing on the calibration image to obtain the first position of each calibration feature point in the calibration image.
- the first position refers to the coordinate information of the calibration feature point in the image coordinate system corresponding to the calibration image.
- the image coordinate system refers to the coordinate system with the center of the image as the coordinate origin and the X and Y axes parallel to the two sides of the image.
- the image coordinate system in the embodiment of the present application can be set as follows: for the projection area preset by the user, the center point of the projection area can be set as the origin, the horizontal direction is the X axis, and the vertical direction is the Y axis.
- the image coordinate system can be set in advance according to the preset projection area.
- the coordinate information of the calibration feature points in the calibration image can be determined according to the image coordinate system.
- for the same calibration feature point, its first position in the first calibration image captured by the left camera may be different from its first position in the second calibration image captured by the right camera.
- the coordinates of the calibration feature point in the camera coordinate system of any one of the left and right cameras can be determined through the two first positions of the same calibration feature point.
- the camera coordinate system is specifically a spatial rectangular coordinate system established with the optical center of the camera as the origin, the optical axis as the Z axis, and the XOY plane parallel to the projection plane.
- the left camera is used as an example for introduction.
- the coordinate information of the calibration feature point in the camera coordinate system of the left camera can be determined, which is set as P(x, y, z) .
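The step from the two first positions to the camera-coordinate point P(x, y, z) is classical stereo triangulation; a sketch using OpenCV, where the intrinsics and stereo extrinsics are assumed known from camera calibration and the numeric values are placeholders:

```python
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],           # shared intrinsic matrix (placeholder)
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R_stereo = np.eye(3)                          # rotation from left to right camera
t_stereo = np.array([[-60.0], [0.0], [0.0]])  # binocular baseline (placeholder, mm)

P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left camera as reference
P_right = K @ np.hstack([R_stereo, t_stereo])

# First positions of the same calibration feature point in the two calibration images.
pt_left = np.array([[512.0], [300.0]])
pt_right = np.array([[498.0], [300.0]])

Xh = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)  # homogeneous 4-vector
P_cam = (Xh[:3] / Xh[3]).ravel()   # P(x, y, z) in the left camera coordinate system
```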
- the controller can convert the coordinate information into the coordinates of the calibration feature point in the optical-mechanical coordinate system.
- the optical-mechanical coordinate system is specifically a spatial rectangular coordinate system established with the optical center of the optical machine as the origin, the optical axis as the Z axis, and the plane parallel to the projection plane as the XOY plane.
- the optical-mechanical coordinate system and the camera coordinate system can be converted to each other, so the coordinates of the corrected feature points in the camera coordinate system can be converted into coordinates in the optical-mechanical coordinate system.
- the calibration feature points can be converted between the two coordinate systems according to the external parameters between the optical machine and the camera.
- the external parameters between the optical machine and the camera are device parameters marked on the device housing or in the manual when the projection device is manufactured; they are usually set based on the function, assembly, manufacture, and parts of the projection device and apply to all projection devices of the same model. They can include the rotation matrix and the translation matrix between the optical machine and the camera.
- the conversion relationship between the optical-mechanical coordinate system and the camera coordinate system can be determined, and the coordinates of the corrected feature points in the optical-mechanical coordinate system can be obtained further.
- P'(x', y', z') denotes the coordinates of the calibration feature point in the optical-mechanical coordinate system, R is the rotation matrix between the optical machine and the camera, and T is the translation matrix between the optical machine and the camera; the conversion can be written as P' = R · P + T.
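A short sketch of this conversion, assuming R and T are available as NumPy arrays (the values here are placeholders):

```python
import numpy as np

R = np.eye(3)                      # rotation matrix between optical machine and camera
T = np.array([50.0, 0.0, 10.0])    # translation matrix/vector (placeholder, mm)

def camera_to_light_engine(P):
    """Convert a point from the camera coordinate system into the
    optical-mechanical coordinate system: P' = R @ P + T."""
    return R @ np.asarray(P, dtype=float) + T

P_prime = camera_to_light_engine([12.0, -3.5, 2000.0])
```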
- the projection surface equation of the projection surface in the optical-mechanical coordinate system can be determined.
- Step S1102: obtain the first coordinates of the calibration feature points in the camera coordinate system according to the first positions.
- the controller can acquire the first positions of at least three calibration feature points in the calibration image and determine their first coordinates in the camera coordinate system according to these first positions.
- Step S1103: further convert the first coordinates into second coordinates of the calibration feature points in the optical-mechanical coordinate system.
- Step S1104: after determining the second coordinates of at least three calibration feature points in the optical-mechanical coordinate system, the controller can fit the coordinates of these calibration feature points to obtain the projection surface equation of the projection surface in the optical-mechanical coordinate system.
- the projection surface equation can be expressed in the general plane form: Ax + By + Cz + D = 0.
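Fitting the plane Ax + By + Cz + D = 0 to three or more second coordinates can be done by least squares; a minimal sketch with placeholder points:

```python
import numpy as np

def fit_projection_surface(points):
    """Fit Ax + By + Cz + D = 0 to N >= 3 points via SVD;
    returns (A, B, C, D) with (A, B, C) the plane normal."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal[0], normal[1], normal[2], d

# Second coordinates of calibration feature points in the optical-mechanical
# coordinate system (placeholder values).
A, B, C, D = fit_projection_surface([(0, 0, 2000), (500, 0, 2010),
                                     (0, 300, 1995), (500, 300, 2005)])
```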
- Step S1105: after determining the projection surface equation of the projection surface in the optical-mechanical coordinate system, the controller can obtain the conversion matrix between the optical-mechanical coordinate system and the world coordinate system according to the projection surface equation; this conversion matrix is used to represent the projection relationship.
- the world coordinate system is set as follows: the image coordinate system serves as the XOY plane, that is, the projection plane is the XOY plane, and the origin is the center point of the projection area preset by the user; the Z axis is set in the direction perpendicular to the projection plane, establishing a spatial coordinate system.
- the controller can respectively determine the representation of the projection surface in the world coordinate system and the representation of the projection surface in the optical-mechanical coordinate system.
- the controller may first determine the unit normal vector of the projection surface in the world coordinate system.
- since the projection plane is the XOY plane of the world coordinate system, the unit normal vector of the projection surface in the world coordinate system can be expressed as n_w = (0, 0, 1)ᵀ.
- the controller can also obtain the unit normal vector of the projection surface in the optical-mechanical coordinate system from the projection surface equation; for the plane Ax + By + Cz + D = 0, it can be expressed as n_o = (A, B, C)ᵀ / √(A² + B² + C²).
- the two unit normal vectors satisfy n_w = R1 · n_o, where R1 represents the conversion matrix between the optical-mechanical coordinate system and the world coordinate system.
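One standard way to recover a rotation R1 that aligns the optical-mechanical normal n_o with the world normal n_w = (0, 0, 1)ᵀ is the axis-angle (Rodrigues) construction; this is a sketch of that generic technique, not necessarily the exact derivation used here:

```python
import numpy as np

def rotation_between(n_o, n_w=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix R1 such that R1 @ n_o == n_w (Rodrigues' formula)."""
    n_o = np.asarray(n_o, dtype=float)
    n_o = n_o / np.linalg.norm(n_o)
    axis = np.cross(n_o, n_w)
    s, c = np.linalg.norm(axis), float(n_o @ n_w)   # sin and cos of the rotation angle
    if s < 1e-12:                                   # normals already (anti-)parallel
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

# Normal of the fitted projection surface equation in optical-mechanical coordinates.
R1 = rotation_between([0.05, -0.02, 0.998])
```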
- the conversion matrix can represent the mapping relationship between the playback content of the optical machine projection and the projection surface.
- the controller can thus convert the coordinates of a point between the world coordinate system and the optical-mechanical coordinate system. For a target area already determined on the projection plane, its coordinate representation in the world coordinate system can be determined directly.
- the controller can convert the coordinate representation in the world coordinate system into the coordinate representation in the optical-mechanical coordinate system according to the conversion matrix, so as to determine the position information of the target area for the projection device; the playback content can then be projected directly into the target area, realizing image correction.
- in order to improve the accuracy of image correction, when the image correction instruction is acquired, the controller can project a preset calibration chart onto the projection surface, and the calibration chart can include several calibration areas set by the user.
- Figure 12 is a schematic diagram of a calibration chart in some embodiments. There are three types of calibration areas in the calibration chart: the first calibration area, the second calibration area and the third calibration area.
- the first calibration area is located in the middle area of the calibration chart, such as area A in the figure.
- the first calibration area can be a checkerboard pattern or a ring pattern.
- the second calibration area is located between the first calibration area and the third calibration area, such as the four B areas in the figure, including four calibration areas B1-B4.
- the area of each second calibration area may be the same, and may be smaller than the area of the first calibration area.
- each second calibration area includes a pattern preset by the user, and the pattern includes at least three calibration feature points; the patterns of the second calibration areas can be the same or different, which is not limited in the embodiments of the present application.
- the third calibration area is located in the edge area of the calibration chart, as shown in the 16 C areas in the figure, including 16 calibration areas C1-C16.
- the area of each third correction area may be the same and smaller than the area of the second correction area.
- Each third calibration area includes at least three calibration feature points, which may include a pattern preset by the user, wherein the patterns of each third calibration area may be the same or different.
- the distortion in the first calibration area is the weakest, so the projection surface equation determined from the calibration feature points in the first calibration area has the highest accuracy.
- the distortion in the second calibration areas is stronger than in the first calibration area, so the accuracy of the projection surface equation determined from the calibration feature points in the second calibration areas is lower than that of the first calibration area.
- the distortion in the third calibration areas is the strongest, so the accuracy of the projection surface equation determined from the calibration feature points in the third calibration areas is the lowest.
- the controller can select the corrected feature points in the corrected image as follows:
- the controller may first identify the first correction region in the correction image to obtain position information of at least three correction feature points in the first correction region.
- the projection surface equation can be determined according to the position information of at least three calibration feature points.
- the optical machine when the optical machine is projecting the calibration chart, some areas may be blocked due to environmental reasons and cannot be displayed on the projection surface.
- the first calibration area may be blocked, so that it is missing from the calibration image captured by the camera; the absence of the first calibration area affects the subsequent acquisition of the projection surface equation. Therefore, if the controller does not recognize the first calibration area, it can recognize the other calibration areas.
- the second correction area can be identified preferentially.
- the controller may identify the second correction area in the correction image to obtain position information of at least three correction feature points in the second correction area. Since each second calibration area contains at least three calibration feature points, any second calibration area can be identified to obtain the position information of at least three calibration feature points, or all the second calibration areas can be Identifying and obtaining position information of at least three calibration feature points.
- a third correction area may be identified. Specifically, the controller may identify the third correction region in the correction image to obtain position information of at least three correction feature points in the third correction region. Any one of the third calibration areas may be identified, or all of the third calibration areas may be identified to obtain position information of at least three calibration feature points.
- the projection surface equation can be determined according to the position information of at least three calibration feature points.
- the controller identifies each type of calibration area separately, that is, at least three identified calibration feature points all belong to the same type of calibration area.
- the controller may also identify all the correction areas in the correction image, so as to randomly identify the position information of at least three correction feature points in all the correction areas. These calibration feature points may belong to the same type of calibration area, or may belong to different types of calibration areas.
- multiple recognition modes may be set for the projection device.
- three recognition modes can be set.
- in the first recognition mode, the controller recognizes only the first calibration area and uses the calibration feature points in the first calibration area to obtain the projection surface equation for image correction.
- in this mode, the accuracy of image correction is the highest; however, compared with the entire calibration image, the number of calibration feature points in the first calibration area is small, so the adaptability of recognizing calibration feature points only in the first calibration area is relatively low: once the first calibration area is blocked, no calibration feature points can be recognized.
- in the second recognition mode, the controller recognizes the first calibration area and the second calibration areas and uses the calibration feature points in these two types of areas for image correction. Compared with the first recognition mode, the accuracy of image correction is lower, but the adaptability of recognizing calibration feature points is higher: when the first calibration area is blocked, the second calibration areas can be used to recognize calibration feature points.
- in the third recognition mode, the controller recognizes all the calibration areas, so as to recognize the calibration feature points in all of them. Compared with the first and second recognition modes, the accuracy of image correction is lower still, but the adaptability of recognizing calibration feature points is the highest: even if both the first and second calibration areas are blocked, the third calibration areas can be used to recognize calibration feature points.
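The fallback among these recognition modes can be summarized in a few lines; `detect_points` is a hypothetical detector returning the calibration feature points found in a given calibration area of the image:

```python
def pick_calibration_points(image, detect_points, mode=3):
    """Try area types in order of accuracy (A, then B1-B4, then C1-C16),
    as allowed by the recognition mode; return the first set of >= 3 points."""
    area_groups = [["A"]]
    if mode >= 2:
        area_groups.append([f"B{i}" for i in range(1, 5)])
    if mode >= 3:
        area_groups.append([f"C{i}" for i in range(1, 17)])

    for areas in area_groups:
        points = [p for area in areas for p in detect_points(image, area)]
        if len(points) >= 3:        # a plane needs at least three feature points
            return points
    return []                       # all calibration areas blocked
```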
- the controller can use the projection relationship to determine the target position information of the area to be projected on the projection plane.
- the target position information in the embodiment of the present application refers to the position information of the area to be projected in the optical-mechanical coordinate system.
- the controller may first determine the position information of the area to be projected on the projection surface in the world coordinate system. Then according to the transformation matrix, the position information in the world coordinate system is converted into the position information in the optical machine coordinate system.
- the controller can control the optical machine to project the playback content to the target position information, that is, to project into the area to be projected.
- when acquiring the target position information of the area to be projected, the controller may first determine the specific area to be projected.
- the area to be projected may be a preset projection area. Specifically, it may be set by professional after-sales technicians, or it may be a fixed projection area set by the user, obtained by placing the projection device at the position where the best projection effect is achieved during operation and recording the display area projected onto the projection surface.
- the position information of the preset projection area in the projection plane is definite, so the position information of the projection area in the projection plane, that is, in the image coordinate system, can be determined directly.
- the projection area is generally set as a rectangular area, and the position information of the rectangular area can be represented by the coordinates of its four vertices. Let the coordinates of one vertex of the rectangular area be (x, y). Since the projection plane is the XOY plane of the world coordinate system and the origin is the center point of the preset projection area, the coordinates of this vertex are expressed as (x, y, 0) in the world coordinate system.
- the controller can convert the coordinate representation in the world coordinate system into the coordinate representation in the optical-mechanical coordinate system according to the conversion matrix, so that the vertex coordinates are expressed as (x', y', z') in the optical-mechanical coordinate system.
- the controller can further obtain the coordinates of all vertices of the preset projection area in the optical-mechanical coordinate system, so as to determine the position information of the projection area in the optical-mechanical coordinate system.
- the controller can control the optical machine to project the playback content according to this position information, so as to achieve image correction.
- the controller when determining the area to be projected, may obtain the largest inscribed rectangle of the corrected image corresponding to the calibration chart, and determine the largest inscribed rectangle as the area to be projected.
- the size ratio (aspect ratio) of the largest inscribed rectangle may be the same as the size ratio of the calibration chart.
- the size of the rectangle is continuously adjusted according to the aspect ratio of the correction chart until all four vertices of the rectangle lie on the edge of the display area of the correction chart, thereby obtaining the largest inscribed rectangle.
- the controller can obtain the position information of the largest inscribed rectangle in the optical-mechanical coordinate system, and control the optical machine to project the playing content into the largest inscribed rectangle, so as to realize image correction.
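A minimal sketch of the largest-inscribed-rectangle search described above, assuming the corrected display area is given as a quadrilateral and using OpenCV's point-in-polygon test; the growth step and the centroid starting point are simplifying assumptions, not the patent's exact procedure:

```python
import numpy as np
import cv2

def largest_inscribed_rect(quad, aspect=16 / 9, step=1.0):
    """Grow a rectangle with a fixed aspect ratio from the quad's centroid until
    one of its vertices would leave the display area (the quad)."""
    quad = np.asarray(quad, dtype=np.float32)
    cx, cy = map(float, quad.mean(axis=0))       # start from the centroid
    contour = quad.reshape(-1, 1, 2)

    def inside(w):                               # are all four vertices inside?
        h = w / aspect
        corners = [(cx - w / 2, cy - h / 2), (cx + w / 2, cy - h / 2),
                   (cx + w / 2, cy + h / 2), (cx - w / 2, cy + h / 2)]
        return all(cv2.pointPolygonTest(contour, c, False) >= 0 for c in corners)

    w = step
    while inside(w + step):                      # expand while still contained
        w += step
    return cx - w / 2, cy - w / (2 * aspect), w, w / aspect  # x, y, width, height
```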
- Fig. 13 is a schematic diagram of before and after image correction in some embodiments.
- when the projection device projects the playback content onto the projection surface, if there is an obstacle between the projection device and the projection surface, part of the playback content cannot be projected onto the projection surface. Therefore, the projection device can perform obstacle avoidance processing: avoid the obstacle, determine an accurate projection area, and ensure that the playback content projected into the projection area is complete.
- the projection device has an obstacle avoidance function, and the user can choose to enable or disable this function.
- if the projection device turns off the obstacle avoidance function, the projection device will not perform obstacle avoidance processing.
- the controller can control the optical machine to project the obstacle avoidance map to the projection surface, and control the camera to shoot the obstacle avoidance map to obtain the obstacle avoidance image.
- the obstacle avoidance picture card may be a pure white picture card.
- when it is detected that the projection device has activated the obstacle avoidance function, the controller may perform keystone correction and obstacle avoidance processing simultaneously.
- the controller can control the optical machine to first project the obstacle avoidance chart to the projection surface and control the camera to photograph it, save the obstacle avoidance image, and then perform obstacle avoidance processing according to the obstacle avoidance image. After the obstacle avoidance chart has been photographed, the controller can control the optical machine to project the correction chart to the projection surface and control the camera to photograph it, so as to perform keystone correction processing according to the corrected image.
- the obstacle avoidance image may be preprocessed to ensure that only the display area corresponding to the obstacle avoidance map is retained in the obstacle avoidance image.
- the controller can further perform obstacle avoidance processing according to the obstacle avoidance image to determine the area to be projected on the projection surface.
- when the projection device performs obstacle avoidance processing, if there is an obstacle in the captured image, binarization is generally used to extract the outline of the obstacle so as to realize obstacle avoidance.
- however, binarization uses a fixed threshold. When the user's scene changes (for example from day to night), the brightness of the light and the gray values of obstacles change as well; a fixed threshold is obviously not suitable for all scenes, so obstacle avoidance cannot be achieved accurately.
- the controller may perform HSV (Hue, Saturation, Value) preprocessing on the obstacle avoidance image.
- the obstacle avoidance image can be transformed into the HSV space, and the HSV space defines the HSV values of various colors, so the HSV values of all pixels in the obstacle avoidance image can be determined.
- Figure 14 is a graph of the HSV value ranges of each color in some embodiments, from which the HSV value range corresponding to white can be read.
- since the obstacle avoidance chart is a pure white chart, the HSV values of the pixels in its display area should fall within the white HSV value range.
- the controller can detect whether the HSV value of each pixel in the obstacle avoidance image falls within the white HSV value range.
- the controller can preset an HSV threshold, which can be slightly wider than the white HSV value range, for example: 0 ≤ H ≤ 180, 0 ≤ S ≤ 50, 200 ≤ V ≤ 255.
- the controller can detect whether the HSV value of each pixel in the obstacle avoidance image meets the preset HSV threshold. Pixels that do not meet the conditions are set as the first type of pixels, and pixels that meet the conditions are set as the second type of pixels.
- the controller can detect whether the number of pixels of the first type satisfies a preset first number condition.
- the first number condition may be set as: the number of pixels of the first type does not exceed half of the total number of pixels.
- if the first number condition is satisfied, the controller may set the gray values of all the pixels of the first type to a preset value, and the preset value may be 0. If the condition is not satisfied, that is, the number of pixels of the first type is greater than half of the total number, no processing is performed.
- the purpose of judging the number of pixels outside the HSV threshold range in the above way is to avoid mishandling the special case in which most of the pixels in the image fall outside the white range.
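The HSV preprocessing above can be sketched as follows with OpenCV; the threshold values are the ones given in the embodiment, while the function name and the BGR input format are assumptions:

```python
import cv2

# Thresholds from the embodiment above (slightly wider than pure white):
# 0 <= H <= 180, 0 <= S <= 50, 200 <= V <= 255.
LOWER, UPPER = (0, 0, 200), (180, 50, 255)

def preprocess_obstacle_image(img_bgr):
    """Classify pixels against the white HSV threshold and, if the first number
    condition holds, zero out the gray values of the non-white (first-type) pixels."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    white_mask = cv2.inRange(hsv, LOWER, UPPER)     # second-type pixels -> 255
    first_type = white_mask == 0                    # outside the white range

    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    if first_type.sum() <= gray.size // 2:          # first number condition
        gray[first_type] = 0                        # reset first-type pixels
    # else: more than half the pixels are non-white, so nothing is reset
    return gray
```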
- the controller may further perform obstacle avoidance processing on the obstacle avoidance image.
- the controller can perform grayscale processing on the obstacle avoidance image, and count the number of pixels corresponding to each grayscale value.
- the gray value curve can be obtained, as shown in Figure 15.
- the gray value curve is used to characterize the relationship between the gray value and the number of pixels. Among them, the X-axis represents the gray value, and the Y-axis represents the number of pixels.
- the controller may first determine the gray value with the largest number of pixels and set it as the first gray value, such as point A in the figure. Taking the first gray value as the center, a nearest local minimum of the gray value curve is searched for on each side of the first gray value. It can be seen that point B in the figure is the minimum value point on the left side, and point C is the minimum value point on the right side. Thus, the two minimum gray values corresponding to point B and point C are determined.
- the controller can determine the gray value interval between these two minimum gray values.
- the controller may detect the minimum gray value according to preset conditions.
- the preset condition is: the number of pixels at the minimum gray value is less than a preset ratio of the number of pixels at the first gray value.
- the preset ratio can be set as needed, for example 2/3. The preset condition is then: the number of pixels at the minimum gray value is less than 2/3 of the number of pixels at the first gray value.
- when it is detected that a minimum gray value satisfies the preset condition, that minimum gray value is retained. If the preset condition is not met, that minimum value is not used; instead, the search continues in the same direction for the next nearest local minimum of the gray value curve, until a minimum gray value that satisfies the preset condition is found.
- Figure 16 is a schematic diagram of a grayscale value curve in some embodiments.
- the point P_A is the gray value point with the largest number of pixels, that is, the first gray value.
- P_B1 is the local minimum closest to P_A on the left, but the number of pixels at P_B1 is obviously larger than 2/3 of the number of pixels at P_A, so P_B1 cannot be used as a minimum gray value, and it is necessary to continue searching to the left.
- point P_B2 meets the condition, so P_B2 can be used as a minimum gray value.
- on the right side of point P_A there is a minimum value point P_C, which meets the preset condition and can be used as a minimum gray value. Therefore, point P_B2 and point P_C correspond to the two minimum gray values.
- the gray value interval between the two minimum gray values can then be determined, and the area corresponding to all the pixels whose gray values fall within this interval is determined as the projection area.
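A rough sketch of the histogram analysis described above: find the peak gray value, then walk outward in each direction to the nearest local minimum whose pixel count is below the preset ratio of the peak count. The fallback to the histogram edge when no qualifying minimum exists is an assumption:

```python
import numpy as np

def gray_interval(gray, ratio=2 / 3):
    """Return the gray-value interval around the histogram peak, bounded on each
    side by the nearest local minimum whose count is below ratio * peak count.
    `gray` is an integer (e.g. uint8) grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    peak = int(hist.argmax())                     # first gray value (point A)

    def find_min(direction):
        g = peak + direction
        while 0 < g < 255:
            is_local_min = hist[g] <= hist[g - 1] and hist[g] <= hist[g + 1]
            if is_local_min and hist[g] < ratio * hist[peak]:  # preset condition
                return g
            g += direction
        return 0 if direction < 0 else 255        # fall back to the histogram edge

    lo, hi = find_min(-1), find_min(+1)           # e.g. points B and C
    return lo, hi

# region = (gray >= lo) & (gray <= hi) then marks the candidate projection area.
```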
- the controller may acquire the largest inscribed rectangle in the projection area, and use the largest inscribed rectangle as the area to be projected.
- in some cases, the number of pixels in the gray value interval corresponding to the first gray value is very small, resulting in a relatively small area to be projected.
- the controller can detect whether the number of all pixels in the first gray value interval satisfies a preset second number condition.
- the second number condition may be set as: the number of all pixels in the first gray value interval is less than a preset ratio of the total number of pixels.
- the preset ratio can be 1/30. That is, the second number condition can be set as: the number of all pixels in the first gray value interval is less than 1/30 of the total number of pixels.
- if the second number condition is not satisfied, the area corresponding to all the pixels in the first gray value interval can still be used as the projection area, from which the area to be projected is further acquired.
- if the second number condition is satisfied, the controller may respectively obtain the second gray value and the third gray value with the largest numbers of pixels on the two sides of the first gray value interval.
- the gray value range is generally 0-255. Let the first gray value interval be (a, b); the entire gray value range can then be divided into three intervals: (0, a), (a, b) and (b, 255).
- in the interval (a, b), the gray value with the largest number of pixels is the first gray value; in the interval (0, a), the gray value with the largest number of pixels is taken as the second gray value; and in the interval (b, 255), the gray value with the largest number of pixels is taken as the third gray value.
- the second grayscale value interval corresponding to the second grayscale value and the third grayscale value interval corresponding to the third grayscale value can be obtained.
- Figure 17 is a schematic diagram of a grayscale value curve in some embodiments.
- the point P_A1 is the gray value point with the largest number of pixels, that is, the first gray value.
- the minimum gray value points corresponding to point P_A1 are point P_B1 and point P_C1, that is, the interval between point P_B1 and point P_C1 is the first gray value interval.
- the minimum gray value points corresponding to point P_A2 are point P_B2 and point P_C2, that is, the interval between point P_B2 and point P_C2 is the second gray value interval.
- the minimum gray value points corresponding to point P_A3 are point P_B3 and point P_C3, that is, the interval between point P_B3 and point P_C3 is the third gray value interval.
- the controller can count the number of pixels in the second grayscale value interval and the third grayscale value interval. At the same time, among the three intervals of the first grayscale value interval, the second grayscale value interval and the third grayscale value interval, the grayscale value interval with the largest number of pixels is determined as the target grayscale value interval. The area formed by all the pixels corresponding to the target gray value range can be determined as the projection area. The controller further determines the area to be projected according to the projection area.
- position information of the area to be projected in the world coordinate system may be acquired.
- the controller can convert the position information in the world coordinate system into the position information in the optical machine coordinate system to obtain the target position information.
- the controller can control the optical machine to project the playing content to the area to be projected, so as to realize image correction.
- the embodiment of the present application also provides an image correction method applied to a projection device, as shown in FIG. 18 , the method includes:
- Step 1801: In response to the image correction instruction, control the optical machine to project the calibration chart onto the projection surface, and control the camera to photograph the calibration chart to obtain the calibration image.
- The calibration chart contains calibration feature points.
- Step 1802: Determine the first position of the calibration feature point in the calibration image, and obtain the projection relationship according to the first position; the projection relationship refers to the mapping relationship between the playback content projected by the optical machine and the projection surface.
- Step 1803: Determine the target position information of the area to be projected on the projection surface according to the projection relationship.
- Step 1804: According to the target position information, control the optical machine to project the playback content to the area to be projected.
- FIG. 19A is a schematic diagram of a signaling interaction sequence of a projection device implementing the anti-eye function according to another embodiment of the present application.
- the projection device provided by the present application can realize the anti-eye function.
- the controller can control the user interface to display corresponding prompt information to remind the user to leave the current area; the controller can also control the user interface to reduce the display brightness, so as to prevent the laser from causing damage to the user's eyesight.
- when the projection device is configured in children's viewing mode, the controller will automatically turn on the anti-eye switch.
- the controller controls the projection device to turn on the anti-eye switch.
- when the data collected by the time-of-flight (TOF) sensor, the camera and other devices triggers any preset threshold condition, the controller will control the user interface to reduce the display brightness and display prompt information, and reduce the transmitting power, brightness and intensity of the optical machine, in order to protect the user's eyesight.
- the projection device controller can control the calibration service to send signaling to the time-of-flight sensor to query the current device status of the projection device, and then the controller receives data feedback from the time-of-flight sensor.
- the correction service can send signaling to the process communication framework (HSP Core) notifying the algorithm service to start the anti-eye process;
- the process communication framework (HSP Core) will call the service capabilities from the algorithm library to invoke the corresponding algorithm services, which may include, for example, the photographing detection algorithm, the screenshot algorithm, the foreign object detection algorithm, etc.;
- the process communication framework returns the foreign object detection result to the correction service based on the above algorithm service; for the returned result, if the preset threshold condition is reached, the controller will control the user interface to display prompt information and reduce the display brightness.
- the signaling sequence is shown in FIG. 19A.
- when the anti-eye switch of the projection device is turned on and the user enters a preset specific area, the projection device will automatically reduce the intensity of the laser light emitted by the optical machine, reduce the display brightness of the user interface, and display safety prompt information.
- the control of the projection device on the above-mentioned anti-eye function can be realized by the following methods:
- based on the projection picture acquired by the camera, the controller uses an edge detection algorithm to identify the projection area of the projection device; when the projection area is displayed as a rectangle or an approximate rectangle, the controller obtains the coordinate values of the four vertices of the rectangular projection area through a preset algorithm;
- the perspective transformation method can be used to correct the projection area into a rectangle, and the difference between this rectangle and the projection screenshot can be calculated to determine whether there are foreign objects in the display area; if the judgment result is that there are foreign objects, the projection device automatically triggers the anti-eye function.
- for the area outside the projection range, the difference between the camera content of the current frame and the camera content of the previous frame can be used to determine whether a foreign object has entered; if it is judged that a foreign object has entered, the projection device automatically triggers the anti-eye function.
- the projection device can also use a time-of-flight (ToF) camera or a time-of-flight sensor to detect real-time depth changes in a specific area; if the depth value changes beyond a preset threshold, the projection device will automatically trigger the anti-eye function.
- the projection device judges whether to enable the anti-eye function based on the collected time-of-flight data, screenshot data, and camera data analysis.
- the controller performs depth difference analysis based on the collected time-of-flight data; if the depth difference is greater than a preset threshold X (for example, the preset threshold X may be implemented as 0), it can be determined that there is a foreign object in the specific area of the projection device. If a user is located in the specific area, his eyesight is at risk of being damaged by the laser, so the projection device will automatically activate the anti-eye function to reduce the intensity of the laser light emitted by the optical machine, reduce the display brightness of the user interface, and display safety prompt information.
- the projection device performs color addition mode (RGB) difference analysis based on the captured screenshot data; if the color addition mode difference is greater than the preset threshold Y, it can be determined that there is a foreign object in the specific area of the projection device. If a user is in the specific area, his eyesight is at risk of being damaged by the laser, so the projection device will automatically activate the anti-eye function, reduce the intensity of the emitted laser light, reduce the display brightness of the user interface, and display the corresponding safety prompt information.
- the projection device obtains the projection coordinates according to the collected camera data, determines the projection area of the projection device according to the projection coordinates, and further analyzes the color addition mode (RGB) difference within the projection area; if the color addition mode difference is greater than the preset threshold Y, it can be determined that there is a foreign object in the specific area of the projection device. If a user is in the specific area, his eyesight may be damaged by the laser, so the projection device will automatically activate the anti-eye function, reduce the intensity of the emitted laser light, reduce the display brightness of the user interface, and display the corresponding safety prompt information.
- in the extended area outside the projection area, the controller can still perform color addition mode (RGB) difference analysis; if the color addition mode difference is greater than the preset threshold Y, it can be determined that there is a foreign object near the projection device, and a user in the specific area may have his vision damaged by the laser light emitted by the projection device.
- in this case, the projection device will automatically activate the anti-eye function, reduce the intensity of the emitted laser light, reduce the display brightness of the user interface, and display the corresponding safety prompt information.
- the prompt information is as shown in Figure 19F.
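A simplified sketch of the frame-differencing check described above; the threshold constant, function name, and the use of the mean difference as the comparison statistic are illustrative assumptions:

```python
import cv2
import numpy as np

RGB_DIFF_THRESHOLD = 30.0   # hypothetical stand-in for the preset threshold Y

def foreign_object_entered(prev_frame, curr_frame, region_mask=None):
    """Compare two consecutive camera frames; report True when the mean RGB
    difference (optionally restricted to a region) exceeds the threshold."""
    diff = cv2.absdiff(curr_frame, prev_frame).astype(np.float32)
    if region_mask is not None:
        diff = diff[region_mask > 0]              # keep only the monitored region
    return float(diff.mean()) > RGB_DIFF_THRESHOLD

# if foreign_object_entered(prev, curr):
#     reduce laser intensity, lower UI brightness, and show the safety prompt
```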
- FIG. 19B is a schematic diagram of a signaling interaction sequence of a projection device implementing a display image correction function according to another embodiment of the present application.
- the projection device can monitor the movement of the device through a gyroscope or a gyroscope sensor.
- the correction service sends a signaling to the gyroscope to query the status of the device, and receives a signaling from the gyroscope to determine whether the device is moving.
- the display correction strategy of the projection device can be configured as follows: when the gyroscope and the time-of-flight sensor data change simultaneously, the projection device triggers keystone correction first; the controller starts keystone correction only after the gyroscope data has been stable for a preset length of time; the projection device can also be configured not to respond to commands sent by the remote control buttons while keystone correction is in progress; and, to cooperate with the keystone correction, the projection device will display a pure white image card.
- the keystone correction algorithm can construct the transformation matrix between the projection surface in the world coordinate system and the optical-mechanical coordinate system based on the binocular cameras; it further combines the optical-mechanical intrinsic parameters to calculate the homography between the projection picture and the played chart, and uses the homography to realize arbitrary shape conversion between the projected picture and the played chart.
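As a sketch of the homography computation, OpenCV's findHomography can estimate the 3x3 matrix from corresponding points; the point values below are placeholders, and in practice the correspondences come from the calibration feature points:

```python
import cv2
import numpy as np

# Corresponding points: feature points in the played chart (optical-machine pixel
# coordinates) and their measured positions on the projection surface (world
# coordinates on the XOY plane). All values below are placeholders.
chart_pts = np.array([[100, 100], [860, 100], [860, 640], [100, 640]], np.float32)
surface_pts = np.array([[-0.60, 0.40], [0.60, 0.38],
                        [0.62, -0.40], [-0.58, -0.42]], np.float32)

H, _ = cv2.findHomography(chart_pts, surface_pts)     # 3x3 homography matrix

# Map any chart point onto the projection surface (invert H for the reverse).
pt = cv2.perspectiveTransform(np.array([[[480.0, 370.0]]], np.float32), H)
```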
- the correction service sends a signaling for informing the algorithm service to start the keystone correction process to the process communication framework (HSP CORE), and the process communication framework further sends a service capability call signaling to the algorithm service to obtain the capability corresponding algorithm;
- the algorithm service obtains and executes the camera and picture algorithm processing service and the obstacle avoidance algorithm service, and sends them to the process communication framework in the form of signaling; in some embodiments, the process communication framework executes the above algorithms and feeds back the execution results to the Calibration service, the execution results may include successful photographing and successful obstacle avoidance.
- if the execution fails, the user interface will be controlled to display an error return prompt, and the user interface will be controlled to display the keystone correction and autofocus charts again.
- the projection device can identify the screen (curtain), and use projective transformation to correct the projection picture so that it is displayed inside the screen, so as to achieve the effect of aligning with the edges of the screen.
- the projection device can use the time-of-flight (ToF) sensor to obtain the distance between the optical machine and the projection surface, find the best image distance in a preset mapping table based on this distance, and use an image algorithm to evaluate the definition of the projection picture, on the basis of which the image distance can be fine-tuned.
- the automatic keystone correction signaling sent by the correction service to the process communication framework may include other function configuration instructions, for example, it may include control instructions such as whether to implement synchronous obstacle avoidance, whether to enter a scene, and so on.
- the process communication framework sends the service capability call signaling to the algorithm service, so that the algorithm service acquires and executes the autofocus algorithm to adjust the viewing distance between the device and the screen; in some embodiments, after the autofocus algorithm has realized the corresponding function, the algorithm service may also obtain and execute an automatic screen-entry algorithm, which may include the keystone correction algorithm.
- after the projection device automatically enters the screen, the algorithm service can set the eight position coordinates between the projection device and the screen; the viewing distance between the projection device and the screen is then adjusted through the autofocus algorithm again; finally, the correction result is fed back to the correction service, and the user interface is controlled to display the correction result, as shown in FIG. 19B.
- when using the autofocus algorithm, the projection device first obtains the current object distance through its configured laser ranging, and calculates the initial focal length and search range from it; then the projection device drives the camera to take pictures, and uses the corresponding algorithm to perform definition evaluation.
- the projection device searches for the best possible focal length based on the search algorithm, then repeats the above steps of photographing and sharpness evaluation, and finally finds the optimal focal length through sharpness comparison to complete autofocus.
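A minimal sketch of this photograph-evaluate-search loop, using the variance of the Laplacian as the definition (sharpness) metric and a plain sweep instead of the patent's specific search algorithm; `set_focus` and `capture` stand in for the device's motor and camera hooks:

```python
import cv2
import numpy as np

def sharpness(image_bgr):
    """Definition evaluation: variance of the Laplacian, a common focus metric."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def autofocus(set_focus, capture, initial, search_range, step):
    """Sweep focus positions around the laser-ranging estimate, photograph at
    each one, and keep the position with the highest definition score."""
    best_pos, best_score = initial, -1.0
    for pos in np.arange(initial - search_range, initial + search_range, step):
        set_focus(pos)                    # drive the motor to this focus position
        score = sharpness(capture())      # photograph and evaluate definition
        if score > best_score:
            best_pos, best_score = pos, score
    set_focus(best_pos)                   # settle on the sharpest position found
    return best_pos
```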
- Step 19C01: the projection device is started. Step 19C02: the user moves the device, and the projection device automatically completes correction and refocuses. Step 19C03: the controller detects whether the autofocus function is enabled; when the autofocus function is not enabled, the controller ends the autofocus service. Step 19C04: when the autofocus function is turned on, the projection device obtains the detection distance of the time-of-flight (TOF) sensor through the middleware for calculation;
- Step 19C05: the controller queries the preset mapping table according to the obtained distance to obtain the approximate focal length of the projection device. Step 19C06: the middleware sets the acquired focal length into the optical engine of the projection device;
- Step 19C07: after the optical machine emits laser light at the above focal length, the camera executes the photographing instruction. Step 19C08: the controller judges whether the projection device is in focus according to the obtained photographing result and the evaluation function; if the judgment result meets the preset completion condition, the autofocus process is ended. Step 19C09: if the judgment result does not meet the preset completion condition, the middleware fine-tunes the focal length parameter of the projection device's optical machine, and the steps of photographing and sharpness evaluation are repeated until the optimal focal length is found through sharpness comparison and autofocus is completed, as shown in FIG. 19C.
- the projection device provided by the present application can implement a display correction function through a keystone correction algorithm.
- first, two sets of external parameters, between the two cameras and between the camera and the optical machine, can be obtained, that is, the rotation and translation matrices; then a specific checkerboard chart is played through the optical machine of the projection device, and the depth values of the projected checkerboard corner points are calculated, for example by solving the xyz coordinate values through the translation relationship between the binocular cameras and the principle of similar triangles; then the projection surface is fitted based on the xyz coordinates, and the rotation relationship and translation relationship with respect to the camera coordinate system are obtained, which may specifically include the pitch relationship (Pitch) and the yaw relationship (Yaw).
- the Roll parameter value can be obtained through the gyroscope configured on the projection device so as to assemble the complete rotation matrix, and finally the external parameters from the projection plane in the world coordinate system to the optical-mechanical coordinate system are calculated.
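A sketch of the plane-fitting step: fit z = ax + by + c to the checkerboard corner points by least squares and derive pitch and yaw from the plane normal. The sign conventions and axis assignments are assumptions:

```python
import numpy as np

def fit_plane_angles(points_xyz):
    """Least-squares fit of z = a*x + b*y + c to the checkerboard corner points,
    then pitch and yaw of the projection surface from the plane normal."""
    P = np.asarray(points_xyz, dtype=float)
    A = np.column_stack([P[:, 0], P[:, 1], np.ones(len(P))])
    (a, b, c), *_ = np.linalg.lstsq(A, P[:, 2], rcond=None)
    n = np.array([a, b, -1.0])
    n /= np.linalg.norm(n)                        # unit normal of the fitted plane
    pitch = np.degrees(np.arctan2(n[1], n[2]))    # rotation about the X axis
    yaw = np.degrees(np.arctan2(n[0], n[2]))      # rotation about the Y axis
    return pitch, yaw
```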
- Step 19D01: the projection device controller obtains the depth values of the points corresponding to the pixels of the photo, that is, the coordinates of the projection points in the camera coordinate system;
- Step 19D02: through the depth values, the middleware obtains the relationship between the optical machine coordinate system and the camera coordinate system;
- Step 19D03: the controller calculates the coordinate values of the projection points in the optical-mechanical coordinate system;
- Step 19D04: the angle between the projection surface and the optical-mechanical plane is obtained based on the plane fitted from the coordinate values;
- Step 19D05: the corresponding coordinates of the projection points in the world coordinate system of the projection surface are obtained according to the angle relationship;
- Step 19D06: a homography matrix can be calculated according to the coordinates of the chart in the optical-mechanical coordinate system and the coordinates of the corresponding points on the projection surface;
- Step 19D07: the controller judges whether an obstacle exists based on the above acquired data;
- Step 19D08: when no obstacle exists, rectangular coordinates are randomly selected on the projection surface in the world coordinate system, and the area to be projected by the optical machine is calculated according to the homography relationship;
- Step 19D09: when an obstacle exists, the controller can obtain feature points, for example of the two-dimensional code;
- Step 19D10: the coordinates of the two-dimensional code on the prefabricated chart card are obtained;
- Step 19D11: the homography relationship between the camera photo and the chart card is obtained;
- Step 19D12: the obtained coordinates of the obstacle are transformed into the chart, so as to obtain the coordinates of the chart area blocked by the obstacle;
- Step 19D13: according to the coordinates of the occlusion area of the obstacle in the optical machine coordinate system, the coordinates of the occlusion area on the projection surface are obtained through the homography matrix transformation;
- Step 19D14: rectangular coordinates are randomly selected on the projection surface in the world coordinate system while avoiding the obstacle, and the area to be projected by the optical machine is calculated according to the homography relationship.
- the obstacle avoidance algorithm uses the OpenCV algorithm library to complete the contour extraction of foreign objects during the rectangle-selection step of the keystone correction algorithm flow, and avoids the obstacle when selecting the rectangle, so as to realize the projection obstacle avoidance function.
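A minimal sketch of the OpenCV contour extraction mentioned above, assuming a binary mask of candidate obstacle pixels has already been produced by the earlier preprocessing:

```python
import cv2

def obstacle_contours(mask):
    """Extract obstacle outlines from a binary mask of candidate obstacle pixels,
    returning one bounding box (x, y, w, h) per detected foreign object."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

# Candidate projection rectangles that overlap any returned box are rejected.
```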
- Step 19E01: the middleware obtains the two-dimensional code chart captured by the camera;
- Step 19E02: the feature points of the two-dimensional code are identified, and their coordinates in the camera coordinate system are obtained;
- Step 19E03: the controller further acquires the coordinates of the preset chart in the optical-mechanical coordinate system;
- Step 19E04: the homography relationship between the camera plane and the optical-mechanical plane is solved;
- Step 19E05: based on the above homography, the controller identifies the coordinates of the four vertices of the curtain captured by the camera;
- Step 19E06: according to the homography matrix, the range of the chart to be projected by the optical machine for entering the screen is obtained.
- the screen-entry algorithm is based on the OpenCV algorithm library, which can identify and extract the largest black closed rectangular outline and judge whether it has a 16:9 aspect ratio; a specific chart is projected and photographed with the camera, and multiple corner points are extracted from the photos; the corner points are used to calculate the homography between the projection surface (curtain) and the optical-machine chart, and the four vertices of the screen are converted into the optical-machine pixel coordinate system through the homography, so that the chart projected by the optical machine is converted to within the four vertices of the screen.
- the telephoto micro-projection TV has the characteristic of flexible movement, and the projection picture may be distorted after each displacement; for the above problems, the projection device and the geometric-correction-based display control method provided by this application can automatically complete the correction, including the realization of functions such as automatic keystone correction, automatic screen entry, automatic obstacle avoidance, automatic focusing and anti-eye protection.
- FIG. 20 is a schematic diagram of a lens structure of a projection device in another embodiment of the present application.
- the lens 300 of the projection device may further include an optical assembly 310 and a driving motor 320 .
- the optical component 310 is a lens group composed of one or more lenses, which can refract the light emitted by the optical machine 200 so that the light emitted by the optical machine 200 can be transmitted to the projection surface 400 to form a transmitted content image.
- the optical assembly 310 may include a lens barrel and a plurality of lenses disposed in the lens barrel. According to whether the lens position can be moved, the lenses in the optical assembly 310 can be divided into a movable lens 311 and a fixed lens 312; by changing the position of the movable lens 311 and thus the distance between the movable lens 311 and the fixed lens 312, the overall focal length of the optical assembly 310 is changed. Therefore, the driving motor 320, connected to the movable lens 311 in the optical assembly 310, can drive the movable lens 311 to change its position, so as to realize the autofocus function.
- the focusing process described in some embodiments of the present application refers to changing the position of the movable lens 311 by means of the driving motor 320, thereby adjusting the distance between the movable lens 311 and the fixed lens 312, that is, adjusting the position of the image plane. According to the imaging principle of the lens combination in the optical assembly 310, adjusting the focal length is actually adjusting the image distance; but in terms of the overall structure of the optical assembly 310, adjusting the position of the movable lens 311 is equivalent to adjusting the overall focal length of the optical assembly 310.
- the lens of the projection device needs to be adjusted to different focal lengths so as to transmit a clear image on the projection surface 400 .
- the distance between the projection device and the projection surface 400 varies with where the user places the device, so different focal lengths are required. Therefore, in order to adapt to different usage scenarios, the projection device needs to adjust the focal length of the optical assembly 310.
- Fig. 21 is a schematic structural diagram of a distance sensor and a camera in some embodiments.
- the projection device can also have a built-in or external camera 700 , and the camera 700 can take images of images projected by the projection device to obtain projection content images.
- the projection device checks the definition of the projected content image to determine whether the current lens focal length is appropriate, and adjusts the focal length if it is not appropriate.
- the projection device can continuously adjust the lens position and take pictures, and find the in-focus position by comparing the definition of the pictures taken at successive positions, so as to adjust the movable lens 311 in the optical assembly to a suitable position.
- the controller 500 may first control the driving motor 320 to gradually move the movable lens 311 from the focus start position to the focus end position, and continuously obtain projected content images through the camera 700 during this period. Then, by performing definition detection on the multiple projected content images, the position with the highest definition is determined; finally, the driving motor 320 is controlled to adjust the movable lens 311 from the focus end position back to the position with the highest definition, and autofocus is completed.
- the controller of the projection device may respond to the auto-focusing instruction and acquire the separation distance through the distance sensor 600 .
- the distance sensor 600 may be a sensor device based on the time of flight (Time of Flight, TOF) principle, such as laser radar and infrared radar capable of detecting the target distance.
- the distance sensor 600 can be set at the position of the optical machine 200, including the signal transmitting end and the receiving end.
- the transmitting end of the distance sensor 600 can transmit a wireless signal toward the projection surface 400; after the wireless signal reaches the projection surface 400, it is reflected back to the receiving end of the distance sensor 600.
- the flight time of the signal can then be calculated from the time at which the transmitting end sends the signal and the time at which the receiving end receives it; combined with the propagation speed, the actual flight distance of the wireless signal is obtained, from which the distance between the projection surface 400 and the optical machine 200 can be calculated.
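The time-of-flight computation reduces to halving the round-trip distance; a one-line sketch, assuming an electromagnetic signal travelling at the speed of light:

```python
C = 299_792_458.0   # signal propagation speed (m/s), assuming a light-based sensor

def tof_distance(t_send, t_receive):
    """Round-trip time of flight halved gives the sensor-to-surface distance."""
    return C * (t_receive - t_send) / 2
```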
- FIG. 22 is a schematic diagram of image correction triggered by a projection device in an embodiment of the present application.
- the controller obtains the distance data at the time of power-on and the distance data at the time of historical power-off.
- the power-on time is the start time of the current working process of the projection device
- the historical power-off time is the end time of the last working process of the projection device. The distance change data of the power-on time relative to the historical power-off time is then calculated.
- if the distance change data is greater than the preset distance threshold, the controller automatically triggers the image correction process.
- that is to say, the historical power-off time is the most recent power-off time before the current power-on time. In this way, by comparing the distance data at the current power-on time with the distance data at the most recent power-off time before it, it is ensured that powering the projection device off and on again without moving it does not automatically trigger the image correction process.
- the controller is also configured to: parse the power-on distance value from the distance data at the power-on time, and parse the historical power-off distance value from the distance data at the historical power-off time. If both the power-on distance value and the historical power-off distance value are within the effective distance range, the process of calculating the distance change data is performed; the effective distance range is set according to the installation positions of the distance sensor 600 and the projection surface 400.
- the effective distance range is 0-3.5 meters.
- the power-on distance value is 3 meters
- the historical power-off distance value is 2.4 meters.
- the power-on distance value of 3 meters is within the effective distance range, it means that the power-on distance value is valid.
- the historical shutdown distance value of 2.4 meters is also within the valid distance range, indicating that the historical shutdown distance value is valid.
- if either distance value is not within the effective distance range, the controller will re-trigger the process of acquiring distance data.
- the controller can perform the process of obtaining multiple sets of distance data at different times, for example acquiring in real time the distance data at the current power-on time, together with the distance data before the historical power-off time and/or the distance data at the moment when image correction was last completed. Based on multiple sets of distance data, it is then easier to determine accurately whether the projection device has actually changed its position.
- in the step of calculating the distance change data, the controller is further configured to: take the difference between the historical power-off distance value and the power-on distance value, and take the absolute value of the difference to generate a distance difference value; calculate the ratio of the distance difference value to the power-on distance value; and, if the ratio is greater than the preset ratio threshold, perform the subsequent process of projecting the calibration card to the projection surface 400.
- the preset ratio threshold is 0.05.
- the difference is 0.6 meters.
- since the ratio 0.6/3 = 0.2 is greater than the preset ratio threshold of 0.05, it indicates that the position of the projection device at the current power-on time has changed relative to its position at the historical power-off time, and the controller automatically triggers the image correction process.
- the embodiment of the present application does not place numerical limitations on the preset ratio threshold; those skilled in the art can determine its value according to actual needs, such as 0.01, 0.02, 0.03, etc., and these designs do not go beyond the protection scope of the embodiments of the present application.
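Putting the validity check and ratio test together, a sketch of the trigger decision (the effective range and threshold values are the ones used in the embodiments above; the function name is illustrative):

```python
RATIO_THRESHOLD = 0.05          # preset ratio threshold from the embodiment
VALID_RANGE = (0.0, 3.5)        # effective distance range in meters

def should_trigger_correction(power_on_dist, last_off_dist):
    """Trigger image correction only when both readings are valid and the
    relative distance change exceeds the preset ratio threshold."""
    lo, hi = VALID_RANGE
    if not (lo < power_on_dist <= hi and lo < last_off_dist <= hi):
        return False            # invalid reading: re-acquire the distance instead
    diff = abs(last_off_dist - power_on_dist)
    return diff / power_on_dist > RATIO_THRESHOLD

# Embodiment example: 3 m at power-on vs 2.4 m at the last power-off gives
# |2.4 - 3| / 3 = 0.2 > 0.05, so correction is triggered.
print(should_trigger_correction(3.0, 2.4))   # True
```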
- the controller is configured to: acquire the distance data at the current moment, and parse the current distance value from the distance data. If the current distance value is within the valid distance range, the data label corresponding to the distance data at the current moment is assigned as the first state identifier, the first state identifier being used to indicate that the current distance value is a valid distance. If the current distance value is not within the valid distance range, the data label is assigned as the second state identifier, the second state identifier being used to indicate that the current distance value is an invalid distance.
- the controller can acquire distance data in real time and assign values to data labels corresponding to the distance data.
- the embodiment of the present application does not limit the time of assigning data tags.
- the projection device can assign data tags when it is turned on and automatically completes image correction, or it can also assign data tags when it is turned on and receives image correction instructions input by users. In this way, the controller can directly determine whether the distance data is within the valid distance range according to the value corresponding to the data label.
- during image correction, the projection device needs to be in a static state. Therefore, through the acceleration sensor configured on the projection device, the controller receives monitoring data to determine whether the projection device has moved. After determining that the projection device has been moved, the controller can generate an image correction instruction and enable the image correction function.
- the projection device acquires acceleration data in real time based on an acceleration sensor (gyroscope sensor), and determines whether the projection device has moved according to changes in the acceleration data. If the projection device has been moved, the image correction function is activated only once the projection device is in a static state again.
- the controller is configured to: obtain the acceleration data at the power-on time and the acceleration data at the historical power-off time; calculate the acceleration change data according to the acceleration data at the power-on time and the acceleration data at the historical power-off time; and, if the acceleration change data is less than the preset acceleration threshold, perform the step of projecting the calibration card to the projection surface 400.
- the power-on time is the start time of the current working process of the projection device
- the historical power-off time is the end time of the last working process of the projection device.
- the controller is configured to: parse, from the acceleration change data, the movement angles of the projection device in the coordinate directions of the three axes; if the movement angles in the coordinate directions of the three axes are all less than the preset acceleration threshold, determine that the acceleration change data is less than the preset acceleration threshold.
- the controller monitors in real time and receives the acceleration data collected by the gyroscope sensor, that is, the acceleration sensor, at the time of power-on and historical power-off time.
- the acceleration data includes acceleration data on three axis coordinates (X, Y and Z axes). Since the acceleration data represent the acceleration data collected in the coordinate directions of the three axes, when the acceleration data on one of the axes changes, it means that the projection device is displaced in the coordinate direction of the axis. Therefore, whether the projection device moves can be determined by judging the acceleration data in the directions of the three axes. Next, convert the acceleration data in the coordinate directions of the three axes into angular velocity data, and determine the movement angle of the projection device in the coordinate directions of the three axes through the angular velocity data.
- the controller calculates the movement angle of the projection device in the coordinate directions of the three axes by acquiring the acceleration data collected at the power-on time and the acceleration data collected at the historical power-off time, and the acceleration change data at the power-on time relative to the historical power-off time. If the movement angles in the coordinate directions of the three axes are all less than the preset acceleration threshold, it is determined that the projection device is in a static state, and the optical machine 200 can be controlled to project the calibration chart onto the projection surface 400 to trigger the image calibration process.
- the embodiment of the present application does not place a numerical limitation on the preset acceleration threshold; those skilled in the art can determine its value according to actual needs, such as 0.1, 0.2, 0.3, etc., and these designs do not go beyond the protection scope of the embodiments of the present application.
- the acceleration data is collected by the gyroscope sensor in real time and sent to the controller. Furthermore, the process of acquiring multiple sets of acceleration data at different times may be performed multiple times, so as to more accurately determine whether the projection device is truly in a static state.
- the embodiment of this application does not specifically limit the number of execution times of the acquisition and judgment process, and can be set multiple times according to the device status corresponding to the specific projection device, and all of them are within the protection scope of the embodiment of the application.
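A sketch of the static-state check described above, comparing the three-axis movement angles derived from the gyroscope data against the threshold; the threshold value and function name are illustrative:

```python
import numpy as np

ANGLE_THRESHOLD = 0.2   # illustrative stand-in for the preset acceleration threshold

def device_is_static(angles_power_on, angles_power_off):
    """Compare the three-axis movement angles derived from the gyroscope data at
    power-on and at the last power-off; the device is treated as static only when
    the change on every axis stays below the threshold."""
    delta = np.abs(np.asarray(angles_power_on) - np.asarray(angles_power_off))
    return bool(np.all(delta < ANGLE_THRESHOLD))

# if device_is_static(angles_now, angles_last_off):
#     project the calibration chart and start the image correction process
```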
- the process of image correction specifically includes: the controller controls the optical machine 200 to project the calibration card onto the projection surface 400 .
- the area to be projected on the projection surface 400 is determined according to the image of the calibration card captured by the camera 700 .
- the process of determining the area to be projected is the process of calibrating the projection area of the optical machine 200 .
- finally, the playback content is projected to the area to be projected. In some embodiments, the calibration card includes a calibration white card and a calibration chart card.
- the controller in the step of correcting the area to be projected by the optical machine 200 according to the calibration card image captured by the camera 700, is further configured to: determine the image position of the calibration feature point in the calibration card image, and according to the image The position acquires a projection relationship, wherein the projection relationship is a mapping relationship between the playback content projected by the optical machine 200 and the projection surface 400 . Finally, the area to be projected is corrected according to the projection relationship.
- the controller is further configured to: acquire the camera coordinates of the corrected feature points in the camera coordinate system according to the image position. Convert the camera coordinates to the optical-mechanical coordinates of the corrected feature points in the optical-mechanical coordinate system.
- the homography matrix of the projection surface 400 in the optical-mechanical coordinate system is obtained according to the optical-mechanical coordinates, and the homography matrix is used to represent the projection relationship.
- the controller can control the camera 700 to photograph the calibration chart to obtain a calibration card image to obtain the positions of the calibration feature points.
- the controller can perform image recognition processing on the calibration card image to obtain the first position of the calibration feature point in the calibration image.
- the first position refers to the coordinate information of the calibration feature point in the image coordinate system corresponding to the calibration card image.
- the image coordinate system refers to the coordinate system with the center of the image as the coordinate origin and the X and Y axes parallel to the two sides of the image.
- the image coordinate system in the embodiment of the present application can be set as follows: for the projection area preset by the user, the center point of the projection area can be set as the origin, the horizontal direction is the X axis, and the vertical direction is the Y axis.
- that is, the image coordinate system can be set in advance according to the preset projection area.
- the coordinate information of the corrected feature points in the corrected image can be determined according to the image coordinate system.
- the camera coordinate system is specifically: a rectangular coordinate system established in space with the optical center of the camera 700 as the origin, the optical axis as the Z axis, and the plane parallel to the projection plane as the XOY plane.
- the controller may convert the coordinate information into the coordinates of the calibration feature point in the optical-mechanical coordinate system.
- the projection device can obtain the angle relationship between the first correction coordinates and the second correction coordinates through a preset correction algorithm.
- the projection relationship refers to the projection relationship in which the projection device projects an image onto the projection surface, specifically, the mapping relationship between the playback content of the projection device's optical-mechanical projection and the projection surface.
- the optical-mechanical coordinate system is specifically: a rectangular coordinate system established in space with the optical center of the optical machine as the origin, the optical axis as the Z axis, and the plane parallel to the projection plane as the XOY plane. It should be noted that the optical-mechanical coordinate system and the camera coordinate system can be converted into each other, so the coordinates of the corrected feature points in the camera coordinate system can be converted into coordinates in the optical-mechanical coordinate system.
- the projection surface equation of the projection surface in the optical-mechanical coordinate system can be determined.
- the controller can obtain the first positions of at least three calibration feature points in the calibration card image, and determine their coordinate information in the camera coordinate system according to these first positions. Further, the coordinate information in the camera coordinate system can be converted into coordinate information in the optical machine coordinate system.
- the controller can fit the coordinates of these calibration feature points, so as to obtain the homography matrix of the projection surface in the optical-mechanical coordinate system.
- the homography matrix that is, the transformation matrix, can represent the mapping relationship between the playback content projected by the optical machine 200 and the projection surface.
- the camera 700 captures the image in real time. At the same time, if a person is detected entering the image, the projection device will automatically trigger the eye protection mode. In this way, in order to reduce the damage of the light source to human eyes, the projection device can change the brightness of the image by adjusting the brightness of the light source, thereby achieving the effect of protecting human eyes.
- the distance data at the power-on time and the distance data at the historical power-off time are acquired through the controller. And according to the distance data at the power-on time and the distance data at the historical power-off time, the distance change data at the power-on time relative to the historical power-off time is calculated. If the distance change data is greater than the preset distance threshold, control the optical machine 200 to project the calibration card image to the projection surface, so as to correct the area to be projected by the optical machine 200 according to the calibration card image captured by the camera 700, and the area to be projected is used to present playback content.
- in this way, if the projection device has not been moved, the correction process will not be triggered again when the user powers the device on again. This avoids the situation in which the projection device automatically triggers the correction process every time the user turns it on, regardless of whether the environment has changed, thereby improving the user experience.
- the present application also provides a trigger correction method, which is applied to a projection device including a distance sensor 600, an optical machine 200, a camera 700 and a controller. The method specifically includes the following steps: obtaining the distance data at the power-on time and the distance data at the historical power-off time; calculating, according to the distance data at the power-on time and the distance data at the historical power-off time, the distance change data of the power-on time relative to the historical power-off time; and, if the distance change data is greater than the preset distance threshold, controlling the optical machine to project the calibration card image to the projection surface, so as to correct the area to be projected of the optical machine according to the calibration card image captured by the camera, the area to be projected being used to present the playback content.
- the above method includes: parsing the power-on distance value in the distance data at the power-on moment, and parsing the historical power-off distance value in the distance data at the historical power-off time; if the power-on distance value and the historical power-off distance value are within the effective distance range , the process of calculating the distance change data is performed; the effective distance range is set according to the setting positions of the distance sensor and the projection surface.
- the above method includes: in the step of calculating the distance change data, taking the difference between the historical power-off distance value and the power-on distance value, and taking the absolute value of the difference to generate a distance difference value; calculating the ratio of the distance difference value to the power-on distance value; and, if the ratio is greater than the preset ratio threshold, performing the process of projecting the calibration card to the projection surface.
- the above method includes: obtaining the distance data at the current moment; parsing the current distance value from the distance data; if the current distance value is within the effective distance range, assigning the data label corresponding to the distance data at the current moment as the first state identifier, the first state identifier being used to indicate that the current distance value is a valid distance; if the current distance value is not within the effective distance range, assigning the data label as the second state identifier, the second state identifier being used to indicate that the current distance value is an invalid distance.
- in some embodiments, the projection device also includes an acceleration sensor configured to collect acceleration data; the method includes: before the step of controlling the optical machine to project the correction card to the projection surface, obtaining the acceleration data at the power-on time and the acceleration data at the historical power-off time; calculating the acceleration change data according to the acceleration data at the power-on time and the acceleration data at the historical power-off time; and, if the acceleration change data is less than the preset acceleration threshold, performing the step of projecting the calibration card to the projection surface.
- the above method includes: parsing, from the acceleration change data, the movement angles of the projection device along the three axis directions; if the movement angles along all three axis directions are less than the preset acceleration threshold, determining that the acceleration change data is less than the preset acceleration threshold.
- the correction card contains correction feature points; the above method includes: in the step of correcting the area to be projected of the optical machine according to the correction-card image captured by the camera, determining the image positions of the correction feature points in the correction-card image and obtaining the projection relationship according to the image positions, where the projection relationship is the mapping relationship between the playback content projected by the optical machine and the projection surface; and correcting the area to be projected according to the projection relationship.
- the above method includes: in the step of obtaining the projection relationship according to the image positions, obtaining the camera coordinates of the correction feature points in the camera coordinate system according to the image positions; converting the camera coordinates into optical-machine coordinates of the correction feature points in the optical-machine coordinate system; and obtaining, from the optical-machine coordinates, the homography matrix of the projection surface in the optical-machine coordinate system, the homography matrix being used to represent the projection relationship.
Abstract
The present application provides a projection device and a correction method. The projection device includes an optical machine, a camera and a controller. When an image correction instruction is obtained, the optical machine of the projection device projects a correction chart containing correction feature points onto the projection surface, and the camera photographs the correction chart to obtain a correction image. The projection device can determine the first positions of the correction feature points in the correction image and obtain the projection relationship according to the first positions. The projection relationship is the mapping relationship between the playback content projected by the optical machine and the projection surface. According to the projection relationship, the projection device determines the target position information of the area to be projected in the projection surface and controls the optical machine to project the playback content to the area to be projected, realizing image correction. Since the projection device can determine the mapping relationship between the optical machine and the projection surface from the correction chart, it can accurately locate the area to be projected and therefore accurately correct the projected image, which improves the user experience.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese patent application No. 202111355866.0 filed on November 16, 2021, No. 202210325721.4 filed on March 29, 2022, and No. 202210570426.5 filed on May 24, 2022, the entire contents of which are incorporated herein by reference.
The present application relates to the technical field of projection devices, and in particular to a projection device and a correction method.
A projection device is a display device that can project images or video onto a screen. It can project laser light of specific colors onto the screen through the refraction of an optical lens assembly to form a specific image.
When a projection device is used, if its position shifts or it is not perpendicular to the projection surface, the device cannot project the image entirely onto the preset projection area, and the projected picture becomes trapezoidal, which gives the user a poor experience. When an existing projection device corrects the image, the user can control the device to adjust the projection angle, thereby controlling the display position and shape of the projected image and realizing image correction. However, this approach requires the user to select the adjustment direction manually; the process is cumbersome, cannot correct the projected image accurately, and gives the user a poor experience.
SUMMARY
In a first aspect, the present application provides a projection device, including: an optical machine configured to project playback content onto a projection surface; a camera configured to photograph the image displayed on the projection surface; and a controller configured to: in response to an image correction instruction, control the optical machine to project a correction chart onto the projection surface and control the camera to photograph the correction chart to obtain a correction image, the correction chart containing correction feature points; determine the first positions of the correction feature points in the correction image and obtain a projection relationship according to the first positions, the projection relationship being the mapping relationship between the playback content projected by the optical machine and the projection surface; determine the target position information of the area to be projected in the projection surface according to the projection relationship; and, according to the target position information, control the optical machine to project the playback content to the area to be projected.
In a second aspect, the present application provides an image correction method for a projection device, the method including: in response to an image correction instruction, controlling the optical machine to project a correction chart onto the projection surface and controlling the camera to photograph the correction chart to obtain a correction image, the correction chart containing correction feature points; determining the first positions of the correction feature points in the correction image and obtaining a projection relationship according to the first positions, the projection relationship being the mapping relationship between the playback content projected by the optical machine and the projection surface; determining the target position information of the area to be projected in the projection surface according to the projection relationship; and, according to the target position information, controlling the optical machine to project the playback content to the area to be projected.
FIG. 1 is a schematic diagram of the placement of a projection device in some embodiments;
FIG. 2 is a schematic diagram of the optical path of a projection device in some embodiments;
FIG. 3 is a schematic diagram of the circuit architecture of a projection device in some embodiments;
FIG. 4 is a schematic structural diagram of a projection device in some embodiments;
FIG. 5 is a schematic diagram of the circuit structure of a projection device in some embodiments;
FIG. 6 is a schematic diagram of a projection device whose position has changed in some embodiments;
FIG. 7 is a schematic diagram of the system framework with which a projection device implements display control in some embodiments;
FIG. 8 is an interaction flowchart of the components of a projection device in some embodiments;
FIG. 9 is a schematic diagram of a correction chart in some embodiments;
FIG. 10 is a schematic diagram of a correction chart in some embodiments;
FIG. 11 is a schematic flowchart of obtaining the projection relationship in some embodiments;
FIG. 12 is a schematic diagram of a correction chart in some embodiments;
FIG. 13 is a schematic diagram before and after image correction in some embodiments;
FIG. 14 is a schematic diagram of the HSV values of each color in some embodiments;
FIG. 15 is a schematic diagram of a gray-value curve in some embodiments;
FIG. 16 is a schematic diagram of a gray-value curve in some embodiments;
FIG. 17 is a schematic diagram of a gray-value curve in some embodiments;
FIG. 18 is a schematic flowchart of an embodiment of the image correction method;
FIG. 19A is a signaling sequence diagram of a projection device implementing the eye-protection function in another embodiment of the present application;
FIG. 19B is a signaling sequence diagram of a projection device implementing the display correction function in another embodiment of the present application;
FIG. 19C is a schematic flowchart of a projection device implementing the autofocus algorithm in another embodiment of the present application;
FIG. 19D is a schematic flowchart of a projection device implementing the keystone correction and obstacle avoidance algorithms in another embodiment of the present application;
FIG. 19E is a schematic flowchart of a projection device implementing the screen-entry algorithm in another embodiment of the present application;
FIG. 19F is a schematic flowchart of a projection device implementing the eye-protection algorithm in another embodiment of the present application;
FIG. 20 is a schematic diagram of the lens structure of a projection device in an embodiment of the present application;
FIG. 21 is a schematic diagram of the distance sensor and camera structure of a projection device in an embodiment of the present application;
FIG. 22 is a schematic diagram of a projection device triggering image correction based on distance data in an embodiment of the present application;
FIG. 23 is a schematic diagram of a projection device triggering image correction based on acceleration data in an embodiment of the present application.
To make the purpose, implementations and advantages of the present application clearer, the exemplary implementations of the present application are described below clearly and completely with reference to the drawings of the exemplary embodiments. Obviously, the described exemplary embodiments are only part of the embodiments of the present application, not all of them.
The embodiments of the present application can be applied to various types of projection devices. The following description takes a projection device as the example.
A projection device is a device that can project images or video onto a screen. Through different interfaces it can connect to a computer, a broadcasting network, the Internet, a VCD (Video Compact Disc) player, a DVD (Digital Versatile Disc) player, a game console, a DV and the like to play the corresponding video signal. Projection devices are widely used in homes, offices, schools and entertainment venues.
FIG. 1 is a schematic diagram of the placement of a projection device in some embodiments. In some embodiments, the projection device provided by the present application includes a projection screen 1 and a projecting apparatus 2. The projection screen 1 is fixed at a first position, and the projecting apparatus 2 is placed at a second position so that the picture it projects coincides with the projection screen 1.
FIG. 2 is a schematic diagram of the optical path of a projection device in some embodiments.
The embodiments of the present application provide a projection device including a laser light source 100, an optical machine 200, a lens 300 and a projection medium 400. The laser light source 100 provides illumination for the optical machine 200; the optical machine 200 modulates the source beam and outputs it to the lens 300 for imaging, projecting it onto the projection medium 400 to form the projected picture.
In some embodiments, the laser light source 100 of the projection device includes a laser assembly and an optical lens assembly. The beam emitted by the laser assembly passes through the optical lens assembly to provide illumination for the optical machine. The optical lens assembly, for example, requires a high level of environmental cleanliness and an airtight-grade seal, whereas the chamber housing the laser assembly can be sealed with a lower, dust-proof-grade seal to reduce sealing costs.
In some embodiments, the optical machine 200 of the projection device may be implemented to include a blue optical machine, a green optical machine and a red optical machine, and may also include a heat-dissipation system, a circuit control system and the like. It should be noted that in some embodiments the light-emitting component of the projection device may also be implemented with an LED light source.
In some embodiments, the present application provides a projection device including a three-color optical machine and a controller. The three-color optical machine is used to modulate and generate the laser light of the pixels of the user interface, and includes a blue optical machine, a green optical machine and a red optical machine. The controller is configured to: obtain the average gray value of the user interface; and, upon determining that the average gray value is greater than a first threshold and has lasted longer than a time threshold, lower the operating current of the red optical machine by a preset gradient to reduce the heat generated by the three-color optical machine. It can be seen that by lowering the operating current of the red optical machine integrated in the three-color optical machine, overheating of the red optical machine, and thus of the three-color optical machine and the projection device, can be controlled.
The optical machine 200 may be implemented as a three-color optical machine integrating a blue optical machine, a green optical machine and a red optical machine.
The following describes the implementations of the present application with the optical machine 200 implemented as including a blue optical machine, a green optical machine and a red optical machine as the example.
In some embodiments, the optical system of the projection device consists of a light source part and an optical machine part. The light source part provides illumination for the optical machine; the optical machine part modulates the illumination beam provided by the light source and finally emits it through the lens to form the projected picture.
FIG. 3 is a schematic diagram of the circuit architecture of a projection device in some embodiments. In some embodiments, the projection device may include a display control circuit 10, a laser light source 20, at least one laser driver assembly 30 and at least one brightness sensor 40. The laser light source 20 may include at least one laser in one-to-one correspondence with the at least one laser driver assembly 30. Here, "at least one" means one or more, and "a plurality of" means two or more.
Based on this circuit architecture, the projection device can realize adaptive adjustment. For example, by arranging the brightness sensor 40 in the light output path of the laser light source 20, the brightness sensor 40 can detect a first brightness value of the laser light source and send it to the display control circuit 10.
The display control circuit 10 can obtain a second brightness value corresponding to the drive current of each laser, and, upon determining that the difference between the second brightness value and the first brightness value of a laser is greater than a difference threshold, determine that the laser has suffered a COD (catastrophic optical damage) failure. The display control circuit can then adjust the current control signal of the laser's driver assembly until the difference is less than or equal to the threshold, thereby eliminating the COD failure of, for example, the blue laser. The projection device can thus eliminate laser COD failures in time, reduce the laser damage rate, and improve the image display effect of the projection device.
In some embodiments, the laser light source 20 includes three lasers in one-to-one correspondence with the laser driver assemblies 30: a blue laser 201, a red laser 202 and a green laser 203, which emit blue, red and green laser light respectively. In some embodiments, the laser driver assembly 30 may be implemented to contain multiple sub-driver assemblies, each corresponding to a laser of a different color.
The display control circuit 10 is used to output primary-color enable signals and primary-color current control signals to the laser driver assemblies 30 to drive the lasers to emit light. It is connected to the laser driver assemblies 30 and outputs at least one enable signal in one-to-one correspondence with the three primary colors of each frame of a multi-frame display image, transmits each enable signal to the corresponding laser driver assembly 30, outputs at least one current control signal in one-to-one correspondence with the three primary colors of each frame, and likewise transmits each current control signal to the corresponding laser driver assembly 30. For example, the display control circuit 10 may be a microcontroller unit (MCU), also called a single-chip microcomputer, and the current control signal may be a pulse width modulation (PWM) signal.
In some embodiments, the blue laser 201, the red laser 202 and the green laser 203 are each connected to a laser driver assembly 30. The laser driver assembly 30 can, in response to the blue PWM signal and enable signal sent by the display control circuit 10, provide the corresponding drive current to the blue laser 201, which emits light when driven by that current.
The brightness sensor is arranged in the light output path of the laser light source, usually at one side of the path so as not to block it. As shown in FIG. 2, at least one brightness sensor 40 is arranged in the light output path of the laser light source 20. Each brightness sensor is connected to the display control circuit 10 and is used to detect the first brightness value of one laser and send it to the display control circuit 10.
In some embodiments, the display control circuit 10 can obtain from the stored correspondence the second brightness value for each laser's drive current, where the drive current is the laser's current actual operating current and the second brightness value is the brightness the laser can emit when operating normally under that drive current. The difference threshold may be a fixed value pre-stored in the display control circuit 10.
In some embodiments, when adjusting the current control signal of the driver assembly 30 corresponding to a laser, the display control circuit 10 may reduce the duty cycle of that current control signal, thereby reducing the laser's drive current.
In some embodiments, the brightness sensor 40 can detect the first brightness value of the blue laser 201 and send it to the display control circuit 10. The display control circuit 10 can obtain the drive current of the blue laser 201 and obtain the corresponding second brightness value from the current-to-brightness correspondence. It then checks whether the difference between the second and first brightness values is greater than the difference threshold; if so, the blue laser 201 has suffered a COD failure, and the display control circuit 10 reduces the current control signal of the driver assembly 30 corresponding to the blue laser 201. The circuit then obtains the first brightness value of the blue laser 201 and the second brightness value for its drive current again, and, if the difference is still greater than the threshold, reduces the current control signal again, looping until the difference is less than or equal to the threshold. The COD failure of the blue laser 201 is thus eliminated by reducing its drive current.
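The feedback loop above can be pictured as a simple duty-cycle walk-down. The following is a minimal, self-contained simulation of that loop; the linear brightness model and the 0.4 damage factor are invented for illustration and are not values from the application.

```python
def read_brightness(duty, damage_factor=0.4):
    # A COD-damaged laser emits far less light than its duty cycle predicts.
    return 1000.0 * duty * damage_factor

def regulate(duty=1.0, diff_threshold=50.0, step=0.05):
    while duty > 0:
        measured = read_brightness(duty)      # first brightness value (from the sensor)
        expected = 1000.0 * duty              # second brightness value (from the lookup table)
        if expected - measured <= diff_threshold:
            break                             # difference within the threshold: stop lowering
        duty = round(duty - step, 2)          # lower the PWM duty cycle, lowering the drive current
    return duty

print(regulate())  # settles at a reduced duty cycle (0.05 with these numbers)
```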
FIG. 4 is a schematic structural diagram of a projection device in some embodiments.
In some embodiments, the laser light source 20 in the projection device may include an independently arranged blue laser 201, red laser 202 and green laser 203, in which case the projection device may also be called a three-color projection device. The blue laser 201, red laser 202 and green laser 203 are all lightweight, MCL-packaged laser modules, whose small size facilitates a compact optical path layout.
In some embodiments, the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first to n-th interfaces for input/output, and a communication bus.
In some embodiments, the projection device may be equipped with a camera that operates in cooperation with the device to adjust and control the projection process. For example, the camera configured on the projection device may be implemented as a 3D camera or a binocular camera; when implemented as a binocular camera, it specifically includes a left camera and a right camera. The binocular camera can capture the image and playback content presented on the screen corresponding to the projection device, i.e. the projection surface, where that image or playback content is projected by the device's built-in optical machine.
After the projection device is moved, its projection angle and its distance to the projection surface change, which deforms the projected image; the projected image is displayed as a trapezoid or otherwise distorted. Based on the images captured by the camera, the controller of the projection device can realize automatic keystone correction by coupling the angle between the optical machine and the projection surface with the correct display of the projected image.
FIG. 5 is a schematic diagram of the circuit structure of a projection device in some embodiments. In some embodiments, the laser driver assembly 30 may include a drive circuit 301, a switch circuit 302 and an amplifier circuit 303. The drive circuit 301 may be a driver chip, and the switch circuit 302 may be a metal-oxide-semiconductor (MOS) transistor. The drive circuit 301 is connected to the switch circuit 302, the amplifier circuit 303 and the corresponding laser in the laser light source 20. Based on the current control signal sent by the display control circuit 10, the drive circuit 301 outputs the drive current to the corresponding laser in the laser light source 20 through its VOUT terminal, and transmits the received enable signal to the switch circuit 302 through its ENOUT terminal. The laser may include n sub-lasers LD1 to LDn connected in series, where n is a positive integer greater than 0.
The switch circuit 302 is connected in series in the laser's current path and turns the path on when the received enable signal is at the active level. The amplifier circuit 303 is connected to the detection node E in the current path of the laser light source 20 and to the display control circuit 10; it converts the detected drive current of the laser assembly into a drive voltage, amplifies the drive voltage and transmits the amplified drive voltage to the display control circuit 10. The display control circuit 10 also determines the laser's drive current from the amplified drive voltage and obtains the second brightness value corresponding to that drive current.
In some embodiments, the amplifier circuit 303 may include a first operational amplifier A1, a first resistor (also called the sampling power resistor) R1, a second resistor R2, a third resistor R3 and a fourth resistor R4.
In some embodiments, the display control circuit 10, the drive circuit 301, the switch circuit 302 and the amplifier circuit 303 form a closed loop to realize feedback regulation of the laser's drive current, so that the display control circuit 10 can adjust the drive current in time according to the difference between the laser's second and first brightness values, i.e. adjust the laser's actual emission brightness in time, avoid prolonged COD failures of the laser, and improve the accuracy of laser emission control. It should be noted that if the laser light source 20 includes one blue laser, one red laser and one green laser, the blue laser 201 may be arranged at position L1, the red laser 202 at position L2 and the green laser 203 at position L3.
The laser light at position L1 is transmitted once through the fourth dichroic plate 604 and reflected once by the fifth dichroic plate 605 before entering the first lens 901. The light efficiency at position L1 is P1 = Pt × Pf, where Pt is the transmittance of the dichroic plate and Pf is the reflectance of the dichroic plate or of the fifth reflector.
In some embodiments, among the three positions L1, L2 and L3, the laser light at L3 has the highest light efficiency and the laser light at L1 the lowest. The maximum optical power output by the blue laser 201 is Pb = 4.5 watts (W), by the red laser 202 Pr = 2.5 W, and by the green laser 203 Pg = 1.5 W; that is, the blue laser outputs the largest maximum optical power, the red laser the next largest, and the green laser the smallest. Therefore the green laser 203 is arranged at position L3, the red laser 202 at position L2 and the blue laser 201 at position L1; that is, the green laser 203 is placed in the optical path with the highest light efficiency, ensuring that the projection device obtains the highest overall light efficiency.
In some embodiments, after starting up, the projection device can directly enter the display interface of the last selected signal source, or a signal-source selection interface, where the signal source may be at least one of a preset video-on-demand program, an HDMI interface, a live TV interface and the like; after the user selects a different signal source, the projector can display the content obtained from that source. After the user turns on the projection device, it can project the content preset by the user onto the projection surface, which may be a wall or a screen, and the projected image is displayed on the projection surface for the user to watch. However, when the user places the projection device improperly, the device may not be perpendicular to the projection surface, so the projected image is displayed as a trapezoid or otherwise distorted. Alternatively, during use the user may bump the projection device, changing its position, so that its projection angle and distance to the projection surface change, deforming the projected image into a trapezoid or other distorted image. In the above cases, image correction needs to be performed on the projection device so that it can project an image of standard shape onto the projection surface for the user to watch. FIG. 6 is a schematic diagram of a projection device whose position has changed in some embodiments. The projection device starts at position A and can project onto a suitable projection area; the projection area is rectangular and usually covers the corresponding rectangular screen completely and accurately. After the device moves from position A to position B, a distorted projected image, e.g. a trapezoidal one, is often generated, so that the projected image no longer coincides with the rectangular screen.
In the related art, the user can manually control the projection device to adjust the projection angle and thus the display position and shape of the projected image, realizing image correction. However, this manual correction process is cumbersome and cannot correct the projected image accurately. The projection device may also perform automatic keystone correction based on a deep-learning neural network, coupling the angle between the optical machine and the projection surface with the correct display of the projected image. However, this correction method is slow and needs a large amount of scene data for model training to reach a given accuracy, making it unsuitable for the immediate, fast correction scenarios that arise on user devices.
The projection device provided by the embodiments of the present application can perform image correction quickly and accurately to solve the above problems.
In some embodiments, the projection device may include the optical machine, camera and controller described above. The optical machine is used to project the playback content preset by the user onto the projection surface, which may be a wall or a screen.
The camera, which is used to photograph the image displayed on the projection surface, may be a camera module. The camera module may include a lens assembly in which a photosensitive element and lenses are arranged. The lenses refract light through multiple lens pieces so that the light of the scene's image can fall on the photosensitive element. The photosensitive element may, depending on the camera's specification, use the detection principle of a charge-coupled device or of a complementary metal-oxide semiconductor; it converts the light signal into an electrical signal through a photosensitive material and outputs the converted electrical signal as image data. The images captured by the camera can be used to perform image correction on the projection device.
The projection device has an image correction function. When the image projected on the projection surface becomes trapezoidal or otherwise distorted, the projection device can automatically correct the projected image to obtain an image of regular shape, which may be a rectangular image, thereby realizing image correction.
Specifically, it may be configured so that when an image correction instruction is received, the projection device turns on the image correction function and corrects the projected image. The image correction instruction is the control instruction used to trigger the projection device to perform image correction automatically.
In some embodiments, the image correction instruction may be actively input by the user. For example, after the projection device is powered on, it can project an image on the projection surface. At this time, the user can press the preset image correction switch on the projection device, or the image correction key on the accompanying remote control, so that the device turns on the image correction function and automatically performs image correction on the projected image.
In some embodiments, the image correction instruction may also be generated automatically according to the control program built into the projection device.
The projection device may actively generate the image correction instruction after startup. For example, when the device detects the first video-signal input after startup, it may generate an image correction instruction, thereby triggering the image correction function.
The projection device may also generate the image correction instruction itself during operation. Considering that while using the projection device the user may deliberately move it or accidentally bump it, changing the device's placement posture or installation position, the projected image may also become a trapezoidal image. At this time, to guarantee the user's viewing experience, the projection device can perform image correction automatically.
Specifically, the projection device can monitor its own situation in real time. When it detects that its placement posture or installation position has changed, it can generate an image correction instruction, thereby triggering the image correction function.
In some embodiments, the projection device monitors the device's movement in real time through its configured components and feeds the monitoring results back to the controller immediately, so that after the device moves, the controller starts the image correction function at once and realizes automatic correction of the projected image at the first opportunity. For example, through the gyroscope or TOF (Time of Flight) sensor configured on the projection device, the controller receives monitoring data from the gyroscope and the TOF sensor to determine whether the projection device has moved.
After determining that the projection device has moved, the controller can generate an image correction instruction and turn on the image correction function.
The time-of-flight sensor realizes distance measurement and position-movement monitoring by adjusting the frequency of the emitted pulses; its measurement accuracy does not decrease as the measured distance grows, and it has strong anti-interference capability.
FIG. 7 is a schematic diagram of the system framework with which a projection device implements display control in some embodiments.
In some embodiments, the projection device provided by the present application has long-focus micro-projection characteristics. The projection device includes a controller that performs display control of the optical machine's picture through preset algorithms, implementing automatic keystone correction of the displayed picture, automatic screen entry, automatic obstacle avoidance, automatic focusing, eye protection and other functions. It can be understood that with this geometry-correction-based display control method, the projection device can be moved flexibly in long-focus micro-projection scenarios; during each move, in view of problems that may occur, such as distortion of the projected picture, occlusion of the projection surface by foreign objects or the projected picture abnormally leaving the screen, the controller can control the projection device to perform the automatic display correction function so that it automatically returns to normal display.
In some embodiments, the geometry-correction-based display control system includes an application service layer (APK Service: Android application package Service), a service layer and an underlying algorithm library.
The application service layer is used to realize the interaction between the projection device and the user. Based on the display of the user interface, the user can configure the device's parameters and the displayed picture, and the controller, by coordinating and invoking the algorithm services corresponding to the various functions, realizes the function of automatically correcting the displayed picture when display anomalies occur.
The service layer may include the correction service, the camera service, the time-of-flight (TOF) service and the like. Upward, the services correspond to the application service layer (APK Service), realizing the specific functions of the projection device's different configured services; downward, the service layer interfaces with data-acquisition services such as the algorithm library, the camera and the time-of-flight sensor, encapsulating the complex underlying logic.
The underlying algorithm library provides the correction service and the control algorithms with which the projection device implements its functions. The algorithm library can perform the mathematical operations based on OpenCV, providing basic computing capability for the correction service; OpenCV is a cross-platform computer-vision and machine-learning software library released under the BSD open-source license that can run in many existing operating-system environments.
In some embodiments, the projection device is equipped with a gyroscope sensor. While the device moves, the gyroscope sensor can sense the positional movement and actively collect movement data, which are then sent through the system framework layer to the application service layer to support the application data needed during user-interface and application interaction; the collected data can also be used for data calls by the controller when implementing algorithm services.
In some embodiments, the projection device is equipped with a time-of-flight (TOF) sensor. After the time-of-flight sensor collects the corresponding data, the data are sent to the time-of-flight service corresponding to the service layer;
After the above time-of-flight service obtains the data, it sends the collected data through the inter-process communication framework to the application service layer, where the data are used for the controller's data calls, the user interface, applications and other interactions.
In some embodiments, the projection device is equipped with a camera for capturing images, which may be implemented as a binocular camera, a depth camera, a 3D camera or the like;
The data collected by the camera are sent to the camera service, which then sends the captured image data to the inter-process communication framework and/or the projection-device correction service. The correction service can receive the camera data sent by the camera service, and the controller can invoke the corresponding control algorithm in the algorithm library for the different functions to be implemented.
In some embodiments, data are exchanged with the application service through the inter-process communication framework, and the computation results are then fed back to the correction service through that framework; the correction service sends the obtained computation results to the projection device's operating system to generate control signaling, which is sent to the optical-machine control driver to control the optical machine's operating state and realize automatic correction of the displayed image.
In some embodiments, when an image correction instruction is detected, the projection device can correct the projected image.
For the correction of the projected image, an association among distance, horizontal included angle and offset angle can be created in advance. The controller of the projection device then obtains the current distance from the optical machine to the projection surface and, combining the above association, determines the angle between the optical machine and the projection surface at that moment, realizing projected-image correction. The angle is specifically implemented as the angle between the optical machine's central axis and the projection surface.
However, in some complex environments, the pre-created association may not suit all complex environments, causing the projected-image correction to fail.
To correct the projected image accurately, the projection device can re-determine the position of the area to be projected and project the playback content onto the area to be projected, realizing the correction of the projected image.
FIG. 8 is an interaction flowchart of the components of a projection device in some embodiments.
In some embodiments: step 1, the image correction instruction is obtained; step 2, photographing is started; step 3, the image is corrected; step 4, projection is started; step 5, the content is played.
Specifically, when the image correction instruction is obtained, the projection device can perform keystone correction on the projected image. Specifically, the correction function can be realized on the projected image through a preset keystone correction algorithm. The projection device can project a correction chart onto the projection surface and determine the position information of the projection surface according to the correction chart. From the position information of the projection surface, the projection relationship between the projection surface and the projection device can be determined. In the embodiments of the present application, the projection relationship refers to the relationship with which the projection device projects images onto the projection surface, specifically the mapping relationship between the playback content projected by the projection device's optical machine and the projection surface. After the projection relationship between the projection surface and the projection device is determined, the projection device can determine the position information of the area to be projected and project accordingly, realizing image correction.
It should be noted that the embodiments of the present application can construct, based on the binocular camera, the transformation matrix between the projection surface in the world coordinate system and the optical-machine coordinate system. This transformation matrix is the homography between the projected image on the projection surface and the playback chart played by the optical machine; this homography, also called the projection relationship, can be used to realize arbitrary shape conversion between the projected image and the playback chart.
In some embodiments, when the image correction instruction is obtained, the projection device can first project the correction chart. Specifically, the controller can control the optical machine to project the correction chart onto the projection surface.
After projecting the correction chart, the controller can also control the camera to photograph the correction chart displayed on the projection surface to obtain the correction image.
The correction chart may contain a plurality of correction feature points, so the correction image captured by the camera also contains all the correction feature points of the correction chart. The position information of the projection surface can be determined through the correction feature points. It should be noted that for a plane, once the positions of three points on the plane are determined, the position information of the plane can be determined. Therefore, to be able to determine the position information of the projection surface, the positions of at least three points on the projection surface must be determined; that is, the correction chart must include at least three correction feature points, from which the position information of the projection surface can be determined.
In some embodiments, the correction chart may contain patterns and color features preset by the user. The correction chart may be a checkerboard chart composed of alternating black and white squares; as shown in FIG. 9, the correction feature points contained in the checkerboard chart are the corner points of the rectangles. The pattern in the correction chart may also be a ring chart containing ring patterns; as shown in FIG. 10, the correction feature points contained in the ring chart are the solid points corresponding to each ring. In some embodiments, the correction chart may also be a combination of the above two kinds of patterns, or other patterns with identifiable feature points.
FIG. 11 is a schematic flowchart of obtaining the projection relationship in some embodiments.
In some embodiments, after the optical machine projects the correction chart onto the projection surface, the controller can control the camera to photograph the correction chart and obtain the correction image, so as to obtain the positions of the correction feature points.
Specifically, the camera may be a binocular camera, with one camera arranged on each side of the optical machine. The binocular camera photographs the correction chart; the left camera captures the first correction image and the right camera captures the second correction image.
In step S1101, the controller can perform image recognition on the correction image to obtain the first position of a correction feature point in the correction image. In the embodiments of the present application, the first position refers to the coordinate information of the correction feature point in the image coordinate system corresponding to the correction image.
The image coordinate system refers to a coordinate system whose origin is the image center and whose X and Y axes are parallel to the two sides of the image. The image coordinate system in the embodiments of the present application may be set as follows: for the projection area preset by the user, the center point of the projection area is set as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis. The image coordinate system can be set in advance according to the preset projection area.
The coordinate information of the correction feature points in the correction image can be determined according to the image coordinate system.
For one and the same correction feature point, its first position in the first correction image captured by the left camera and its first position in the second correction image captured by the right camera may differ. From the two first positions of the same correction feature point, the coordinate of that correction feature point in the camera coordinate system of either the left or the right camera can be determined.
In the embodiments of the present application, the camera coordinate system is specifically a spatial rectangular coordinate system established with the camera's optical point as the center, the optical axis as the Z axis and the plane parallel to the projection surface as the XOY plane.
For a binocular camera, it suffices to determine the coordinate of the correction feature point in the camera coordinate system corresponding to either camera; the embodiments of the present application take the left camera as the example.
Specifically, from the first positions of the same correction feature point in the two correction images, the coordinate information of that correction feature point in the left camera's coordinate system can be determined, denoted P(x, y, z).
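Recovering P(x, y, z) from the two image positions is classical stereo triangulation. The sketch below shows one way to do it with OpenCV; the intrinsics, the 60 mm baseline and the pixel coordinates are illustrative placeholders, not values from the application (in practice the projection matrices would come from the factory stereo calibration).

```python
import numpy as np
import cv2

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])     # shared intrinsics (assumed)
Pl = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # left camera at the origin
Pr = K @ np.hstack([np.eye(3), np.array([[-60.0], [0], [0]])])     # right camera, 60 mm baseline

uv_left  = np.array([[652.0], [368.0]])   # first position in the first (left) correction image
uv_right = np.array([[598.0], [368.0]])   # first position in the second (right) correction image

Xh = cv2.triangulatePoints(Pl, Pr, uv_left, uv_right)              # homogeneous 4x1 result
P = (Xh[:3] / Xh[3]).ravel()                                       # P(x, y, z) in the left-camera frame
print(P)
```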
In some embodiments, after obtaining the coordinate information of the correction feature point in the left camera's coordinate system, the controller can convert that coordinate information into the coordinate of the correction feature point in the optical-machine coordinate system.
In the embodiments of the present application, the optical-machine coordinate system is specifically a spatial rectangular coordinate system established with the optical machine's optical point as the center, the optical axis as the Z axis and the plane parallel to the projection surface as the XOY plane. It should be noted that the optical-machine coordinate system and the camera coordinate system are mutually convertible, so the coordinate of a correction feature point in the camera coordinate system can be converted into a coordinate in the optical-machine coordinate system. Specifically, the correction feature point can be converted between the two coordinate systems according to the extrinsic parameters between the optical machine and the camera. These extrinsics are device parameters that are already marked on the housing or in the manual when the projection device is manufactured and leaves the factory; they are usually set based on the device's function, assembly, manufacture and parts, apply to all projection devices of the same model, and may include the rotation matrix and the translation matrix between the optical machine and the camera.
From the extrinsics between the optical machine and the camera, the conversion relation between the optical-machine and camera coordinate systems can be determined, and the coordinate of the correction feature point in the optical-machine coordinate system further obtained.
The conversion formula is as follows:
P′(x′, y′, z′) = R · P + T    (1)
where P′(x′, y′, z′) is the coordinate of the correction feature point in the optical-machine coordinate system, R is the rotation matrix between the optical machine and the camera, and T is the translation matrix between the optical machine and the camera.
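Formula (1) amounts to one matrix multiply and add per point. A minimal sketch with placeholder extrinsics (the identity rotation and a 30 mm offset are invented for illustration):

```python
import numpy as np

R = np.eye(3)                          # rotation between camera and optical machine (placeholder)
T = np.array([30.0, 0.0, 0.0])         # translation between the two frames, e.g. 30 mm along X
P = np.array([120.0, -45.0, 2200.0])   # feature point in camera coordinates

P_prime = R @ P + T                    # P' in optical-machine coordinates, formula (1)
```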
In some embodiments, after the coordinates of the correction feature points in the optical-machine coordinate system are obtained, the projection-plane equation of the projection surface in the optical-machine coordinate system can be determined.
In step S1102, the first coordinates of the correction feature points in the camera coordinate system are obtained according to the first positions.
It should be noted that the coordinate information of at least three points is needed before the position information of a plane can be determined. Therefore, the controller can obtain the first positions of at least three correction feature points in the correction image and determine, from the first positions of these correction feature points, the first coordinates in the camera coordinate system.
In step S1103, the first coordinates can further be converted into the second coordinates of the correction feature points in the optical-machine coordinate system.
In step S1104, after determining the second coordinates of at least three correction feature points in the optical-machine coordinate system, the controller can fit the coordinates of these correction feature points to obtain the projection-plane equation of the projection surface in the optical-machine coordinate system. The projection-plane equation can be expressed as:
z = a·x + b·y + c    (2)
or, equivalently, in the general plane form:
a·x + b·y − z + c = 0    (3)
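The fit in step S1104 is an ordinary least-squares problem in the coefficients a, b, c. A minimal sketch, with invented placeholder points in optical-machine coordinates:

```python
import numpy as np

# Three or more correction feature points in the optical-machine frame (placeholders).
pts = np.array([[100.0, 50.0, 2180.0],
                [-80.0, 40.0, 2210.0],
                [10.0, -90.0, 2195.0],
                [60.0, 70.0, 2188.0]])

A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
a, b, c = np.linalg.lstsq(A, pts[:, 2], rcond=None)[0]   # plane z = a*x + b*y + c, formula (2)
```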
In step S1105, having determined the projection-plane equation of the projection surface in the optical-machine coordinate system, the controller can obtain, according to the projection-plane equation, the transformation matrix between the optical-machine coordinate system and the world coordinate system; this transformation matrix is used to represent the projection relationship.
In the embodiments of the present application, the world coordinate system is set as follows: the image coordinate system is taken as the XOY plane, i.e. the projection surface is the XOY plane, with the origin at the center point of the projection area preset by the user; the Z axis is set in the direction perpendicular to the projection surface, establishing the spatial coordinate system.
When obtaining the transformation matrix between the optical-machine and world coordinate systems, the controller can determine the representation of the projection surface in the world coordinate system and its representation in the optical-machine coordinate system respectively.
Specifically, the controller can first determine the unit normal vector of the projection surface in the world coordinate system.
Since the projection surface itself is the XOY plane of the world coordinate system, the unit normal vector of the projection surface in the world coordinate system can be expressed as:
m = (0, 0, 1)ᵀ    (4)
The controller can also obtain, from the projection-plane equation in the optical-machine coordinate system, the unit normal vector of the projection surface in the optical-machine coordinate system, which can be expressed as:
n = (−a, −b, 1)ᵀ / √(a² + b² + 1)    (5)
From the unit normal vectors of the projection surface in the two coordinate systems, the transformation matrix between the optical-machine and world coordinate systems can be obtained; their mutual relation is expressed as:
m = R1 · n    (6)
where R1 denotes the transformation matrix between the optical-machine coordinate system and the world coordinate system.
This transformation matrix can then represent the mapping relationship between the playback content projected by the optical machine and the projection surface.
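Assuming the normal in formula (5) is oriented toward the positive Z axis, R1 can be recovered as the rotation that takes n onto m about their common perpendicular. A minimal numpy/OpenCV sketch with placeholder plane coefficients:

```python
import numpy as np
import cv2

a, b = 0.02, -0.05                            # coefficients from the fitted plane (placeholders)
n = np.array([-a, -b, 1.0])
n /= np.linalg.norm(n)                        # unit normal of z = a*x + b*y + c, formula (5)
m = np.array([0.0, 0.0, 1.0])                 # unit normal of the world XOY plane, formula (4)

axis = np.cross(n, m)                         # rotation axis perpendicular to both normals
angle = float(np.arccos(np.clip(n @ m, -1.0, 1.0)))
R1, _ = cv2.Rodrigues(axis / np.linalg.norm(axis) * angle)
assert np.allclose(R1 @ n, m)                 # m = R1 * n, formula (6)
```

Note that two non-parallel unit vectors determine the rotation only up to a spin about m; the roll component is supplied separately by the gyroscope, as described later for the keystone correction algorithm.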
After the mapping relationship is determined, the controller can convert the coordinates of a point between the world coordinate system and the optical-machine coordinate system. For a target area already determined on the projection surface, its coordinate representation in the world coordinate system can be determined directly. The controller can convert the coordinate representation in the world coordinate system into the coordinate representation in the optical-machine coordinate system through the transformation matrix, thereby determining the position information of the target area for the projection device, and project the playback content directly into that target area, realizing image correction.
In the related art, when the equation of a plane is determined through a feature chart, all the feature points in the chart are generally recognized together and fitted together to determine the plane's equation. However, camera distortion exists during shooting: in the captured picture, the farther a region is from the picture center, the larger the distortion. For feature points extracted in heavily distorted regions, the accuracy of the recognized coordinates is poor; if such feature points are used for fitting, the resulting plane equation is also inaccurate, which affects the accuracy of image correction and degrades the result.
In some embodiments, to improve the accuracy of image correction, when the image correction instruction is obtained, the controller can project onto the projection surface a preset correction chart that may include several correction regions set by the user. FIG. 12 is a schematic diagram of a correction chart in some embodiments. The correction chart may include three kinds of correction regions: a first correction region, second correction regions and third correction regions.
The three kinds of correction regions can be arranged successively along the direction from the center of the correction chart to its edge. The first correction region is located in the middle region of the correction chart, such as region A in the figure. To be able to determine the projection-plane equation, the first correction region must include at least three correction feature points; the first correction region may be a checkerboard pattern or a ring pattern. The second correction regions lie between the first correction region and the third correction regions, such as the four B regions in the figure, comprising the four correction regions B1-B4. The area of each second correction region may be the same, each smaller than the area of the first correction region. Each second correction region includes a pattern preset by the user, the pattern including at least three correction feature points; the patterns of the second correction regions may be identical or different, which is not limited in the embodiments of the present application. The third correction regions are located in the edge region of the correction chart, such as the 16 C regions in the figure, comprising the 16 correction regions C1-C16. The area of each third correction region may be the same, each smaller than the area of a second correction region. Each third correction region includes at least three correction feature points and may include a pattern preset by the user; the patterns of the third correction regions may be identical or different.
Since camera distortion exists during shooting, it can be seen that among the three kinds of correction regions, the first correction region is distorted least, so the projection-plane equation determined from the correction feature points in the first correction region is the most accurate. The second correction regions are distorted more than the first correction region, so the accuracy of the projection-plane equation determined from their correction feature points is lower than that of the first correction region. The third correction regions are distorted most, so the accuracy of the projection-plane equation determined from their correction feature points is the lowest.
Therefore, the controller can select the correction feature points in the correction image as follows:
Since the middle region of the correction image captured by the camera is distorted least, the middle region can be recognized preferentially. Specifically, the controller can first recognize the first correction region in the correction image to obtain the position information of at least three correction feature points in the first correction region. From the position information of the at least three correction feature points, the projection-plane equation can be determined.
Considering that when the optical machine projects the correction chart, part of the chart may be occluded for environmental reasons and fail to be displayed on the projection surface, e.g. the first correction region may be occluded so that it does not exist in the correction image captured by the camera, which would affect the subsequent obtaining of the projection-plane equation; therefore, if the controller does not recognize the first correction region, it can recognize the other correction regions.
Since the second correction regions are distorted less than the third correction regions, the second correction regions can be recognized preferentially. Specifically, the controller can recognize the second correction regions in the correction image to obtain the position information of at least three correction feature points in them. Since each second correction region contains at least three correction feature points, any one second correction region can be recognized to obtain the position information of at least three correction feature points, or all second correction regions can be recognized to obtain the position information of at least three correction feature points.
If all the second correction regions are also occluded, i.e. the controller cannot recognize any second correction region in the correction image, it can recognize the third correction regions. Specifically, the controller can recognize the third correction regions in the correction image to obtain the position information of at least three correction feature points in them; any one third correction region, or all third correction regions, can be recognized to obtain the position information of at least three correction feature points. From the position information of the at least three correction feature points, the projection-plane equation can be determined.
In the above method, the controller recognizes each kind of correction region separately; that is, the at least three recognized correction feature points all belong to the same kind of correction region.
In some embodiments, the controller can also recognize all the correction regions in the correction image, thereby recognizing the position information of at least three correction feature points at random among all the correction regions. These correction feature points may belong to the same kind of correction region or to different kinds of correction regions.
In some embodiments, considering that recognizing different correction regions yields different image-correction accuracy, multiple recognition modes can be configured for the projection device.
For example, three recognition modes can be provided. In the first recognition mode, the controller recognizes only the first correction region and uses the correction feature points in the first correction region to obtain the projection-plane equation and perform image correction. In this case the accuracy of image correction is the highest, but compared with the whole correction image the number of correction feature points in the first correction region is small, so the adaptability of recognizing correction feature points in the first correction region is low: once the first correction region is occluded, no correction feature point can be recognized.
In the second recognition mode, the controller can recognize the first and second correction regions and use the correction feature points of both kinds of regions for image correction. Compared with the first recognition mode, the accuracy of image correction is lower but the adaptability of the recognized correction feature points is higher: when the first correction region is occluded, the second correction regions can be used to recognize correction feature points.
In the third recognition mode, the controller can recognize all the correction regions and thus all the correction feature points in them. Compared with the first and second recognition modes, the accuracy of image correction is lower but the adaptability of the recognized correction feature points is higher: even if the first and second correction regions are both occluded, the third correction regions can be used to recognize correction feature points.
The user can select different recognition modes as needed.
In some embodiments, after the projection relationship is determined, i.e. after the transformation matrix between the optical-machine and world coordinate systems is determined, the controller can use the projection relationship to determine the target position information of the area to be projected in the projection surface. In the embodiments of the present application, the target position information refers to the position information of the area to be projected in the optical-machine coordinate system.
The controller can first determine the position information of the area to be projected in the projection surface in the world coordinate system, then convert that position information into position information in the optical-machine coordinate system according to the transformation matrix. The controller can control the optical machine to project the playback content to the target position information, i.e. into the area to be projected.
In some embodiments, when obtaining the target position information of the area to be projected, the controller can first determine the specific area to be projected.
The area to be projected may be a preset projection area. Specifically, it may be set by professional after-sales technicians, or it may be a fixed projection area set by the user: the display area projected onto the projection surface when the projection device is placed at the position that achieves the best projection effect during operation.
The position information of this preset projection area in the projection surface is determined, so its position information in the projection surface, i.e. in the image coordinate system, can be determined directly. The projection area is generally set as a rectangular area, and the position information of the rectangular area can be expressed through the coordinates of its four vertices. Let the coordinate of one vertex of the rectangular area be (x, y). Since the projection surface is the XOY plane of the world coordinate system, with the origin at the center point of the preset projection area, the representation of this vertex coordinate in the world coordinate system is (x, y, 0).
The controller can convert the coordinate representation in the world coordinate system into the coordinate representation in the optical-machine coordinate system according to the transformation matrix, so the representation of this vertex coordinate in the optical-machine coordinate system is (x′, y′, z′). The controller can further obtain the coordinates of all vertices of the preset projection area in the optical-machine coordinate system, thereby determining the position information of the projection area in the optical-machine coordinate system, and can control the optical machine to project the playback content to that position information, realizing image correction.
In some embodiments, when determining the area to be projected, the controller can obtain the maximum inscribed rectangle of the correction image corresponding to the correction chart and determine that maximum inscribed rectangle as the area to be projected.
The size ratio (aspect ratio) of the maximum inscribed rectangle may be the same as that of the correction chart. Within the display region of the correction chart in the correction image, the rectangle's size is continually adjusted according to the chart's aspect ratio until all four vertices of the rectangle lie on the edge of the chart's display region, thereby obtaining the maximum inscribed rectangle.
The controller can obtain the position information of this maximum inscribed rectangle in the optical-machine coordinate system and control the optical machine to project the playback content into the maximum inscribed rectangle, realizing image correction.
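One simple way to realize the search just described is to grow (or here, shrink) a rectangle of fixed aspect ratio centered in the chart's display region until its four corners all fall inside the region mask. A minimal sketch, assuming a binary mask of the display region; the 16:9 ratio and the corner-only test are illustrative simplifications:

```python
import numpy as np

def max_inscribed_rect(mask, ratio=16 / 9):
    """Largest axis-aligned rectangle with the chart's aspect ratio whose corners lie in mask."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                    # center of the chart's display region
    for w in range(min(mask.shape[1], int(mask.shape[0] * ratio)), 0, -2):
        h = w / ratio
        corners = [(cy - h / 2, cx - w / 2), (cy - h / 2, cx + w / 2),
                   (cy + h / 2, cx - w / 2), (cy + h / 2, cx + w / 2)]
        if all(0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] and mask[int(y), int(x)]
               for y, x in corners):
            return int(cx - w / 2), int(cy - h / 2), w, int(h)   # x, y, width, height
    return None
```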
FIG. 13 is a schematic diagram before and after image correction in some embodiments.
In some embodiments, when the projection device projects the playback content onto the projection surface, if an obstacle exists between the projection device and the projection surface, part of the playback content cannot be projected onto the projection surface. Therefore, the projection device can perform obstacle-avoidance processing: avoid the obstacle and determine an accurate projection area, ensuring that the playback content projected into that projection area is complete.
Specifically, the projection device has an obstacle-avoidance function that the user can choose to turn on or off. When the obstacle-avoidance function is off, the projection device performs no obstacle-avoidance processing. When it is on, the controller can control the optical machine to project an obstacle-avoidance chart onto the projection surface and control the camera to photograph the obstacle-avoidance chart, obtaining the obstacle-avoidance image. The obstacle-avoidance chart may be a pure-white chart.
In some embodiments, when it detects that the obstacle-avoidance function of the projection device is on, the controller can perform keystone correction and obstacle-avoidance processing synchronously.
The controller can control the optical machine to project the obstacle-avoidance chart onto the projection surface first and control the camera to photograph it, save the obstacle-avoidance image, and then perform obstacle-avoidance processing based on the obstacle-avoidance image. After photographing the obstacle-avoidance chart, the controller can control the optical machine to project the correction chart onto the projection surface and control the camera to photograph it, performing keystone correction based on the correction image.
In some embodiments, after the camera is controlled to capture the obstacle-avoidance image corresponding to the obstacle-avoidance chart, the obstacle-avoidance image can be preprocessed to ensure that only the display region corresponding to the obstacle-avoidance chart remains in it. The controller can further perform obstacle-avoidance processing based on the obstacle-avoidance image to determine the area to be projected in the projection surface.
In the related art, when obstacle-avoidance processing is performed for a projection device and an obstacle is present in the captured image, binarization is generally used to extract the obstacle's contour and realize obstacle avoidance. However, binarization uses a fixed threshold; when the user's scene changes (e.g. from day to night), the brightness of the light and the gray values of the obstacle all change, and the fixed threshold clearly does not suit all scenes, so obstacle avoidance cannot be realized precisely.
In some embodiments, the controller can perform HSV (Hue, Saturation, Value) preprocessing on the obstacle-avoidance image. Specifically, the obstacle-avoidance image can be converted into HSV space; HSV space defines the HSV values of the various colors, so the HSV values of all pixels in the obstacle-avoidance image can be determined. FIG. 14 is a schematic diagram of the HSV values of each color in some embodiments. It can be seen that for white the HSV values are:
0 < H < 180, 0 < S < 30, 221 < V < 255
That is, since the obstacle-avoidance chart is a pure-white chart, the HSV values of its pixels should satisfy the white HSV value range.
The controller can check whether the HSV value of each pixel in the obstacle-avoidance image satisfies the white HSV values.
In some embodiments, considering the influence of errors, the controller can preset an HSV threshold slightly wider than the white HSV range, for example: 0 < H < 180, 0 < S < 50, 200 < V < 255.
The controller can check whether the HSV value of each pixel in the obstacle-avoidance image conforms to the preset HSV threshold. Pixels that do not meet the condition are set as first-class pixels, and pixels that meet the condition are set as second-class pixels.
The controller can check whether the number of first-class pixels satisfies the preset first quantity condition. The first quantity condition can be set as: the number of first-class pixels does not exceed half of the total number of pixels.
If the condition is satisfied, the controller can set the gray values of all first-class pixels to a preset value, which may be 0. If the condition is not satisfied, i.e. the number of first-class pixels is greater than half the total, no processing is performed.
When the projection device projects onto other backgrounds (e.g. yellow or blue backgrounds), most of the projection area would be treated as an obstacle after the HSV conversion, causing erroneous obstacle avoidance; hence the above method of judging the number of pixels outside the HSV threshold range avoids this special case.
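The HSV pre-check maps directly onto OpenCV's `inRange`. A minimal sketch using the relaxed white range given above; the synthetic image stands in for the captured obstacle-avoidance image:

```python
import cv2
import numpy as np

# Synthetic stand-in for the avoidance image: a white chart with a dark obstacle.
img = np.full((720, 1280, 3), 245, np.uint8)
cv2.rectangle(img, (500, 200), (700, 500), (40, 40, 40), -1)

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
white = cv2.inRange(hsv, (0, 0, 200), (180, 50, 255))   # 255 where the pixel meets the threshold
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

first_class = white == 0                                # pixels outside the white range
if first_class.sum() <= gray.size / 2:                  # first quantity condition
    gray[first_class] = 0                               # set their gray value to the preset 0
```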
In some embodiments, after the HSV preprocessing of the obstacle-avoidance image, the controller can further perform obstacle-avoidance processing on the obstacle-avoidance image.
The controller can perform grayscale processing on the obstacle-avoidance image and count the number of pixels corresponding to each gray value, obtaining a gray-value curve, as shown in FIG. 15. The gray-value curve is used to represent the relation between gray value and pixel count, where the X axis represents the gray value and the Y axis the pixel count.
The controller can first determine the gray value with the largest pixel count, set as the first gray value, such as point A in the figure. Centered on the first gray value, it searches on each side of the first gray value for the nearest local minimum of the gray-value curve. As can be seen, point B in the figure is the local minimum point on the left and point C is the local minimum point on the right, so the two gray-value minima corresponding to points B and C are determined.
The controller can record the gray-value interval between these two gray-value minima.
In some embodiments, after obtaining a gray-value minimum, the controller can check the gray-value minimum against a preset condition. The preset condition is: the pixel count of the gray-value minimum is less than a preset proportion of the pixel count of the first gray value. The preset proportion can be set freely, e.g. 2/3; the preset condition may thus be: the pixel count of the gray-value minimum is less than 2/3 of the pixel count of the first gray value.
When a gray-value minimum is detected to satisfy the preset condition, that gray-value minimum can be retained. If it does not satisfy the preset condition, that gray-value minimum is not adopted; instead the search continues in the direction of that gray-value minimum for the next nearest gray-value minimum, until a gray-value minimum satisfying the preset condition is found.
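Continuing the previous sketch (reusing its `gray` array), the valley search around the histogram peak can be written as a short walk in each direction; the 2/3 ratio is the example proportion given above:

```python
import numpy as np

def nearest_valley(hist, peak, step, ratio=2 / 3):
    i = peak + step
    while 0 < i < 255:
        if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1] and hist[i] < ratio * hist[peak]:
            return i                                   # qualifying local minimum
        i += step
    return i                                           # fall back to the histogram end

hist = np.bincount(gray.ravel(), minlength=256)
peak = int(hist.argmax())                              # first gray value (point A)
lo = nearest_valley(hist, peak, -1)                    # left gray-value minimum (point B)
hi = nearest_valley(hist, peak, +1)                    # right gray-value minimum (point C)
projection_mask = (gray >= lo) & (gray <= hi)          # pixels in the first gray-value interval
```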
FIG. 16 is a schematic diagram of a gray-value curve in some embodiments. Point P_A is the gray-value point with the largest pixel count, i.e. the first gray value. To the left of point P_A there are two local minimum points, P_B1 and P_B2. P_B1 is the one nearest to P_A, but the pixel count of P_B1 is clearly larger than 2/3 of the pixel count of P_A, so P_B1 cannot serve as the gray-value minimum and the search must continue to the left. Point P_B2 meets the condition, so P_B2 can serve as the gray-value minimum. To the right of point P_A there is one local minimum point, P_C, which meets the preset condition and can serve as the gray-value minimum. Therefore points P_B2 and P_C correspond to the two gray-value minima.
In some embodiments, after the gray-value minima on both sides of the first gray value are determined, the gray-value interval between the two gray-value minima can further be determined, and the region corresponding to all pixels within that gray-value interval determined as the projection region.
The controller can obtain the maximum inscribed rectangle within the projection region and take that maximum inscribed rectangle as the area to be projected.
In some embodiments, considering that the shooting environment can be complex, after the first gray value is selected the number of pixels in the gray-value interval corresponding to the first gray value may be very small, so that the selected area to be projected would be small.
Therefore, after obtaining the gray-value interval corresponding to the first gray value (set as the first gray-value interval), the controller can check whether the number of all pixels in the first gray-value interval satisfies the preset second quantity condition. The second quantity condition can be set as: the number of all pixels in the first gray-value interval is less than a preset proportion of the total pixel count. The preset proportion may be 1/30; that is, the second quantity condition can be set as: the number of all pixels in the first gray-value interval is less than 1/30 of the total pixel count.
If the condition is detected as not satisfied, the region corresponding to all pixels in the first gray-value interval can still be taken as the projection region, and the area to be projected further obtained.
If the condition is detected as satisfied, the controller can obtain, on the two sides of the first gray-value interval, the second gray value and the third gray value with the largest pixel counts respectively.
Specifically, gray values generally range from 0 to 255. Let the first gray-value interval be (a, b); the whole gray-value range then divides into three intervals: (0, a), (a, b) and (b, 255), where in the first gray-value interval (a, b) the gray value with the most pixels is the first gray value.
On each side of the first gray-value interval, the gray value with the largest pixel count is sought: in the interval (0, a), the gray value with the largest pixel count is the second gray value; in the interval (b, 255), the gray value with the largest pixel count is the third gray value.
Centered on the second gray value and on the third gray value respectively, the nearest gray-value minima on the two sides of each center are determined, and the interval between the two gray-value minima is determined as that center's gray-value interval. The specific determination method is as described above and is not repeated here.
Thus the second gray-value interval corresponding to the second gray value, and the third gray-value interval corresponding to the third gray value, can be obtained.
FIG. 17 is a schematic diagram of a gray-value curve in some embodiments.
Point P_A1 is the gray-value point with the largest pixel count, i.e. the first gray value. The gray-value minimum points corresponding to point P_A1 are points P_B1 and P_C1. Upon determining that the number of pixels within the interval between points P_B1 and P_C1 satisfies the second quantity condition, the second gray value P_A2 with the largest pixel count is sought to the left of point P_B1, and the third gray value P_A3 with the largest pixel count is sought to the right of point P_C1. The gray-value minimum points corresponding to point P_A2 are points P_B2 and P_C2, i.e. the interval between points P_B2 and P_C2 is the second gray-value interval. The gray-value minimum points corresponding to point P_A3 are points P_B3 and P_C3, i.e. the interval between points P_B3 and P_C3 is the third gray-value interval.
The controller can count the pixels in the second gray-value interval and in the third gray-value interval. Among the three intervals, i.e. the first, second and third gray-value intervals, the gray-value interval with the largest pixel count is determined as the target gray-value interval. The region formed by all pixels corresponding to the target gray-value interval can be determined as the projection region, and the controller further determines the area to be projected according to the projection region.
In some embodiments, after the area to be projected in the projection surface is determined, its position information in the world coordinate system can be obtained. According to the projection relationship, the controller can convert the position information in the world coordinate system into position information in the optical-machine coordinate system, obtaining the target position information.
According to the target position information, the controller can control the optical machine to project the playback content to the area to be projected, thereby realizing image correction.
The embodiments of the present application also provide an image correction method applied to a projection device; as shown in FIG. 18, the method includes:
Step 1801: in response to an image correction instruction, control the optical machine to project the correction chart onto the projection surface, and control the camera to photograph the correction chart to obtain the correction image. The correction chart contains correction feature points.
Step 1802: determine the first positions of the correction feature points in the correction image, and obtain the projection relationship according to the first positions; the projection relationship refers to the mapping relationship between the playback content projected by the optical machine and the projection surface.
Step 1803: determine the target position information of the area to be projected in the projection surface according to the projection relationship.
Step 1804: control the optical machine to project the playback content to the area to be projected according to the target position information.
For the specific description, refer to the description of the specific implementations above.
FIG. 19A is a signaling sequence diagram of a projection device implementing the eye-protection function in another embodiment of the present application.
In some embodiments, the projection device provided by the present application can implement an eye-protection function. To prevent the risk of visual damage caused by a user accidentally entering the trajectory range of the laser emitted by the projection device, when the user enters the preset specific non-safe area where the projection device is located, the controller can control the user interface to display corresponding prompt information to remind the user to leave the current area, and the controller can also control the user interface to lower the display brightness to prevent the laser from harming the user's eyesight.
In some embodiments, when the projection device is configured in the children viewing mode, the controller automatically turns on the eye-protection switch.
In some embodiments, after the controller receives position-movement data sent by the gyroscope sensor, or foreign-object-intrusion data collected by other sensors, the controller controls the projection device to turn on the eye-protection switch.
In some embodiments, when the data collected by the time-of-flight (TOF) sensor, the camera or other devices trigger any preset threshold condition, the controller controls the user interface to lower the display brightness, display prompt information, and lower the optical machine's emission power, brightness and intensity, so as to protect the user's eyesight.
In some embodiments, the controller of the projection device can control the correction service to send signaling to the time-of-flight sensor to query the current state of the projection device, and the controller then receives the data feedback from the time-of-flight sensor.
The correction service can send to the inter-process communication framework (HSP Core) the signaling that notifies the algorithm service to start the eye-protection procedure; the inter-process communication framework (HSP Core) then makes service-capability calls from the algorithm library to invoke the corresponding algorithm services, which may include, for example, the photographing detection algorithm, the screenshot algorithm and the foreign-object detection algorithm;
Based on the above algorithm services, the inter-process communication framework (HSP Core) returns the foreign-object detection results to the correction service. For the returned results, if the preset threshold condition is reached, the controller controls the user interface to display prompt information and lower the display brightness; the signaling sequence is shown in FIG. 19A.
In some embodiments, with the eye-protection switch of the projection device in the on state, when the user enters the preset specific area, the projection device automatically lowers the intensity of the laser emitted by the optical machine, lowers the display brightness of the user interface and displays safety prompt information. The projection device's control of the above eye-protection function can be realized by the following methods:
Based on the projection picture obtained by the camera, the controller uses an edge-detection algorithm to identify the projection area of the projection device; when the projection area is displayed as a rectangle or a quasi-rectangle, the controller obtains the coordinate values of the four vertices of the rectangular projection area through a preset algorithm;
When realizing foreign-object detection within the projection area, the perspective-transform method can be used to rectify the projection area into a rectangle, and the difference between the rectangle and the projection screenshot computed, so as to judge whether a foreign object is present in the display area; if the judgment result is that a foreign object exists, the projection device automatically triggers the start of the eye-protection function.
When realizing foreign-object detection in a certain area outside the projection range, the camera content of the current frame can be differenced with the camera content of the previous frame to judge whether a foreign object has entered the area outside the projection range; if it is judged that a foreign object has entered, the projection device automatically triggers the eye-protection function.
At the same time, the projection device can also use a time-of-flight (ToF) camera or time-of-flight sensor to detect real-time depth changes in the specific area; if the depth-value change exceeds the preset threshold, the projection device automatically triggers the eye-protection function.
In some embodiments, the projection device analyzes the collected time-of-flight data, screenshot data and camera data to judge whether the eye-protection function needs to be turned on.
For example, based on the collected time-of-flight data, the controller performs a depth-difference analysis; if the depth difference is greater than a preset threshold X, e.g. when X is implemented as 0, it can be determined that a foreign object is already in the projection device's specific area. If a user is located in that specific area, their eyesight is at risk of being damaged by the laser, so the projection device automatically starts the eye-protection function to lower the intensity of the laser emitted by the optical machine, lower the display brightness of the user interface and display safety prompt information.
For another example, the projection device performs an additive-color-mode (RGB) difference analysis on the collected screenshot data; if the color-mode difference is greater than a preset threshold Y, it can be determined that a foreign object is already in the projection device's specific area. If a user is present in that specific area, their eyesight is at risk of being damaged by the laser, and the projection device automatically starts the eye-protection function, lowering the emitted laser intensity, lowering the display brightness of the user interface and displaying the corresponding safety prompt information.
For yet another example, the projection device obtains the projection coordinates from the collected camera data, determines the projection area of the projection device from those coordinates, and further performs an additive-color-mode (RGB) difference analysis within the projection area; if the color-mode difference is greater than the preset threshold Y, it can be determined that a foreign object is already in the projection device's specific area. If a user is present in that specific area, their eyesight is at risk of being damaged by the laser, and the projection device automatically starts the eye-protection function, lowering the emitted laser intensity, lowering the display brightness of the user interface and displaying the corresponding safety prompt information.
If the obtained projection coordinates lie in the extended area, the controller can still perform the additive-color-mode (RGB) difference analysis on the extended area; if the color-mode difference is greater than the preset threshold Y, it can be determined that a foreign object is already in the projection device's specific area. If a user is present in that specific area, their eyesight is at risk of being damaged by the laser emitted by the projection device, and the projection device automatically starts the eye-protection function, lowering the emitted laser intensity, lowering the display brightness of the user interface and displaying the corresponding safety prompt information, as shown in FIG. 19F.
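The three trigger tests above (depth difference, whole-frame RGB difference, and RGB difference restricted to the projection area) can be combined into one decision. A minimal sketch; the thresholds X and Y, the mean-difference metric and the region slice are illustrative assumptions:

```python
import numpy as np

def eye_protection_needed(depth_prev, depth_now, shot_prev, shot_now, region, X=0.0, Y=25.0):
    if np.abs(depth_now.astype(float) - depth_prev.astype(float)).max() > X:
        return True                                    # TOF depth difference over threshold X
    diff = np.abs(shot_now.astype(int) - shot_prev.astype(int))
    if diff.mean() > Y:
        return True                                    # whole-frame RGB difference over threshold Y
    return diff[region].mean() > Y                     # RGB difference inside the projection area

# Example region: a (row-slice, column-slice) pair covering the projection rectangle.
region = (slice(200, 500), slice(500, 700))
```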
FIG. 19B is a signaling sequence diagram of a projection device implementing the display-correction function in another embodiment of the present application.
In some embodiments, the projection device usually monitors device movement through the gyroscope or gyroscope sensor. The correction service sends signaling to the gyroscope for querying the device state, and receives the gyroscope's feedback signaling used to determine whether the device has moved.
In some embodiments, the display-correction strategy of the projection device can be configured so that when the gyroscope and the time-of-flight sensor change at the same time, the projection device triggers keystone correction preferentially; the controller starts keystone correction after the gyroscope data have been stable for a preset length of time; and the controller can also configure the projection device not to respond to instructions issued by remote-control keys while keystone correction is in progress. To support the realization of keystone correction, the projection device projects a pure-white chart.
The keystone correction algorithm can construct, based on the binocular camera, the transformation matrix between the projection surface in the world coordinate system and the optical-machine coordinate system; it further computes, combining the optical machine's intrinsics, the homography between the projected picture and the playback chart, and uses this homography to realize arbitrary shape conversion between the projected picture and the playback chart.
In some embodiments, the correction service sends the signaling for notifying the algorithm service to start the keystone-correction procedure to the inter-process communication framework (HSP CORE), and the framework further sends service-capability-call signaling to the algorithm service to obtain the algorithms corresponding to the capabilities;
The algorithm service obtains and executes the photographing and picture algorithm processing service and the obstacle-avoidance algorithm service, and sends them, carried in signaling, to the inter-process communication framework; in some embodiments, the framework executes the above algorithms and feeds the execution results back to the correction service, where the execution results may include photographing success and obstacle-avoidance success.
In some embodiments, if an error occurs while the projection device executes the above algorithms or transfers data, the correction service controls the user interface to display an error-return prompt and controls the user interface to project the keystone-correction and autofocus charts again.
Through the automatic obstacle-avoidance algorithm, the projection device can identify the screen, and, using the projection transformation, correct the projected picture to display within the screen, achieving alignment with the screen edges.
Through the autofocus algorithm, the projection device can obtain the distance between the optical machine and the projection surface using the time-of-flight (ToF) sensor, look up the best image distance in a preset mapping table based on that distance, and evaluate the sharpness of the projected picture with an image algorithm, fine-tuning the image distance on this basis.
In some embodiments, the automatic keystone-correction signaling sent by the correction service to the inter-process communication framework can contain other function-configuration instructions, e.g. control instructions such as whether to realize synchronous obstacle avoidance and whether to enter the screen.
The inter-process communication framework sends service-capability-call signaling to the algorithm service so that the algorithm service obtains and executes the autofocus algorithm, adjusting the viewing distance between the device and the screen; in some embodiments, after the autofocus algorithm realizes the corresponding function, the algorithm service can also obtain and execute the automatic screen-entry algorithm, a process that can include the keystone-correction algorithm.
In some embodiments, by executing automatic screen entry, the algorithm service can set the 8-position coordinates between the projection device and the screen; the viewing distance between the projection device and the screen is then adjusted again through the autofocus algorithm; finally, the correction result is fed back to the correction service, and the user interface is controlled to display the correction result, as shown in FIG. 19B.
In some embodiments, through the autofocus algorithm, the projection device uses its configured laser ranging to obtain the current object distance, so as to compute the initial focal length and the search range; the projection device then drives the camera to photograph and evaluates the sharpness with the corresponding algorithm.
Within the above search range, the projection device finds possible best focal lengths based on a search algorithm, then repeats the above photographing and sharpness-evaluation steps, finally finding the optimal focal length through sharpness comparison and completing autofocus.
For example: in step 19C01, the projection device starts; in step 19C02, the user moves the device, and the projection device refocuses after automatically completing correction; in step 19C03, the controller checks whether the autofocus function is on; when the autofocus function is not on, the controller ends the autofocus service; in step 19C04, when the autofocus function is on, the projection device obtains, through the middleware, the detection distance of the time-of-flight (TOF) sensor for calculation;
In step 19C05, the controller queries the preset mapping table according to the obtained distance to obtain the approximate focal length of the projection device; in step 19C06, the middleware sets the obtained focal length to the projection device's optical machine;
In step 19C07, after the optical machine emits laser light at the above focal length, the camera executes the photographing instruction; in step 19C08, the controller judges, based on the obtained photographing result and the evaluation function, whether focusing of the projection device is complete; if the judgment result meets the preset completion condition, the autofocus procedure is ended; in step 19C09, if the judgment result does not meet the preset completion condition, the middleware fine-tunes the focal-length parameter of the projection device's optical machine, e.g. gradually fine-tuning the focal length by a preset step, and sets the adjusted focal-length parameter to the optical machine again, thereby repeating the photographing and sharpness-evaluation steps and finally finding the optimal focal length through sharpness comparison, completing autofocus, as shown in FIG. 19C.
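A minimal sketch of this coarse-to-fine search, assuming `set_focus` and `capture` stand for the optical-machine and camera interfaces (they are assumptions here, passed in as callables); sharpness is scored with the variance of the Laplacian, a common evaluation function:

```python
import cv2

def sharpness(frame):
    return cv2.Laplacian(frame, cv2.CV_64F).var()          # higher means sharper

def autofocus(set_focus, capture, coarse_focus, step=5, span=50):
    best_f, best_s = coarse_focus, -1.0
    for f in range(coarse_focus - span, coarse_focus + span + 1, step):
        set_focus(f)                                       # fine-tune around the mapping-table value
        s = sharpness(capture())
        if s > best_s:
            best_f, best_s = f, s                          # keep the sharpest focus found so far
    set_focus(best_f)
    return best_f
```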
In some embodiments, the projection device provided by the present application can realize the display-correction function through the keystone-correction algorithm.
First, based on the calibration algorithm, two sets of extrinsics, i.e. rotation and translation matrices, between the two cameras and between the camera and the optical machine can be obtained. The optical machine of the projection device then plays a specific checkerboard chart, and the depth values of the projected checkerboard corner points are computed, e.g. by solving the xyz coordinate values through the translation relation between the binocular cameras and the similar-triangle principle. The projection surface is then fitted based on the xyz values, and its rotation and translation relations with respect to the camera coordinate system are solved, which may specifically include the pitch relation (Pitch) and the yaw relation (Yaw).
The roll (Roll) parameter value can be obtained through the gyroscope configured on the projection device, so as to compose the complete rotation matrix, and the extrinsics from the projection surface in the world coordinate system to the optical-machine coordinate system are finally computed.
Combining the R and T values of the camera and the optical machine computed in the above steps, the conversion relation between the projection-surface world coordinate system and the optical-machine coordinate system can be obtained; combining the optical machine's intrinsics, the homography matrix from points on the projection surface to points on the optical-machine chart can be composed.
Finally, a rectangle is selected on the projection surface, and the homography is used to solve inversely for the corresponding coordinates on the optical-machine chart; those coordinates are the correction coordinates, and setting them to the optical machine realizes keystone correction.
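The final step is one inverse homography application. A minimal sketch with placeholder values (the matrix H, mapping chart pixels to surface coordinates, and the rectangle dimensions are invented for illustration):

```python
import numpy as np
import cv2

H = np.array([[0.98, 0.02, 5.0],
              [-0.01, 1.01, -3.0],
              [1e-5, 2e-5, 1.0]])                     # chart -> projection surface (placeholder)

rect_on_surface = np.array([[[0.0, 0.0], [1600.0, 0.0],
                             [1600.0, 900.0], [0.0, 900.0]]])  # rectangle chosen on the surface

# Solve inversely through the homography for the chart coordinates: these are the
# correction coordinates to set on the optical machine.
correction_coords = cv2.perspectiveTransform(rect_on_surface, np.linalg.inv(H))
```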
For example, the flow is shown in FIG. 19D:
In step 19D01, the controller of the projection device obtains the depth values of the points corresponding to the photo's pixels, i.e. the coordinates of the projection points in the camera coordinate system;
In step 19D02, through the depth values, the middleware obtains the relation between the optical-machine coordinate system and the camera coordinate system;
In step 19D03, the controller computes the coordinate values of the projection points in the optical-machine coordinate system; in step 19D04, the plane fitted based on the coordinate values yields the angle between the projection surface and the optical machine;
In step 19D05, from the angle relation, the corresponding coordinates of the projection points in the world coordinate system of the projection surface are obtained;
In step 19D06, from the coordinates of the chart in the optical-machine coordinate system and the coordinates of the corresponding points on the projection surface, the homography matrix can be computed.
In step 19D07, the controller determines, based on the data obtained above, whether an obstacle exists;
In step 19D08, when an obstacle exists, rectangle coordinates are selected arbitrarily on the projection surface in the world coordinate system, and the area that the optical machine should project is computed from the homography relation;
In step 19D09, when no obstacle exists, the controller can, for example, obtain the two-dimensional-code feature points;
In step 19D10, the coordinates of the two-dimensional code in the prefabricated chart are obtained;
In step 19D11, the homography relation between the camera photo and the drawing chart is obtained;
In step 19D12, the obtained obstacle coordinates are converted into the chart, yielding the chart coordinates occluded by the obstacle.
In step 19D13, based on the coordinates of the obstacle-occluded chart area in the optical-machine coordinate system, the coordinates of the occluded area on the projection surface are obtained through the homography-matrix conversion;
In step 19D14, rectangle coordinates are selected arbitrarily on the projection surface in the world coordinate system while avoiding the obstacle, and the area that the optical machine should project is solved from the homography relation.
It can be understood that the obstacle-avoidance algorithm, at the rectangle-selection step of the keystone-correction algorithm flow, uses the algorithm (OpenCV) library to extract the foreign object's contour, and avoids the obstacle when selecting the rectangle, realizing the projection obstacle-avoidance function.
In some embodiments, as shown in FIG. 19E:
In step 19E01, the middleware obtains the two-dimensional-code chart photographed by the camera;
In step 19E02, the two-dimensional-code feature points are recognized and their coordinates in the camera coordinate system obtained;
In step 19E03, the controller further obtains the coordinates of the preset chart in the optical-machine coordinate system;
In step 19E04, the homography relation between the camera plane and the optical-machine plane is solved;
In step 19E05, based on the above homography relation, the controller recognizes the coordinates of the four vertices of the screen photographed by the camera;
In step 19E06, the range of the chart that the optical machine should project onto the screen is obtained from the homography matrix.
It can be understood that, in some embodiments, the screen-entry algorithm, based on the algorithm library (OpenCV), can identify and extract the largest black closed rectangular contour and judge whether it is of 16:9 size; a specific chart is projected and photographed with the camera, and multiple corner points in the photo are extracted to compute the homography between the projection surface (screen) and the chart played by the optical machine. The four screen vertices are converted through the homography into the optical-machine pixel coordinate system, and converting the optical-machine chart to the four screen vertices completes the computation and comparison.
A long-focus micro-projection television can be moved flexibly, so the projected picture may be distorted after each displacement; in addition, the projection surface may be occluded by foreign objects, or the projected picture may leave the screen abnormally. For the above problems, the projection device and the geometry-correction-based display control method provided by the present application can complete correction automatically, including realizing the functions of automatic keystone correction, automatic screen entry, automatic obstacle avoidance, automatic focusing and eye protection.
FIG. 20 is a schematic diagram of the lens structure of the projection device in other embodiments of the present application. To support the projection device's automatic focusing process, as shown in FIG. 20, the lens 300 of the projection device may further include an optical assembly 310 and a drive motor 320. The optical assembly 310 is a lens group composed of one or more lens pieces; it can refract the light emitted by the optical machine 200 so that the light emitted by the optical machine 200 can be transmitted onto the projection surface 400 to form the transmitted content image.
The optical assembly 310 may include a lens barrel and multiple lens pieces arranged in it. According to whether a lens piece's position can move, the lens pieces in the optical assembly 310 can be divided into a moving lens 311 and a fixed lens 312; by changing the position of the moving lens 311 and adjusting the distance between the moving lens 311 and the fixed lens 312, the overall focal length of the optical assembly 310 is changed. Therefore, the drive motor 320, by connecting to the moving lens 311 in the optical assembly 310, can drive the moving lens 311 to move its position, realizing the automatic focusing function.
It should be noted that the focusing process described in some embodiments of the present application refers to changing the position of the moving lens 311 through the drive motor 320, thereby adjusting the distance between the moving lens 311 and the fixed lens 312, i.e. adjusting the image-plane position. By the imaging principle of the lens combination in the optical assembly 310, the focal-length adjustment is in fact an image-distance adjustment; but with respect to the overall structure of the optical assembly 310, adjusting the position of the moving lens 311 is equivalent to adjusting the overall focal length of the optical assembly 310.
When the projection device is at different distances from the projection surface 400, the device's lens needs to be adjusted to different focal lengths to transmit a clear image onto the projection surface 400. During projection, the separation distance between the projection device and the projection surface 400 varies with the user's placement position, requiring different focal lengths; therefore, to adapt to different use scenarios, the projection device needs to adjust the focal length of the optical assembly 310.
FIG. 21 is a schematic diagram of the distance-sensor and camera structure in some embodiments. As shown in FIG. 21, the projection device may also have a built-in or external camera 700, which can photograph the picture projected by the projection device to obtain the projected content image. The projection device then performs sharpness detection on the projected content image to determine whether the current lens focal length is appropriate, and adjusts the focal length if it is not. When auto-focusing based on the projected content images captured by the camera 700, the projection device can continually adjust the lens position and photograph, finding the focusing position by comparing the sharpness of the pictures at the front and back positions, so as to move the moving lens 311 in the optical assembly to the appropriate position. For example, the controller 500 can first control the drive motor 320 to move the moving lens 311 gradually from the focusing start position to the focusing end position, during which it continually obtains projected content images through the camera 700; then, by performing sharpness detection on the multiple projected content images, it determines the position with the highest sharpness, and finally controls the drive motor 320 to move the moving lens 311 from the focusing end to the position with the highest sharpness, completing autofocus.
For the autofocus process, after obtaining the autofocus instruction, the controller of the projection device can respond to the instruction by obtaining the separation distance through the distance sensor 600. The distance sensor 600 may be a sensor device based on the time-of-flight (Time of Flight, TOF) principle capable of detecting the target distance, such as laser radar or infrared radar. The distance sensor 600 can be arranged at the position of the optical machine 200 and includes a signal transmitter and receiver. During separation-distance detection, the transmitter of the distance sensor 600 can emit a wireless signal toward the projection surface 400; after reaching the projection surface 400, the signal is reflected back to the receiver of the distance sensor 600, so the flight time of the signal can be computed from the time the transmitter emits the signal and the time the receiver receives it; combined with the propagation speed, the actual flight distance of the wireless signal is obtained, from which the separation distance between the projection surface 400 and the optical machine 200 is computed.
The image-correction triggering process provided by the embodiments of the present application is described below with reference to FIG. 22.
In some embodiments, FIG. 22 is a schematic diagram of a projection device triggering image correction in the embodiments of the present application. Referring to FIG. 22, the controller obtains the distance data at the power-on moment and the distance data at the historical power-off moment, where the power-on moment is the start of the projection device's current working session and the historical power-off moment is the end of the previous working session. From the distance data at the power-on moment and at the historical power-off moment, the distance change of the power-on moment relative to the historical power-off moment is computed. If the distance change is greater than the preset distance threshold, the optical machine 200 is controlled to project the correction-card image onto the projection surface, so that the area to be projected of the optical machine 200 is corrected according to the correction-card image captured by the camera; the area to be projected is used to present the playback content. In this way, by comparing the distance data of the current power-on moment with the distance data of the historical power-off moment, the distance change of the power-on moment relative to the historical power-off moment is obtained; when the distance change is greater than the preset distance threshold, it indicates that the user changed the projection device's placement after the historical power-off moment, and the controller therefore automatically triggers the image-correction process.
It should be noted that in the embodiments of the present application, the power-on moment is the start of the projection device's current working session and the historical power-off moment is the end of the previous working session; that is, the historical power-off moment is the most recent power-off moment before the current power-on moment. Thus, by comparing the distance data of the current power-on moment with those of the most recent power-off before the current power-on, it is ensured that powering the projection device off and on without moving it does not automatically trigger the image-correction process.
To trigger the image-correction process more precisely, before computing the distance change of the power-on moment relative to the historical power-off moment, the controller is further configured to: parse the power-on distance value from the distance data at the power-on moment, and parse the historical power-off distance value from the distance data at the historical power-off moment; if both the power-on distance value and the historical power-off distance value are within the effective distance range, perform the process of computing the distance change; the effective distance range is set according to the installation positions of the distance sensor 600 and the projection surface 400.
Exemplarily, before computing the distance change, the power-on distance value and the historical power-off distance value must be checked. For example, the effective distance range is 0-3.5 meters, the power-on distance value is 3 meters and the historical power-off distance value is 2.4 meters. In this case, the power-on distance value of 3 meters lies within the effective distance range, so the power-on distance value is valid; similarly, the historical power-off distance value of 2.4 meters also lies within the effective distance range, so the historical power-off distance value is valid. Conversely, if neither the power-on distance value nor the historical power-off distance value lies within the effective distance range, the controller re-triggers the process of obtaining distance data. It should be noted that the controller can perform the process of obtaining multiple sets of distance data at different moments, e.g. obtaining in real time the distance data of the current power-on moment, as well as the distance data before the historical power-off moment and/or the distance data at the moment image correction was most recently completed before the historical power-off, so as to determine more precisely from the multiple sets of distance data whether the projection device has really changed position.
In some embodiments, in the step of computing the distance change, the controller is further configured to: take the difference between the historical power-off distance value and the power-on distance value, and extract the absolute value of the difference to generate the distance difference; compute the ratio of the distance difference to the power-on distance value; and, if the ratio is greater than the preset ratio threshold, perform the subsequent process of projecting the correction card onto the projection surface 400.
With the preset ratio threshold at 0.05, and continuing the above example, the difference between the power-on distance value of 3 meters and the historical power-off distance value of 2.4 meters is 0.6 meters. The absolute value of the difference, i.e. the distance difference, is 0.6 meters, and the ratio of the distance difference to the power-on distance value is 0.6/3 = 0.2. Thus the ratio of 0.2 is greater than the preset ratio threshold of 0.05, indicating that the projection device's position at the current power-on moment has changed relative to its position at the historical power-off moment, and the controller automatically triggers the image-correction process. It should be noted that the embodiments of the present application place no numerical limitation on the preset ratio threshold; those skilled in the art can determine the value of the preset ratio threshold according to actual needs, e.g. 0.01, 0.02 or 0.03, none of which exceeds the protection scope of the embodiments of the present application.
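The whole power-on trigger test (validity window, absolute difference, ratio threshold) fits in a few lines; a minimal sketch using the 0-3.5 m window and 0.05 ratio from the example above:

```python
def should_recalibrate(d_on, d_off, valid=(0.0, 3.5), ratio_threshold=0.05):
    if not (valid[0] < d_on < valid[1] and valid[0] < d_off < valid[1]):
        return False                       # invalid reading: re-acquire distance data instead
    return abs(d_off - d_on) / d_on > ratio_threshold

print(should_recalibrate(3.0, 2.4))        # True: |2.4 - 3.0| / 3.0 = 0.2 > 0.05
```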
In some embodiments, the controller is configured to: obtain the distance data at the current moment and parse the current distance value from the distance data. If the current distance value is within the effective distance range, the data label corresponding to the distance data at the current moment is assigned the first state flag, which is used to indicate that the current distance value is a valid distance. If the current distance value is not within the effective distance range, the data label corresponding to the distance data at the current moment is assigned the second state flag, which is used to indicate that the current distance value is an invalid distance.
If the current distance value lies within the effective distance range, the data label corresponding to the distance data at the current moment is assigned the first state flag, e.g. systemproperty = 1. If the current distance value does not lie within the effective distance range, the data label corresponding to the distance data at the current moment is assigned the second state flag, e.g. systemproperty = -2. It should be noted that while the projection device is running, the controller can obtain the distance data in real time and assign values to the data labels corresponding to the distance data. The embodiments of the present application do not limit the moment of label assignment: for example, the projection device can assign the data label at the moment it powers on and automatically completes image correction, or at the moment it powers on and receives a user-input image-correction instruction. In this way, the controller can judge directly, from the value corresponding to the data label, whether the distance data lie within the effective distance range.
In some embodiments, while the controller executes the image-correction process, the projection device's state must be guaranteed to be static. Therefore, through the acceleration sensor configured on the projection device, the controller receives the monitoring data from the acceleration sensor to determine whether the projection device has moved. After determining that the projection device has moved, the controller can generate the image correction instruction and turn on the image correction function.
Usually, during the projection device's movement, the device obtains acceleration data in real time from the acceleration sensor (gyroscope sensor) and determines, from changes in the acceleration data, whether the device has moved. If the projection device has moved, the image correction function is started only once the device is in a static state.
Referring to FIG. 23, the controller is configured to: obtain the acceleration data at the power-on moment and the acceleration data at the historical power-off moment; compute the acceleration change data from the acceleration data at the power-on moment and at the historical power-off moment; and, if the acceleration change data is less than the preset acceleration threshold, perform the step of projecting the correction card onto the projection surface 400. Here, the power-on moment is the start of the projection device's current working session and the historical power-off moment is the end of the previous working session.
In the above process of judging whether the acceleration change data is less than the preset acceleration threshold, the controller is configured to: parse, from the acceleration change data, the movement angles of the projection device along the three axis directions; if the movement angles along all three axis directions are less than the preset acceleration threshold, determine that the acceleration change data is less than the preset acceleration threshold.
The controller monitors and receives in real time the acceleration data collected by the gyroscope sensor, i.e. the acceleration sensor, at the power-on moment and at the historical power-off moment. The acceleration data include the acceleration data on the three axes (X, Y and Z). Since the acceleration data represent the acceleration collected along the three axis directions, when the acceleration data on one of the axes change, the projection device has displaced along that axis direction; therefore, whether the projection device has moved can be determined by judging the data along the three axis directions. Next, the acceleration data of the three axis directions are converted into angular-velocity data, from which the projection device's movement angles along the three axis directions are determined.
In this way, the controller, by obtaining the acceleration data collected at the power-on moment and the acceleration data collected at the historical power-off moment, computes, from the acceleration change of the power-on moment relative to the historical power-off moment, the projection device's movement angles along the three axis directions. If the movement angles along all three axis directions are less than the preset acceleration threshold, the projection device is determined to be in a static state, and the optical machine 200 can be controlled to project the correction chart onto the projection surface 400, triggering the image-correction process. It should be noted that the embodiments of the present application place no numerical limitation on the preset acceleration threshold; those skilled in the art can determine the value of the preset acceleration threshold according to actual needs, e.g. 0.1, 0.2 or 0.3, none of which exceeds the protection scope of the embodiments of the present application.
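The per-axis stillness test reduces to one comparison per axis; a minimal sketch with the 0.1 example threshold:

```python
def is_static(angles_xyz, threshold=0.1):
    """Device counts as static only if the movement angle is small on all three axes."""
    return all(abs(a) < threshold for a in angles_xyz)

print(is_static((0.02, 0.05, 0.01)))   # True: the correction chart may be projected
print(is_static((0.02, 0.30, 0.01)))   # False: one axis moved too much
```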
It should be noted that, since the acceleration data are collected in real time by the gyroscope sensor and sent to the controller, the process of obtaining data at different moments, i.e. multiple sets of acceleration data, can be performed multiple times, so as to determine more precisely whether the projection device is really in a static state. The embodiments of the present application do not specifically limit the number of times the obtaining and judging processes are executed; they can be set multiple times according to the device state of the specific projection device, all within the protection scope of the embodiments of the present application.
In some implementations, the image-correction process specifically includes: the controller controls the optical machine 200 to project the correction card onto the projection surface 400, and determines the area to be projected in the projection surface 400 from the correction-card image photographed by the camera 700, where the process of determining the area to be projected is the process of correcting the projection area of the optical machine 200; finally, the playback content is projected into the area to be projected. The correction card includes the correction white card and the correction chart.
In some embodiments, in the step of correcting the area to be projected of the optical machine 200 according to the correction-card image photographed by the camera 700, the controller is further configured to: determine the image positions of the correction feature points in the correction-card image and obtain the projection relationship according to the image positions, where the projection relationship is the mapping relationship between the playback content projected by the optical machine 200 and the projection surface 400; finally, correct the area to be projected according to the projection relationship.
In the above step of obtaining the projection relationship according to the image positions, the controller is further configured to: obtain, according to the image positions, the camera coordinates of the correction feature points in the camera coordinate system; convert the camera coordinates into the optical-machine coordinates of the correction feature points in the optical-machine coordinate system; and obtain, from the optical-machine coordinates, the homography matrix of the projection surface 400 in the optical-machine coordinate system, the homography matrix being used to represent the projection relationship.
In some embodiments, after the optical machine 200 projects the correction chart onto the projection surface 400, the controller can control the camera 700 to photograph the correction chart and obtain the correction-card image, so as to obtain the positions of the correction feature points.
The controller can perform image recognition on the correction-card image to obtain the first positions of the correction feature points in the correction image. In the embodiments of the present application, the first position refers to the coordinate information of a correction feature point in the image coordinate system corresponding to the correction-card image.
The image coordinate system refers to a coordinate system whose origin is the image center and whose X and Y axes are parallel to the two sides of the image. The image coordinate system in the embodiments of the present application may be set as follows: for the projection area preset by the user, the center point of the projection area is set as the origin, the horizontal direction as the X axis and the vertical direction as the Y axis. The image coordinate system can be set in advance according to the preset projection area.
The coordinate information of the correction feature points in the correction image can be determined according to the image coordinate system.
In the embodiments of the present application, the camera coordinate system is specifically a spatial rectangular coordinate system established with the optical point of the camera 700 as the center, the optical axis as the Z axis and the plane parallel to the projection surface as the XOY plane.
In some embodiments, after obtaining the coordinate information of the correction feature points in the camera coordinate system of the camera 700, the controller can convert that coordinate information into the coordinates of the correction feature points in the optical-machine coordinate system.
In some implementations, the projection device can obtain the angular relation between the first correction coordinates and the second correction coordinates through a preset correction algorithm. The projection relationship refers to the relationship with which the projection device projects images onto the projection surface, specifically the mapping relationship between the playback content projected by the projection device's optical machine and the projection surface.
In the embodiments of the present application, the optical-machine coordinate system is specifically a spatial rectangular coordinate system established with the optical machine's optical point as the center, the optical axis as the Z axis and the plane parallel to the projection surface as the XOY plane. It should be noted that the optical-machine and camera coordinate systems are mutually convertible, so the coordinates of the correction feature points in the camera coordinate system can be converted into coordinates in the optical-machine coordinate system.
In some embodiments, after the coordinates of the correction feature points in the optical-machine coordinate system are obtained, the projection-plane equation of the projection surface in the optical-machine coordinate system can be determined.
It should be noted that the coordinate information of at least three points is needed before the position information of a plane can be determined. Therefore, the controller can obtain the first positions of at least three correction feature points in the correction-card image and determine, from the first positions of these correction feature points, the coordinate information in the camera coordinate system; the coordinate information in the camera coordinate system can further be converted into coordinate information in the optical-machine coordinate system.
After determining the coordinate information of at least three correction feature points in the optical-machine coordinate system, the controller can fit the coordinates of these correction feature points, thereby obtaining the homography matrix of the projection surface in the optical-machine coordinate system. In this way, the homography matrix, i.e. the transformation matrix, can represent the mapping relationship between the playback content projected by the optical machine 200 and the projection surface.
In some embodiments, while the projection device performs image correction, the camera 700 photographs images in real time. Meanwhile, if a person is detected entering the image, the projection device automatically triggers the eye-protection mode. In this way, to reduce the harm caused by the light source to human eyes, the projection device can change the image brightness by adjusting the light-source brightness, thereby realizing the effect of protecting human eyes.
With the above projection device provided in some embodiments of the present application, the controller obtains the distance data at the power-on moment and the distance data at the historical power-off moment, and computes, from the distance data at the power-on moment and at the historical power-off moment, the distance change of the power-on moment relative to the historical power-off moment. If the distance change is greater than the preset distance threshold, the optical machine 200 is controlled to project the correction-card image onto the projection surface, so that the area to be projected of the optical machine 200 is corrected according to the correction-card image photographed by the camera 700; the area to be projected is used to present the playback content. In this way, if the user has completed the correction process after powering the device on, and powers it on again with the environment unchanged, the correction process will not be triggered again. This avoids the situation in which, after the user powers the projection device on several times, the device still automatically triggers the correction process regardless of whether the environment has changed, improving the user experience.
In some embodiments, the present application also provides a correction-triggering method applied to a projection device, the projection device including a distance sensor 600, an optical machine 200, a camera 700 and a controller; the method specifically includes the following steps: obtaining the distance data at the power-on moment and the distance data at the historical power-off moment; computing, from the distance data at the power-on moment and at the historical power-off moment, the distance change of the power-on moment relative to the historical power-off moment; and, if the distance change is greater than the preset distance threshold, controlling the optical machine to project the correction-card image onto the projection surface, so that the area to be projected of the optical machine is corrected according to the correction-card image photographed by the camera, the area to be projected being used to present the playback content.
In some embodiments, the above method includes: parsing the power-on distance value from the distance data at the power-on moment, and parsing the historical power-off distance value from the distance data at the historical power-off moment; if both the power-on distance value and the historical power-off distance value are within the effective distance range, performing the process of computing the distance change; the effective distance range is set according to the installation positions of the distance sensor and the projection surface.
In some embodiments, the above method includes: in the step of computing the distance change, taking the difference between the historical power-off distance value and the power-on distance value, and extracting the absolute value of the difference to generate the distance difference; computing the ratio of the distance difference to the power-on distance value; and, if the ratio is greater than the preset ratio threshold, performing the process of projecting the correction card onto the projection surface.
In some embodiments, the above method includes: obtaining the distance data at the current moment; parsing the current distance value from the distance data; if the current distance value is within the effective distance range, assigning the data label corresponding to the distance data at the current moment to the first state flag, the first state flag being used to indicate that the current distance value is a valid distance; if the current distance value is not within the effective distance range, assigning the data label corresponding to the distance data at the current moment to the second state flag, the second state flag being used to indicate that the current distance value is an invalid distance.
In some embodiments, the projection device further includes an acceleration sensor configured to collect acceleration data; the method includes: before the step of controlling the optical machine to project the correction card onto the projection surface, obtaining the acceleration data at the power-on moment and the acceleration data at the historical power-off moment; computing the acceleration change data from the acceleration data at the power-on moment and at the historical power-off moment; and, if the acceleration change data is less than the preset acceleration threshold, performing the step of projecting the correction card onto the projection surface.
In some embodiments, the above method includes: parsing, from the acceleration change data, the movement angles of the projection device along the three axis directions; if the movement angles along all three axis directions are less than the preset acceleration threshold, determining that the acceleration change data is less than the preset acceleration threshold.
In some embodiments, the correction card contains correction feature points; the above method includes: in the step of correcting the area to be projected of the optical machine according to the correction-card image photographed by the camera, determining the image positions of the correction feature points in the correction-card image and obtaining the projection relationship according to the image positions, where the projection relationship is the mapping relationship between the playback content projected by the optical machine and the projection surface; and correcting the area to be projected according to the projection relationship.
In some embodiments, the above method includes: in the step of obtaining the projection relationship according to the image positions, obtaining, according to the image positions, the camera coordinates of the correction feature points in the camera coordinate system; converting the camera coordinates into the optical-machine coordinates of the correction feature points in the optical-machine coordinate system; and obtaining, from the optical-machine coordinates, the homography matrix of the projection surface in the optical-machine coordinate system, the homography matrix being used to represent the projection relationship.
It should be understood that the present application is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present application is limited only by the appended claims.
Claims (27)
- A projection device, comprising: an optical machine configured to project playback content onto a projection surface; a camera configured to photograph the image displayed on the projection surface; and a controller configured to: in response to an image correction instruction, control the optical machine to project a correction chart onto the projection surface and control the camera to photograph the correction chart to obtain a correction image, the correction chart containing correction feature points; determine the first positions of the correction feature points in the correction image and obtain a projection relationship according to the first positions, the projection relationship being the mapping relationship between the playback content projected by the optical machine and the projection surface; determine the target position information of the area to be projected in the projection surface according to the projection relationship; and, according to the target position information, control the optical machine to project the playback content to the area to be projected.
- The projection device according to claim 1, wherein the controller is configured to: obtain, according to the first positions, the first coordinates of the correction feature points in the camera coordinate system; convert the first coordinates into second coordinates of the correction feature points in the optical-machine coordinate system; obtain, according to the second coordinates, the projection-plane equation of the projection surface in the optical-machine coordinate system; and obtain, according to the projection-plane equation, the transformation matrix between the optical-machine coordinate system and the world coordinate system, the transformation matrix being used to represent the projection relationship.
- The projection device according to claim 2, wherein the controller is configured to: determine the first unit normal vector of the projection surface in the world coordinate system; obtain, according to the projection-plane equation, the second unit normal vector of the projection surface in the optical-machine coordinate system; and obtain the transformation matrix between the optical-machine coordinate system and the world coordinate system according to the first unit normal vector and the second unit normal vector.
- The projection device according to claim 2, wherein the controller is configured to: obtain the first positions of at least three correction feature points in the correction image; and, in the step of obtaining the projection-plane equation of the projection surface in the optical-machine coordinate system according to the second coordinates, fit the second coordinates of the at least three correction feature points to obtain the projection-plane equation of the projection surface in the optical-machine coordinate system.
- The projection device according to claim 4, wherein the correction chart comprises: a first correction region, second correction regions and third correction regions, the first correction region being located in the middle region of the correction chart, the third correction regions being located in the edge region of the correction chart, and the second correction regions being located between the first correction region and the third correction regions; the controller being configured to: recognize the first correction region in the correction image to obtain the first positions of at least three correction feature points in the first correction region; based on the first correction region not being recognized, recognize the second correction regions in the correction image to obtain the first positions of at least three correction feature points in the second correction regions; and, based on the second correction regions not being recognized, recognize the third correction regions in the correction image to obtain the first positions of at least three correction feature points in the third correction regions.
- The projection device according to claim 3, wherein the controller is configured to: determine the area to be projected in the projection surface; obtain the position information of the area to be projected in the world coordinate system, and convert it, according to the projection relationship, into position information in the optical-machine coordinate system, the position information in the optical-machine coordinate system being the target position information.
- The projection device according to claim 6, wherein the controller is further configured to: control the optical machine to project an obstacle-avoidance chart onto the projection surface, and control the camera to photograph the obstacle-avoidance chart to obtain an obstacle-avoidance image; the controller being further configured to: perform grayscale processing on the obstacle-avoidance image and count the number of pixels corresponding to each gray value; centered on the first gray value with the largest pixel count, search on each of the two sides for the nearest gray-value minimum; judge whether the gray-value minimum satisfies the preset condition; if so, retain it; if not, continue searching for the next nearest gray-value minimum; determine the first gray-value interval between the two gray-value minima on the two sides, and determine the region of all pixels corresponding to the first gray-value interval as the projection region; and obtain the maximum inscribed rectangle in the projection region as the area to be projected.
- The projection device according to claim 7, wherein the controller is configured to: before the grayscale processing of the obstacle-avoidance image, obtain the HSV values of all pixels in the obstacle-avoidance image; count, according to the HSV value of each pixel, the first-class pixels that do not conform to the preset HSV threshold; if the number of first-class pixels satisfies the preset first quantity condition, set the gray values of all first-class pixels to the preset value; if the condition is not satisfied, perform no processing.
- The projection device according to claim 7, wherein the controller is configured to: judge whether the number of all pixels corresponding to the first gray-value interval satisfies the preset second quantity condition; if not, determine the region of all pixels corresponding to the first gray-value interval as the projection region; if so, obtain, on the two sides of the first gray-value interval, the second gray value and the third gray value with the largest pixel counts respectively; obtain the second gray-value interval and the third gray-value interval corresponding to the second gray value and the third gray value respectively; and determine, among the three gray-value intervals, the region of all pixels corresponding to the gray-value interval with the largest pixel count as the projection region.
- The projection device according to claim 1, wherein the controller is further configured to: based on the projection picture obtained by the camera, identify the projection area of the projection device using an edge-detection algorithm; and, when the projection area is displayed as a rectangle or a quasi-rectangle, obtain through a preset algorithm the coordinate values of the four vertices of the rectangular projection area.
- The projection device according to claim 10, wherein the controller is further configured to: rectify the projection area into a rectangle using the perspective-transform method, and compute the difference between the rectangle and the projection screenshot, so as to judge whether a foreign object is present in the display area.
- The projection device according to claim 10, wherein the controller is further configured to: when realizing foreign-object detection in a certain area outside the projection range, difference the camera content of the current frame with the camera content of the previous frame to judge whether a foreign object has entered the area outside the projection range; and, if it is judged that a foreign object has entered, automatically trigger the eye-protection function.
- The projection device according to claim 10, wherein the controller is further configured to: detect real-time depth changes in the specific area using a time-of-flight camera or a time-of-flight sensor; and, if the depth-value change exceeds the preset threshold, automatically trigger the eye-protection function.
- The projection device according to claim 10, wherein the controller is further configured to: analyze the collected time-of-flight data, screenshot data and camera data to judge whether the eye-protection function needs to be turned on.
- The projection device according to claim 10, wherein the controller is further configured to: if a specific object is detected in the predetermined area, automatically start the eye-protection function to lower the intensity of the laser emitted by the optical machine, lower the display brightness of the user interface and display safety prompt information.
- The projection device according to claim 10, wherein the controller is further configured to: monitor device movement through a gyroscope or a gyroscope sensor; send to the gyroscope the signaling for querying the device state, and receive the gyroscope's feedback signaling used to determine whether the device has moved.
- The projection device according to claim 10, wherein the controller is further configured to: after the gyroscope data have been stable for a preset length of time, control the start of keystone correction, and not respond to instructions issued by remote-control keys while keystone correction is in progress.
- The projection device according to claim 10, wherein the controller is further configured to: identify the screen through the automatic obstacle-avoidance algorithm, and, using the projection transformation, correct the projected picture to display within the screen, achieving alignment with the screen edges.
- The projection device according to claim 1, further comprising: a distance sensor configured to collect distance data; the controller being configured to: obtain the distance data at the power-on moment and the distance data at the historical power-off moment; compute, from the distance data at the power-on moment and at the historical power-off moment, the distance change of the power-on moment relative to the historical power-off moment; and, if the distance change is greater than the preset distance threshold, control the optical machine to project the correction-card image onto the projection surface, so as to correct the area to be projected of the optical machine according to the correction-card image photographed by the camera, the area to be projected being used to present the playback content, wherein the correction card comprises a correction white card and a correction chart.
- The projection device according to claim 19, wherein the controller is further configured to: parse the power-on distance value from the distance data at the power-on moment, and parse the historical power-off distance value from the distance data at the historical power-off moment; if both the power-on distance value and the historical power-off distance value are within the effective distance range, perform the process of computing the distance change; the effective distance range being set according to the installation positions of the distance sensor and the projection surface.
- The projection device according to claim 20, wherein the controller is further configured to: take the difference between the historical power-off distance value and the power-on distance value, and extract the absolute value of the difference to generate the distance difference; compute the ratio of the distance difference to the power-on distance value; and, if the ratio is greater than the preset ratio threshold, perform the process of projecting the correction card onto the projection surface.
- The projection device according to claim 20, wherein the controller is further configured to: obtain the distance data at the current moment; parse the current distance value from the distance data; if the current distance value is within the effective distance range, assign the data label corresponding to the distance data at the current moment to the first state flag, the first state flag being used to indicate that the current distance value is a valid distance; if the current distance value is not within the effective distance range, assign the data label corresponding to the distance data at the current moment to the second state flag, the second state flag being used to indicate that the current distance value is an invalid distance.
- The projection device according to claim 19, further comprising an acceleration sensor configured to collect acceleration data; the controller being configured to: before controlling the optical machine to project the correction card onto the projection surface, obtain the acceleration data at the power-on moment and the acceleration data at the historical power-off moment; compute the acceleration change data from the acceleration data at the power-on moment and at the historical power-off moment; and, if the acceleration change data is less than the preset acceleration threshold, perform the step of projecting the correction card onto the projection surface.
- The projection device according to claim 23, wherein the controller is further configured to: parse, from the acceleration change data, the movement angles of the projection device along the three axis directions; and, if the movement angles along all three axis directions are less than the preset acceleration threshold, determine that the acceleration change data is less than the preset acceleration threshold.
- The projection device according to claim 19, wherein the correction card contains correction feature points; the controller being further configured to: in the step of correcting the area to be projected of the optical machine according to the correction-card image photographed by the camera, determine the image positions of the correction feature points in the correction-card image and obtain the projection relationship according to the image positions, the projection relationship being the mapping relationship between the playback content projected by the optical machine and the projection surface; and correct the area to be projected according to the projection relationship.
- The projection device according to claim 25, wherein the controller is further configured to: in the step of obtaining the projection relationship according to the image positions, obtain, according to the image positions, the camera coordinates of the correction feature points in the camera coordinate system; convert the camera coordinates into the optical-machine coordinates of the correction feature points in the optical-machine coordinate system; and obtain, from the optical-machine coordinates, the homography matrix of the projection surface in the optical-machine coordinate system, the homography matrix being used to represent the projection relationship.
- An image correction method for a projection device, the method comprising: in response to an image correction instruction, controlling the optical machine to project a correction chart onto the projection surface, and controlling the camera to photograph the correction chart to obtain a correction image, the correction chart containing correction feature points; determining the first positions of the correction feature points in the correction image and obtaining a projection relationship according to the first positions, the projection relationship being the mapping relationship between the playback content projected by the optical machine and the projection surface; determining the target position information of the area to be projected in the projection surface according to the projection relationship; and, according to the target position information, controlling the optical machine to project the playback content to the area to be projected.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202280087039.4A (CN118541967A) | 2021-11-16 | 2022-09-29 | Projection device and correction method
Applications Claiming Priority (6)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111355866.0 | 2021-11-16 | |
CN202111355866 | 2021-11-16 | |
CN202210325721.4A (CN114885136B) | 2021-11-16 | 2022-03-29 | Projection device and image correction method
CN202210325721.4 | 2022-03-29 | |
CN202210570426.5 | 2022-05-24 | |
CN202210570426.5A (CN114760454A) | 2022-05-24 | 2022-05-24 | Projection device and correction triggering method
Publications (1)

Publication Number | Publication Date
---|---
WO2023087947A1 | 2023-05-25
Family

ID=86396216

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
PCT/CN2022/122670 (WO2023087947A1) | Projection device and correction method | 2021-11-16 | 2022-09-29
Country Status (2)

Country | Link
---|---
CN | CN118541967A
WO | WO2023087947A1
Patent Citations (6)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN110111262A | 2019-03-29 | 2019-08-09 | 北京小鸟听听科技有限公司 | Projector distortion correction method and device, and projector
CN110336987A | 2019-04-03 | 2019-10-15 | 北京小鸟听听科技有限公司 | Projector distortion correction method and device, and projector
CN112995625A | 2021-02-23 | 2021-06-18 | 峰米(北京)科技有限公司 | Keystone correction method and device for a projector
CN112995624A | 2021-02-23 | 2021-06-18 | 峰米(北京)科技有限公司 | Keystone error correction method and device for a projector
CN114760454A | 2022-05-24 | 2022-07-15 | 海信视像科技股份有限公司 | Projection device and correction triggering method
CN114885136A | 2021-11-16 | 2022-08-09 | 海信视像科技股份有限公司 | Projection device and image correction method
Cited By (7)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116485918A | 2023-06-25 | 2023-07-25 | 天府兴隆湖实验室 | Calibration method, system and computer-readable storage medium
CN116485918B | 2023-06-25 | 2023-09-08 | 天府兴隆湖实验室 | Calibration method, system and computer-readable storage medium
CN117102375A | 2023-10-18 | 2023-11-24 | 沈阳欧施盾新材料科技有限公司 | Temperature-imaging-based necking control method and device for special-shaped parts
CN117102375B | 2023-10-18 | 2024-01-02 | 沈阳欧施盾新材料科技有限公司 | Temperature-imaging-based necking control method and device for special-shaped parts
CN117412019A | 2023-12-14 | 2024-01-16 | 深圳市欧冠微电子科技有限公司 | Aerial imaging and virtual-reality dynamic backlight determination method and device, and electronic device
CN117560478A | 2024-01-12 | 2024-02-13 | 深圳市橙子数字科技有限公司 | Projection alignment method and device for a borderless screen
CN117560478B | 2024-01-12 | 2024-04-19 | 深圳市橙子数字科技有限公司 | Projection alignment method and device for a borderless screen
Also Published As

Publication number | Publication date
---|---
CN118541967A | 2024-08-23