WO2024066776A9 - Projection device and projection image processing method - Google Patents

Projection device and projection image processing method

Info

Publication number
WO2024066776A9
WO2024066776A9 (PCT/CN2023/113259)
Authority
WO
WIPO (PCT)
Prior art keywords
projection
area
image
coordinates
target
Prior art date
Application number
PCT/CN2023/113259
Other languages
English (en)
French (fr)
Other versions
WO2024066776A1 (zh)
Inventor
王昊
岳国华
何营昊
吴燕丽
Original Assignee
海信视像科技股份有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202211203032.2A (published as CN115604445A)
Priority claimed from CN202211195978.9A (published as CN115623181A)
Priority claimed from CN202211212932.3A (published as CN115883803A)
Application filed by 海信视像科技股份有限公司
Publication of WO2024066776A1
Publication of WO2024066776A9


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Definitions

  • the present application relates to the technical field of display devices, and in particular to a projection device and a projection image processing method.
  • a projection device is a display device that can project images or videos onto a screen.
  • the projection device can project laser light of a specific color onto the screen through the refraction of an optical lens assembly to form a specific image.
  • the displayed image is then presented to the user at a larger scale, improving the user's viewing experience.
  • the projection device needs to be kept at a certain distance from the screen so that the image formed on the screen can meet the focal length range of the optical lens assembly to obtain a clear image.
  • the projection effect may be affected, reducing the user experience.
  • An embodiment of the present application provides a projection device, comprising: a light emitting component, configured to project projection content onto a projection surface; a camera, configured to capture a sample image; and a controller, configured to: in response to an obstacle avoidance instruction, obtain a transformation matrix and obtain an adjustment instruction input by a user, wherein the transformation matrix is a homography matrix of coordinates between the camera and the light emitting component, and the adjustment instruction includes a first obstacle target specified by the user; identify the first obstacle target in a sample image, wherein the sample image is an image captured by the camera when the light emitting component projects a correction image; define a projectable area in the sample image according to the first obstacle target, wherein the projectable area is the largest rectangular area with a preset aspect ratio that can be accommodated in the area of the sample image other than the first obstacle target; and determine a target projection area according to the projectable area and the transformation matrix, and control the light emitting component to project the projection content onto the target projection area.
  • An embodiment of the present application also provides a projection image processing method, which is applied to a projection device, wherein the projection device includes a light emitting component, a camera and a controller; the projection image processing method includes: in response to an obstacle avoidance instruction, obtaining a transformation matrix and obtaining an adjustment instruction input by a user, wherein the transformation matrix is a homography matrix of coordinates between the camera and the light emitting component, and the adjustment instruction includes a first obstacle target specified by the user; identifying the first obstacle target in a sampled image, wherein the sampled image is an image captured by the camera when the light emitting component projects a correction image; defining a projectable area in the sampled image according to the first obstacle target, wherein the projectable area is the largest rectangular area with a preset aspect ratio that can be accommodated in the area of the sampled image other than the first obstacle target; and determining a target projection area according to the projectable area and the transformation matrix, and controlling the light emitting component to project the projection content to the target projection area.
  • An embodiment of the present application also provides a projection device, including: a light emitting component, configured to project projection content onto a projection surface; a camera, configured to capture a sampled image; a controller, configured to: in response to a projection picture correction instruction, control the light emitting component to project a correction image, wherein the correction image includes a pure color image card and a feature image card; obtain a first sampling image obtained by the camera capturing the pure color image card, and a second sampling image obtained by the camera capturing the feature image card; determine a characteristic contour area according to the first sampling image, wherein the characteristic contour area is a contour area with the largest area in the first sampling image or a contour area specified by a user; extract feature points in the second sampling image according to the characteristic contour area; calculate an angle between the projection surface and the light emitting component based on the feature points, and control the light emitting component to project the projection content onto the projection surface according to the angle.
  • The embodiment of the present application also provides a projection image processing method, which is applied to a projection device, wherein the projection device includes a light emitting component, a camera and a controller; the projection image processing method includes: in response to a projection image correction instruction, controlling the light emitting component to project a correction image, wherein the correction image includes a pure color image card and a feature image card; obtaining a first sampling image obtained by the camera taking a photo of the pure color image card, and a second sampling image obtained by the camera taking a photo of the feature image card; determining a characteristic contour area according to the first sampling image, wherein the characteristic contour area is a contour area with the largest area in the first sampling image or a contour area specified by a user; extracting feature points in the second sampling image according to the characteristic contour area; and calculating the angle between the projection surface and the light emitting component based on the feature points, and controlling the light emitting component to project the projection content onto the projection surface according to the angle.
  • An embodiment of the present application also provides a projection device, including: a light emitting component, configured to project projection content onto a projection surface; a controller, configured to: obtain a screen movement instruction input by a user, the screen movement instruction including a moving direction and a moving distance; in response to the screen movement instruction, obtain the vertex coordinates of the current light emitting surface and the rotation matrix between the projection surface and the light emitting surface; calculate the current projection area according to the rotation matrix and the vertex coordinates of the current light emitting surface; calculate the vertex movement distance according to the vertex coordinates of the current projection area, the moving direction and the moving distance; calculate the target projection coordinates according to the vertex movement distance and the vertex coordinates of the current projection area; based on the rotation matrix, convert the target projection coordinates to the light emitting surface to obtain the light emitting projection coordinates, and control the light emitting component to project the projection content onto the projection surface according to the light emitting projection coordinates.
  • An embodiment of the present application also provides a projection image processing method, which is applied to a projection device, wherein the projection device includes a light emitting component and a controller; the projection image processing method includes: obtaining an image movement instruction input by a user, wherein the image movement instruction includes a moving direction and a moving distance; in response to the image movement instruction, obtaining the vertex coordinates of the current light emitting surface and the rotation matrix between the projection surface and the light emitting surface; calculating the current projection area according to the rotation matrix and the vertex coordinates of the current light emitting surface; calculating the vertex movement distance according to the vertex coordinates of the current projection area, the moving direction and the moving distance; calculating the target projection coordinates according to the vertex movement distance and the vertex coordinates of the current projection area; based on the rotation matrix, converting the target projection coordinates to the light emitting surface to obtain the light emitting projection coordinates, and controlling the light emitting component to project the projection content onto the projection surface according to the light emitting projection coordinates.
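The picture-movement flow above can be made more concrete with a minimal sketch. It assumes the vertices are given as 3D points, the rotation matrix R maps light-emitting-surface coordinates onto a projection-surface coordinate system whose XY plane lies on the surface, and the function below is an illustration of the listed steps rather than the patented implementation.

```python
import numpy as np

def move_projection(dmd_vertices, R, direction, distance):
    """Illustrative sketch of the picture-movement steps described above.

    dmd_vertices: (4, 3) array, vertex coordinates of the current light emitting (DMD) surface.
    R: (3, 3) rotation matrix between the light emitting surface and the projection surface.
    direction: length-2 unit vector, the moving direction on the projection surface.
    distance: scalar moving distance, in the same unit as the coordinates.
    """
    dmd_vertices = np.asarray(dmd_vertices, dtype=float)

    # Current projection area: map the DMD vertices onto the projection surface.
    current_projection = dmd_vertices @ R.T

    # Vertex movement distance: the same in-plane offset applied to every vertex.
    offset = np.append(np.asarray(direction, dtype=float) * distance, 0.0)

    # Target projection coordinates on the projection surface.
    target_projection = current_projection + offset

    # Convert back to the light emitting surface to obtain the light emitting projection coordinates.
    light_emitting_coords = target_projection @ np.linalg.inv(R).T
    return target_projection, light_emitting_coords
```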
  • FIG. 1 is a schematic diagram of a projection state of a projection device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of the structure of a projection device according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a circuit structure of a projection device according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an optical path of a projection device according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of the structure of a lens according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the structure of a distance sensor and a camera according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a system framework of a projection device according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a projection device performing automatic obstacle avoidance according to an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a projection device performing automatic obstacle avoidance according to a user instruction according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of performing obstacle avoidance according to a user instruction according to an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a projection device projecting a pure color chart and a feature chart according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a projection device dividing a projection area according to a first obstacle target according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a projection device inserting a picture card into a media asset data stream according to an embodiment of the present application.
  • FIG. 14 is a schematic diagram of steps performed after a projection device inserts a picture card into a media asset data stream according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a projection device recognizing a voice command input by a user according to an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a projection device determining whether an adjustment instruction input by a user meets a standard range according to an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a projection device identifying a second obstacle target according to an embodiment of the present application.
  • FIG. 18 is a flowchart of a method for processing a projection image according to an embodiment of the present application.
  • FIG. 19 is a schematic diagram of imaging of a projection device when a projection surface is tilted according to an embodiment of the present application.
  • FIG. 20 is a schematic diagram of imaging when there is an obstacle between the projection device and the projection surface according to an embodiment of the present application.
  • FIG. 21 is a schematic diagram of a projection device performing image correction according to an embodiment of the present application.
  • FIG. 22 is a schematic diagram of a first sample image taken by a camera according to an embodiment of the present application.
  • FIG. 23 is a schematic diagram of a second sample image taken by a camera according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of extracting a candidate contour region in a second sample image according to an embodiment of the present application.
  • FIG. 25 is a schematic diagram of a projection of a light emitting component when a projection surface has a concave-convex situation according to an embodiment of the present application.
  • FIG. 26 is a schematic diagram of coordinate system transformation of a camera and a light emitting component according to an embodiment of the present application.
  • FIG. 27 is a schematic diagram of defining a target projection area according to a projection picture according to an embodiment of the present application.
  • FIG. 28 is a schematic diagram of demarcating a target projection area according to an obstacle avoidance instruction according to an embodiment of the present application.
  • FIG. 29 is a second flowchart of the projection image processing method according to an embodiment of the present application.
  • FIG. 30 is a schematic diagram of the projection coordinate movement of the DMD plane according to an embodiment of the present application.
  • FIG. 31 is a schematic diagram of the movement of projection coordinates of a DMD plane when a projection device performs front projection according to an embodiment of the present application.
  • FIG. 32 is a schematic diagram of the movement of projection coordinates of the DMD plane when the projection device performs side projection according to an embodiment of the present application.
  • FIG. 33 is a schematic diagram of executing screen movement according to an embodiment of the present application.
  • FIG. 34 is a schematic diagram of a process of obtaining a rotation matrix according to an embodiment of the present application.
  • FIG. 35 is a schematic diagram of a current projection area according to an embodiment of the present application.
  • FIG. 36 is a schematic diagram of a process of calculating a vertex moving distance according to an embodiment of the present application.
  • FIG. 37 is a schematic diagram of a process for calculating target projection coordinates according to an embodiment of the present application.
  • FIG. 38 is a schematic diagram of a process of determining whether a maximum projection area includes target projection coordinates according to an embodiment of the present application.
  • FIG. 39 is a schematic diagram of edge length vectors and connection vectors according to an embodiment of the present application.
  • FIG. 40 is a schematic diagram of a projection screen after movement according to an embodiment of the present application.
  • FIG. 41 is a third flowchart of the projection image processing method according to an embodiment of the present application.
  • a projector is a device that can project images or videos onto a screen. It can be connected to computers, broadcasting networks, the Internet, video compact discs (VCD), digital video discs (DVD), game consoles, digital video cameras (DV) and other devices through different interfaces to play corresponding video signals. Projectors are widely used in homes, offices, schools and entertainment venues.
  • FIG. 1 is a schematic diagram of a projection state of a projection device according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of the structure of a projection device according to an embodiment of the present application.
  • the projection surface 1 is fixed at a first position, and the projection surface 1 can be a projection screen or a wall.
  • the projection device 2 is placed at a second position so that the image it projects matches the projection surface 1.
  • the projection device 2 includes a laser light source 100, an optical machine 200, and a lens 300, and the projection image is projected onto a projection medium 400.
  • the laser light source 100 provides illumination for the optical machine 200
  • The optical machine 200 modulates the light source beam and outputs it to the lens 300 for imaging, and projects it onto the projection medium 400 to form a projection image. Since the laser light source 100, the optical machine 200, and the lens 300 are used together to emit projection light to project a projection image, the laser light source 100, the optical machine 200, and the lens 300 are collectively referred to as a light emitting component 21.
  • the laser light source 100 includes a laser assembly and an optical lens assembly.
  • the light beam emitted by the laser assembly can pass through the optical lens assembly to provide lighting for the optical machine.
  • the optical lens assembly requires a higher level of environmental cleanliness and airtightness; while the chamber in which the laser assembly is installed can be sealed with a lower level of dustproofness to reduce the sealing cost.
  • the light emitting component of the projector may also be implemented by an LED light source.
  • the optical engine 200 may be implemented to include a blue optical engine, a green optical engine, a red optical engine, and may also include a heat dissipation system, a circuit control system, and the like.
  • FIG. 3 is a schematic diagram of a circuit structure of a projection device according to an embodiment of the present application.
  • the projection device may include a display control circuit 10, a laser light source 20, at least one laser driving component 30, and at least one brightness sensor 40, and the laser light source 20 may include at least one laser corresponding to the at least one laser driving component 30.
  • Here, "at least one" refers to one or more, and "a plurality" refers to two or more.
  • the projection device can achieve adaptive adjustment. For example, by setting a brightness sensor 40 in the light output path of the laser light source 20, the brightness sensor 40 can detect a first brightness value of the laser light source 20 and send the first brightness value to the display control circuit 10.
  • the display control circuit 10 can obtain the second brightness value corresponding to the driving current of each laser, and when it is determined that the difference between the second brightness value of the laser and the first brightness value of the laser is greater than the difference threshold, it is determined that the laser has a catastrophic optical damage (COD) fault; the display control circuit 10 can adjust the current control signal of the laser driving component corresponding to the laser until the difference is less than or equal to the difference threshold, thereby eliminating the COD fault of the laser.
  • the projection device can eliminate the COD fault of the laser in time, reduce the damage rate of the laser, and improve the image display effect of the projection device.
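The brightness-comparison loop described above can be sketched as follows. The helper callbacks read_brightness_sensor() and expected_brightness() are hypothetical stand-ins for the sensor read-out and the current-to-brightness lookup; this is an illustration of the idea, not the patented control logic.

```python
def eliminate_cod_fault(drive_current, diff_threshold, current_step, min_current,
                        read_brightness_sensor, expected_brightness):
    """Compare the expected (second) brightness with the measured (first) brightness
    and lower the laser drive current until the difference is within the threshold."""
    first = read_brightness_sensor()              # first brightness value from the sensor
    second = expected_brightness(drive_current)   # second brightness value for this drive current
    while second - first > diff_threshold and drive_current > min_current:
        drive_current -= current_step             # adjust the current control signal downward
        second = expected_brightness(drive_current)
        first = read_brightness_sensor()
    return drive_current
```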
  • FIG. 4 is a schematic diagram of a light path of a projection device according to an embodiment of the present application.
  • the laser light source 20 may include a blue laser 201, a red laser 202, and a green laser 203 that are independently arranged.
  • the projection device may also be referred to as a three-color projection device.
  • The blue laser 201, the red laser 202, and the green laser 203 are all small lasers (Multi-chip LD, MCL for short), which are small in size and conducive to the compact arrangement of optical paths.
  • the laser light source further includes an optical component 210, which is used to combine the lasers emitted by the blue laser 201, the red laser 202, and the green laser 203, and to shape and homogenize them, and finally input a light beam that meets the incident requirements into the optical machine.
  • the projection device may be configured with a controller, or the projection device may be connected to a controller.
  • the controller includes at least one of a central processing unit (CPU), a video processor, an audio processor, a graphics processing unit (GPU), a random access memory (RAM), a read-only memory (ROM), a first interface to an nth interface for input/output, a communication bus, etc.
  • the projection device may be equipped with a camera, or the projection device may be connected to a camera for coordinated operation with the projection device to achieve adjustment and control of the projection process.
  • the camera may be specifically implemented as a 3D camera or a binocular camera; when the camera is implemented as a binocular camera, it specifically includes a left camera and a right camera; the binocular camera may obtain the projection screen corresponding to the projection device, that is, the image and playback content presented by the projection surface, and the image or playback content is projected by the built-in optical machine of the projection device.
  • The camera can be used to capture the image displayed on the projection surface.
  • the camera can include a lens assembly, in which a photosensitive element and a lens are provided.
  • the lens refracts light through a plurality of lenses, so that the light of the image of the scene can be irradiated on the photosensitive element.
  • The photosensitive element can be selected based on the detection principle of a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) according to the specifications of the camera; it converts the light signal into an electrical signal through a photosensitive material and outputs the converted electrical signal as image data.
  • FIG. 5 is a schematic structural diagram of a lens according to an embodiment of the present application.
  • the lens 300 may further include a lens assembly 310 and a drive motor 320.
  • the lens assembly 310 is a lens group composed of one or more lenses, which can refract the light emitted by the optical machine 200 so that the light emitted by the optical machine 200 can be transmitted to the projection surface to form a transmission content image.
  • the lens assembly 310 may include a lens barrel and a plurality of lenses disposed in the lens barrel. Depending on whether the lens position can be moved, the lens in the lens assembly 310 can be divided into a movable lens 311 and a fixed lens 312. By changing the position of the movable lens 311 and adjusting the distance between the movable lens 311 and the fixed lens 312, the overall focal length of the lens assembly 310 can be changed. Therefore, the driving motor 320 can drive the movable lens 311 to move its position by connecting to the movable lens 311 in the lens assembly 310, thereby realizing an automatic focusing function.
  • The focusing process refers to changing the position of the movable lens 311 by means of the drive motor 320, thereby adjusting the distance between the movable lens 311 and the fixed lens 312, that is, adjusting the image plane position. Therefore, according to the imaging principle of the lens combination in the lens assembly 310, adjusting the focal length is actually adjusting the image distance.
  • adjusting the position of the movable lens 311 is equivalent to adjusting the overall focal length of the lens assembly 310.
  • the lens of the projection device 2 needs to adjust different focal lengths to project a clear image on the projection surface.
  • the distance between the projection device 2 and the projection surface 1 will require different focal lengths due to the different placement positions of the user. Therefore, in order to adapt to different usage scenarios, the projection device 2 needs to adjust the focal length of the lens assembly 310.
  • FIG. 6 is a schematic diagram of the structure of a distance sensor and a camera according to an embodiment of the present application.
  • the projection device 2 may also have a built-in or external camera 22, and the camera 22 may capture the image projected by the projection device 2 to obtain the projection content image. The projection device 2 then performs a clarity detection on the projection content image to determine whether the current lens focal length is appropriate, and adjusts the focal length if it is not appropriate.
  • the projection device 2 may continuously adjust the lens position and take pictures, and find the focusing position by comparing the clarity of the front and rear position pictures, thereby adjusting the movable lens 311 to a suitable position.
  • the controller 23 may first control the drive motor 320 to gradually move the focus starting position of the movable lens 311 to the focus end position, and continuously obtain the projection content image through the camera 22 during this period. Then, by performing a clarity detection on multiple projection content images, the position with the highest clarity is determined, and finally the drive motor 320 is controlled to adjust the movable lens 311 from the focus terminal to the position with the highest clarity, thereby completing the automatic focusing.
  • Based on the image taken by the camera, the controller can couple the angle between the optical machine and the projection surface with the correct display of the projected image, thereby achieving automatic trapezoidal (keystone) correction.
  • FIG. 7 is a schematic diagram of a system framework of a projection device according to an embodiment of the present application.
  • The projection device has the characteristics of a telephoto micro-projector, and its controller can control the display of the projected image through preset algorithms to achieve functions such as automatic trapezoidal correction of the display screen, automatic screen entry, automatic obstacle avoidance, automatic focusing, and eye protection.
  • the projection device is configured with a gyroscope sensor; when the device is moving, the gyroscope sensor can sense the position movement and actively collect movement data; the collected data is then sent to the application service layer through the system framework layer to support the application data required during user interface interaction and application interaction.
  • the collected data can also be used for data calls by the controller in the algorithm service implementation.
  • the projection device is configured with a time-of-flight sensor. After the time-of-flight sensor collects corresponding data, it is sent to the corresponding time-of-flight service of the service layer. After the above-mentioned time-of-flight service obtains the data, it sends the collected data to the application service layer through the process communication framework.
  • the data will be used for interactive use such as controller data calls, user interfaces, and program applications.
  • the projection device is configured with a camera for capturing images, which may be a binocular camera, a depth camera, a 3D camera, etc.; the camera capture data will be sent to the camera service, and then the camera service will send the captured image data to the process communication framework, and/or the projection device correction service; the projection device correction service can receive the camera capture data sent by the camera service, and the controller can call the corresponding control algorithm in the algorithm library according to the different functions to be implemented.
  • data is interacted with the application service through a process communication framework, and then the calculation results are fed back to the correction service through the process communication framework; the correction service sends the obtained calculation results to the projection device operating system to generate control signals, and sends the control signals to the optical machine control driver to control the optical machine working conditions and realize automatic correction of the displayed image.
  • the projection device 2 can correct the projected image.
  • an association relationship between the distance, the horizontal angle, and the offset angle can be created in advance.
  • the controller 23 in the projection device 2 obtains the current distance from the light emitting component to the projection surface, and determines the angle between the light emitting component 21 and the projection surface at this moment in combination with the associated relationship to achieve projection image correction.
  • the above-mentioned angle is specifically the angle between the central axis of the light emitting component 21 and the projection surface.
  • After the projection device 2 automatically completes the calibration and refocusing, it detects whether the automatic focus function is turned on; when the automatic focus function is not turned on, the automatic focus service is terminated; when the automatic focus function is turned on, the projection device 2 will obtain the detection distance of the time-of-flight sensor through the middleware for calculation.
  • the projection device uses an automatic focusing algorithm and a laser distance measurement configured therein to obtain the current object distance to calculate the initial focal length and the search range; the projection device then drives the camera to take a picture and uses a corresponding algorithm to perform a clarity evaluation.
  • the projection device searches for the best possible focal length within the above search range based on the search algorithm, and then repeats the above steps of taking pictures and evaluating clarity, and finally finds the optimal focal length through clarity comparison to complete automatic focusing.
  • After the projection device is started and the user moves the device, the projection device automatically completes the calibration and refocuses, and the controller will detect whether the autofocus function is turned on; when the autofocus function is not turned on, the controller will end the autofocus service; when the autofocus function is turned on, the projection device will obtain the detection distance of the time-of-flight sensor through the middleware for calculation.
  • the controller queries the preset mapping table based on the acquired distance to obtain the focal length of the projection device; then the middleware sets the acquired focal length to the light-emitting component of the projection device; after the light-emitting component emits laser light at the above focal length, the camera will execute the photo command; the controller determines whether the focus of the projection device is completed based on the acquired captured image and evaluation function.
  • If the judgment result meets the preset completion conditions, the automatic focusing process ends; if the judgment result does not meet the preset completion conditions, the middleware will fine-tune the focal length parameters of the light emitting component of the projection device, for example, the focal length can be gradually fine-tuned by a preset step length, and the adjusted focal length parameters will be set to the light emitting component again; the photo taking and clarity evaluation steps are thereby repeated, and the optimal focal length is finally found through clarity comparison to complete automatic focusing.
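A minimal sketch of this photograph-and-evaluate loop is given below. The variance of the Laplacian is used here as one common clarity metric (the description only requires some clarity evaluation), and set_focus()/capture() are hypothetical callbacks standing in for the projector middleware.

```python
import cv2
import numpy as np

def clarity(image_bgr):
    """One common clarity metric: variance of the Laplacian of the grayscale image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def autofocus(set_focus, capture, search_start, search_end, step):
    """Scan the focal-length search range, photograph the projected content at each
    position, and keep the position with the highest clarity score."""
    best_focus, best_score = search_start, -np.inf
    focus = search_start
    while focus <= search_end:
        set_focus(focus)                  # set the focal length of the light emitting component
        score = clarity(capture())        # take a picture and evaluate its clarity
        if score > best_score:
            best_focus, best_score = focus, score
        focus += step                     # fine-tune by a preset step length
    set_focus(best_focus)
    return best_focus
```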
  • the projection device 2 when the projection device 2 projects media data onto the wall (projection surface 1), it recognizes that there is a control switch (obstacle 3) on the wall. After recognizing the control switch, the projection device 2 determines that the control switch is obstacle 3, which will affect the projection effect, and therefore activates the automatic obstacle avoidance function. After the automatic obstacle avoidance function is executed, the projection device 2 avoids the control switch (obstacle 3) and re-divides the projection area on the wall (projection surface 1) and projects the media data onto the projection area. However, it can be seen that the area of the re-divided projection area is smaller than the projection area before the re-division.
  • the projection device 2 can select multiple projection ratios according to the divided projection area to achieve the best projection effect, and the multiple projection ratios can also meet the various needs of users.
  • When automatic obstacle avoidance causes the projection area to become smaller, the richness of the ratio selection when projecting media data is directly reduced, and the optimal projection effect cannot be achieved.
  • the present application provides a projection device, including a light emitting component 21, a camera 22 and a controller 23.
  • the light emitting component 21 is used to project the projection content to the projection surface
  • the camera 22 is used to capture the sampled image.
  • In addition, user input instructions regarding the obstacle information are obtained, and these instructions are taken into account to determine whether it is necessary to avoid the obstacle in the projection area.
  • the controller 23 is configured as follows:
  • the transformation matrix is the homography matrix of the coordinates between the camera 22 and the light emitting component 21, and is used for the coordinate conversion between the camera coordinate system and the light emitting component coordinate system of the target at the same position in the projection area.
  • The camera 22 is first required to capture an image of the projection area to confirm the position of the obstacle 3. Then the controller 23 controls the light emitting component 21 to automatically avoid the obstacle according to the position of the obstacle and the projection area required by the projected media data.
  • the obstacle 3 is in the camera's coordinate system, and its coordinates are determined according to the camera's coordinate system.
  • When the light emitting component 21 projects media data to the projection area, the coordinates of the distribution of the media data on the projection area are determined according to the light emitting component coordinate system. Therefore, the coordinates of the position of the obstacle 3 in the projection area in the camera coordinate system need to be converted into coordinates in the light emitting component coordinate system.
  • the controller 23 recognizes the existence of the obstacle 3 in the projection area in the light-emitting component coordinate system to achieve automatic obstacle avoidance.
  • the calculation basis of the coordinate conversion is based on the homography matrix between the camera and the optical machine.
  • the adjustment instruction includes the first obstacle target in the projection area specified by the user. Since there may be multiple obstacle targets in the projection area, it is necessary to confirm and execute obstacle avoidance for multiple obstacle targets.
  • the first obstacle target may refer to multiple target obstacles of the same type, such as multiple control switches on a wall. It may also refer to a single target obstacle, and the controller 23 performs obstacle avoidance processing according to the actual situation of the projection area.
  • the first obstacle target is the obstacle that the user wants the projection device to avoid during operation, that is, the first obstacle target will affect the projection effect.
  • For example, the user can input the adjustment instruction before projection to determine the obstacles to be avoided; the user can also input the adjustment instruction immediately after the projection device is turned on to determine the obstacles to be avoided; or, after starting to project the media data, the user may find that obstacles that were not chosen to be avoided before projection actually affect the projection effect, and then input the adjustment instruction to avoid those obstacles.
  • obstacle detection is started in the projection area according to a preset program.
  • the user outputs a voice command "avoid the hook on the wall" to the projection device, and the controller 23 receives the user's command through the audio receiving device.
  • The transformation matrix for the conversion between the camera coordinate system and the light emitting component coordinate system is obtained to provide a calculation basis for the subsequent coordinate conversion of the obstacle.
  • The adjustment instruction can be input by voice, output to the projection device through a remote control device such as a remote controller, or input through the control interface on the projection device.
  • the voice input method is very easy for the user to implement, and improves the user's sense of interaction with the projection device, thereby improving the user experience. Therefore, the main way for the user to output the adjustment instruction can be through voice output, and other methods can assist the user in outputting the adjustment instruction in specific scenarios.
  • S200 Identify a first obstacle target in a sampled image, and define a projectable area in the sampled image according to the first obstacle target.
  • the sampled image is an image captured by the camera 22 when the light emitting component 21 projects the corrected image.
  • the corrected image projected by the light emitting component 21 is to obtain image information of the entire projection area and obstacles that the user points out need to be avoided, and to establish a coordinate system for the sampled image based on the coordinate system of the light emitting component. With the assistance of the corrected image, the existence of obstacles can be more accurately identified, and then the position of the obstacles in the sampled image can be obtained through the coordination of the coordinate system.
  • the controller 23 can re-divide the projection area according to the position of the obstacle in the sampled image, and transform the divided projection area according to the homography matrix, and then control the light emitting component 21 to project media data to the divided projection area.
  • the projection device confirms that the hook is an obstacle, that is, it determines that there is an obstacle in the projection area.
  • The controller 23 controls the light emitting component 21 to project the correction image onto the wall of the projection area, and when the correction image is projected onto the wall, it also controls the camera 22 to capture the sampling image (that is, to photograph the projection area).
  • the controller 23 confirms the position of the hook on the sampling image, and re-divides the new projection area according to the sampling image after removing the hook.
  • When dividing the projectable area, the projectable area is the largest rectangular area with a preset aspect ratio that can be accommodated in the sampled image excluding the first obstacle target. Since the main purpose of using a projection device is to obtain as large a playback picture for the media data as possible, it is necessary to ensure that the projectable area is as large as possible. In addition, only the first obstacle target has currently been detected, and there may be other obstacle targets in the projection area for which the user has not yet output relevant instructions. In order to ensure that the media data can still be projected smoothly, it is also necessary to reserve a larger projection area to cope with further division of the projection area.
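One simple way to realize such a division, assuming a single obstacle described by an axis-aligned bounding box in camera-image coordinates, is to test the four strips around the obstacle and keep the largest rectangle with the preset aspect ratio that fits in one of them. This is a heuristic sketch, not the exact method of the embodiment.

```python
def largest_projectable_area(img_w, img_h, obstacle_box, aspect=16 / 9):
    """Return (x, y, width, height) of the largest aspect-constrained rectangle
    that avoids the obstacle, trying the strips left/right/above/below it.

    obstacle_box: (x0, y0, x1, y1) bounding box of the first obstacle target."""
    ox0, oy0, ox1, oy1 = obstacle_box
    strips = [
        (0, 0, ox0, img_h),        # strip to the left of the obstacle
        (ox1, 0, img_w, img_h),    # strip to the right of the obstacle
        (0, 0, img_w, oy0),        # strip above the obstacle
        (0, oy1, img_w, img_h),    # strip below the obstacle
    ]
    best = None
    for x0, y0, x1, y1 in strips:
        w, h = x1 - x0, y1 - y0
        if w <= 0 or h <= 0:
            continue
        rect_w = min(w, h * aspect)    # largest width allowed by the strip and the ratio
        rect_h = rect_w / aspect
        if best is None or rect_w * rect_h > best[2] * best[3]:
            best = (x0, y0, rect_w, rect_h)
    return best
```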
  • the corrected image in the embodiment of the present application includes a pure color image card and a feature image card.
  • the controller 23 also performs the following operations after responding to the obstacle avoidance instruction:
  • the camera is controlled to take pictures of the corrected image displayed on the projection surface when the light emitting component projects the pure color image card and the characteristic image card respectively, so as to obtain the first sampling image and the second sampling image.
  • the pure color card included in the correction image generally adopts a white card
  • the feature card includes a feature image on the screen, and the specific form of the feature image is not limited.
  • the controller cannot directly judge the obstacle by the projection effect. Therefore, the camera is required to record the state of the projection area when the white card and the characteristic card are projected, and generate the first sampling image and the second sampling image respectively.
  • the controller obtains the relevant information of the obstacle by comparing the first sampling image and the second sampling image.
  • the first sampled image is generated by the projection of a white card by the light emitting component.
  • the first sampled image will not show any content, and may show a white image that is not quite the same color as the wall. However, if there is an obstacle, the camera will still capture the obstacle on the wall. However, the content on the first sampled image alone is not enough to determine the location information of the obstacle on the wall.
  • If the controller determines that there is an obstacle at this time, it may perform the obstacle avoidance function without sufficient basis, resulting in a poor projection effect. Therefore, it is necessary to project the feature card to further confirm the obstacle.
  • the feature image on the feature card is also conducive to the establishment of the coordinate system. The establishment of the coordinate system makes it easier to describe the location of the obstacle, and it is also easier to rebuild the coordinate system after removing the obstacle and convert it to the coordinate system of the light-emitting component.
  • the light emitting component needs a certain conversion time when switching between projecting media data and projecting pure color image cards and feature image cards, which will increase the user's waiting time and affect the user's experience.
  • the controller 23 performs the following steps:
  • the projection device needs to execute the obstacle avoidance function immediately after receiving the obstacle avoidance command.
  • The obstacle avoidance function can be turned on when the media data is already being projected, or it can be turned on when the media data has not yet been projected. Specifically, the projection state of the light emitting component can be detected to determine which of the two situations the current projection belongs to. If projection of the media data has not started, the pure color image card and the feature image card can be projected in sequence, and there is no situation where switching from the media data to the pure color image card and the feature image card takes too long.
  • a pure color image card and a feature image card are inserted into the media data stream to be projected.
  • The pure color image card and the feature image card can be inserted into the media data stream so that the process of projecting the pure color image card and the feature image card conforms to the playback timeline of the media data and is projected naturally without changing the projection source, so the user does not need to wait too long.
  • the shooting time is calculated according to the insertion positions of the pure color image card and the feature image card in the media data stream, and the camera is controlled to shoot according to the shooting time to obtain the first sampling image and the second sampling image.
  • the camera shooting time can be calculated.
  • the camera executes the shooting function to obtain the first sample image and the second sample image.
  • the projection device is playing a ten-minute video.
  • the controller receives an obstacle avoidance instruction input by the user, and records the obstacle avoidance instruction time as five minutes, and adds a pure color card to the fifth minute and tenth second of the video data stream, and adds a feature card to the fifth minute and fifteenth second.
  • the controller also calculates that the camera starts the first shot after ten seconds and the second shot after fifteen seconds based on the time when the card is inserted in the data stream.
  • the light-emitting component projects a pure color card and the camera takes a photo to obtain a first sampled image.
  • the light-emitting component projects a feature card and the camera takes a photo to obtain a second sampled image.
  • the switching of the projection source is avoided, making the process of obtaining the sampled image more natural, reducing the waiting time of the user, and the execution efficiency of the obstacle avoidance function is also higher.
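The timing in this example reduces to a small calculation; the ten- and five-second offsets below are the illustrative values from the example above, not fixed values required by the method.

```python
def schedule_captures(instruction_time_s, color_card_offset_s=10, feature_card_offset_s=5):
    """Given the playback time at which the obstacle avoidance instruction arrives,
    return the shooting times for the pure color card and the feature card frames."""
    first_shot = instruction_time_s + color_card_offset_s   # pure color image card frame
    second_shot = first_shot + feature_card_offset_s        # feature image card frame
    return first_shot, second_shot

# Instruction received at the fifth minute (300 s) of a ten-minute video:
# schedule_captures(300) -> (310, 315), i.e. 5 min 10 s and 5 min 15 s.
```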
  • the first obstacle target can be identified and the obstacle avoidance process can be performed.
  • a first obstacle target is identified in the first sampling image, and a transformation matrix is established according to the shape of the feature map in the second sampling image.
  • the position and coordinates of the first obstacle target can be identified.
  • In the first sampling image, there is no interference from the feature points of the feature image, so the outline of the first obstacle target can be completely identified and its coordinates obtained.
  • the feature points on the second sampling image have the characteristics of symmetry and uniform distribution, so it is conducive to establishing a coordinate system to obtain coordinates.
  • The transformation matrix between the camera coordinate system and the light emitting component coordinate system can be further established. In the process of establishing the transformation matrix using the feature points in the second sampling image, the controller 23 performs the following steps:
  • the color values of the pixels in the second sampled image are traversed, and feature points are identified in the second sampled image according to the color values.
  • An image is composed of several pixels, and its specific color is also affected by the color values of multiple pixels.
  • the color of the feature point is different from the pure color card in the first sampled image. Most pure color cards are white, so the feature points in the feature card can be in a color that has a significant contrast with white.
  • the feature point can be identified by identifying the color value of the pixel in the second sampled image, that is, by identifying the color value different from white.
  • Multiple feature points can be used to obtain feature graphs according to different distribution conditions, and multiple feature points are conducive to establishing a coordinate system suitable for the current feature graph, and further conducive to establishing a homography matrix for converting between the camera coordinate system and the light emitting component coordinate system.
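A minimal sketch of this color-value traversal, assuming the feature chart uses marks that contrast strongly with the white pure color chart, is shown below; the threshold value and the use of connected-component centroids are assumptions made for illustration.

```python
import cv2
import numpy as np

def find_feature_points(second_sample_bgr, color_threshold=60):
    """Treat pixels whose color value differs strongly from white as feature pixels,
    then take the centroid of each connected region as one feature point."""
    gray = cv2.cvtColor(second_sample_bgr, cv2.COLOR_BGR2GRAY)
    mask = (255 - gray > color_threshold).astype(np.uint8)           # non-white pixels
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(c) for c in centroids[1:]]                         # skip the background label
```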
  • In the process of establishing the homography matrix, the homography matrix can be obtained by referring to the following relationship.
  • The coordinates of a feature point in the second sampling image taken by the camera can be expressed in homogeneous form as a matrix (x, y, 1).
  • (x, y, 1) is the coordinate of the feature point in the camera coordinate system.
  • (x1, y1, 1) is the coordinate of the feature point in the light emitting component coordinate system.
  • The conversion of the coordinates in the camera coordinate system to the coordinates in the light emitting component coordinate system can be regarded as: (x1, y1, 1)ᵀ = H · (x, y, 1)ᵀ, up to a homogeneous scale factor.
  • H is the homography matrix, which can be calculated through the known camera coordinates and light emitting component coordinates.
  • the calculation process is a general method, so no further explanation is given.
  • After calculating the homography matrix H, H is stored in the storage space to form the transformation matrix for the camera-to-light-emitting-component coordinate transformation, waiting to be called.
  • The specific values contained in the homography matrix may vary with the way the feature map coordinate system is established, but this has no substantive effect on the camera-to-light-emitting-component coordinate transformation.
  • The actual content of the homography matrix is the conversion rule from camera coordinates to light emitting component coordinates, by which the corresponding conversion can be achieved.
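Using OpenCV, the estimation of H and the subsequent coordinate conversion can be sketched as follows; the point coordinates are placeholder values, and a real implementation would use the matched feature points extracted from the second sampling image.

```python
import cv2
import numpy as np

# Matched feature points: positions in the camera coordinate system and the
# corresponding positions in the light emitting component coordinate system
# (placeholder values for illustration).
camera_pts = np.float32([[120, 80], [980, 95], [965, 640], [135, 655]])
emitter_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

# H satisfies (x1, y1, 1)^T = H * (x, y, 1)^T up to a homogeneous scale factor.
H, _ = cv2.findHomography(camera_pts, emitter_pts)

# Converting the four corners of a projectable area defined on the sampled image
# (camera coordinates) into light emitting component coordinates:
area_camera = np.float32([[200, 150], [900, 150], [900, 560], [200, 560]]).reshape(-1, 1, 2)
area_emitter = cv2.perspectiveTransform(area_camera, H)
```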
  • S300: Determine a target projection area according to the projectable area and the transformation matrix, and control the light emitting component to project the projection content to the target projection area.
  • the projection content is the media data.
  • When dividing the projectable area, the projection device first divides it on the sampled image taken by the camera.
  • the projection of media data also requires the light-emitting component to perform related actions. Therefore, it is necessary to convert the coordinates of the projectable area that have been re-divided on the sampled image into the coordinates of the projectable area suitable for projection by the light-emitting component.
  • the coordinates on the sampled image are converted one by one into the coordinates in the light-emitting component coordinate system by calling the transformation matrix to form the projection area.
  • After the projection device re-divides the projectable area, it immediately calls the acquired transformation matrix, that is, the homography matrix for coordinate transformation.
  • the controller converts the coordinates of the projectable area redivided on the sampled image from the camera coordinate system to the light emitting component coordinate system.
  • the controller projects the media data to the projectable area (e.g., a wall) through the light emitting component.
  • When dividing the projection area according to the first obstacle target, since a command from the user specifying the first obstacle target is received, the first obstacle target specified by the user needs to be identified. As shown in FIG. 15, the controller performs corresponding actions according to the identification result of the first obstacle target:
  • the recognition model is called according to the type of the first obstacle target specified by the user, and the sampled image is input into the recognition model.
  • the recognition model is a neural network model trained based on sample image data.
  • the controller parses the first obstacle target input by the user and determines the type of the first obstacle target through keywords, key words and other information. Then, the recognition model is called based on the type to determine whether the first obstacle target exists in the sampled image.
  • the determination of the first obstacle target may not be an accurate value, but may be reflected in the form of probability such as similarity. For example, after the recognition model receives the sampled image, it determines the classification probability of the first obstacle target being included in the sampled image.
  • a step of dividing the projectable area according to the position of the first obstacle target in the sampled image is performed.
  • For example, the first obstacle target input by the user is "hook on the wall".
  • The controller identifies the first obstacle target by first parsing the keywords "on the wall" and "hook" and then calling the "household accessories" recognition model; the probability that the sampled image contains a hook is 90%, so it can be determined that the first obstacle target, the hook, exists in the sampled image.
  • the projectable area is re-divided according to the coordinates of the hook.
  • first prompt information is generated, and the light emitting component is controlled to project the first prompt information.
  • If the recognition model obtains a 10% probability that the sampled image contains a hook, it determines that the first obstacle target does not exist in the sampled image and generates a first prompt message for reminding the user that the first obstacle target is not recognized in the current projection area.
  • the first prompt message can be presented in the form of projection or played in the form of audio. After receiving the prompt message, the user can re-enter the command to specify the first obstacle target, and the controller will re-perform obstacle avoidance processing.
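The branch described above can be sketched as below; the 0.5 threshold and the two helper callbacks are assumptions for illustration, since the embodiment only requires deciding whether the first obstacle target exists in the sampled image.

```python
def handle_recognition_result(classification_prob, divide_projectable_area, project_prompt,
                              prob_threshold=0.5):
    """Divide the projectable area when the obstacle is considered present,
    otherwise project the first prompt information."""
    if classification_prob >= prob_threshold:      # e.g. 90% for the hook
        return divide_projectable_area()
    # e.g. 10%: the first obstacle target is not recognized in the projection area
    return project_prompt("The specified obstacle target was not recognized in the projection area.")
```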
  • the adjustment instruction input by the user may include projection parameters in addition to the first obstacle target.
  • the projection parameters are used to limit the target projection area.
  • When the controller receives an adjustment instruction that includes projection parameters, it performs the following steps according to the projection parameters:
  • the projectable area is fitted into the effective projection range.
  • the projection area includes the target projection area required for projecting media data and some reserved blank areas, which are the optimal areas for the current situation calculated by the controller.
  • users will give projection parameters according to actual needs to adjust the actual projection area.
  • the projection parameters are extracted from the adjustment instructions, and the target projection area is delineated within the effective projection range according to the projection parameters.
  • The projection parameters can be parameters that directly control the screen ratio, such as "play the media data in the form of 2 meters in length and 1 meter in width".
  • the controller further adjusts the actual projection range within the effective projection range according to the projection parameters to meet the user's needs and obtains the target projection area. In some cases, the projection parameters entered by the user may be unclear or exceed the effective projection range.
  • the controller needs to determine whether the projection parameters meet the effective projection range and perform the following steps:
  • When the boundary size of the effective projection range is greater than or equal to the specified screen size, the target projection area is delineated in the effective projection range according to the specified screen size.
  • When the boundary size is smaller than the specified screen size, second prompt information is generated, and the light emitting component is controlled to project the second prompt information.
  • the boundary size of the effective projection range is the maximum size of the projection area. Exceeding the boundary size will affect the resolution and proportion adjustment of the projection, resulting in distortion of the projected media data.
  • the projection parameters that the user inputs based on visual estimation and personal needs may exceed the boundary size. Therefore, when generating the target projection area according to the projection parameters input by the user, the projection parameters are compared with the boundary size to determine whether a suitable target projection area can be divided according to the user's needs.
  • the adjustment instructions input by the user include "the width of the projected media data is 2.56 meters and the height is 1.44 meters”.
  • the controller judges the projection parameters against the boundary size of the effective projection range. If the projection parameters are found to be consistent with the effective projection range, the target projection area is determined according to the projection parameters.
  • the adjustment instruction input by the user includes "the width of the projected media data is 3 meters and the height is 2 meters”.
  • the controller judges the boundary size according to the effective projection range and finds that the projection parameters have exceeded the effective projection range.
  • the light output component projects a second prompt message "the target projection area division failed, and the projection parameters are out of range" to the wall. After seeing the out-of-range prompt, the user can re-enter the projection parameters to adjust them again.
  • the adjustment instructions input by the user may also include a specified spacing distance, for example, "project the media data at a position five inches from the right side of the wardrobe” or "project the media data at a position three inches from the ground”.
  • the specified spacing distance means that a blank area is reserved within the effective projection range, thereby further reducing the target projection area.
  • the controller needs to add the specified spacing distance to the width and height of the projected media data, and then compare the result with the boundary size to determine whether it is exceeded.
  • the extreme size includes the minimum width and minimum height.
  • the minimum width is the sum of the specified screen width in the specified screen size and the specified horizontal distance in the interval distance; the minimum height is the sum of the specified screen height in the specified screen size and the specified vertical distance in the specified interval distance.
  • the extreme size is equivalent to the minimum width and minimum height that the effective projection area should have.
  • the target projection area is delineated within the effective projection range according to the specified interval distance and the specified screen size.
  • the controller calculates the minimum width as 4.56 meters based on the projection parameters of "the width of the projected media data is 2.56 meters” and "the horizontal distance from the wardrobe is 2 meters” and the specified interval distance; the controller calculates the minimum height as 1.94 meters based on the projection parameters of "the height of the projected media data is 1.44 meters” and "0.5 meters from the ground” and the specified interval distance.
  • the current effective projection range is 4 meters wide and 2 meters high. After the controller calls the parameters of the effective projection range, it determines that the width exceeds the range and the height meets the range.
  • in this case, the boundary size of the effective projection range does not meet the adjustment instructions entered by the user, and the light-emitting component projects "Failed to adjust the projection range according to the adjustment instructions" to prompt the user to re-enter the adjustment instructions.
  • the projection device will also play the message "The current maximum size is 4 meters in width and 2 meters in height" to prompt the user.
  • the effective projection range is 5 meters wide and 2 meters high.
  • the controller determines that the width and height are within the range, and the boundary size of the effective projection range meets the adjustment instructions input by the user, and divides the target projection area according to the adjustment instructions.
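  • a minimal sketch of this boundary check, reproducing the numeric examples above (units in meters; the function name and the prompt wording are assumptions):
```python
# Sketch only: compare the minimum width/height required by the adjustment
# instruction (specified screen size plus specified spacing distance) against
# the boundary size of the effective projection range.
def check_projection_parameters(boundary_w, boundary_h,
                                screen_w, screen_h,
                                spacing_w=0.0, spacing_h=0.0):
    min_w = screen_w + spacing_w   # minimum width ("extreme size")
    min_h = screen_h + spacing_h   # minimum height ("extreme size")
    if min_w <= boundary_w and min_h <= boundary_h:
        return True, "target projection area can be divided"
    return False, (f"Failed to adjust the projection range; the current maximum "
                   f"size is {boundary_w} m in width and {boundary_h} m in height")

# Width 2.56 m plus 2 m from the wardrobe, height 1.44 m plus 0.5 m from the ground:
print(check_projection_parameters(4.0, 2.0, 2.56, 1.44, 2.0, 0.5))  # 4.56 m exceeds the 4 m range
print(check_projection_parameters(5.0, 2.0, 2.56, 1.44, 2.0, 0.5))  # fits within 5 m x 2 m
```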
  • the projected media data is more in line with the needs of users, and the human-computer interaction process between the user and the projection device is increased, which is conducive to improving user satisfaction.
  • by checking the width and height in this way, the image quality of the projected media data is guaranteed: the media data is projected only if the parameters fall within the valid range, and is not projected otherwise.
  • the projection device can project the media to the target projection area in the absence of other obstacle targets. If there are other obstacles, as shown in Figure 17, the controller will also interact with the user to confirm whether other obstacles need to be avoided and perform subsequent actions according to the instructions input by the user:
  • a second obstacle target is identified in the projectable area.
  • the projection area at this time is the projection area re-divided by the controller according to the first obstacle target.
  • the second obstacle target is the target other than the first obstacle target in the sampled image.
  • the category of the second obstacle target may be different from the first obstacle target, so the controller needs to continue to confirm with the user whether to execute the obstacle avoidance function.
  • the second obstacle target can be autonomously identified by the controller or input by the user. The process of identifying the second obstacle target is the same as the process in the above embodiment, and will not be repeated here.
  • fourth prompt information is generated, and/or a query instruction is generated.
  • after confirming the existence of the second obstacle target, the controller generates audio or projection information to prompt the user that the second obstacle target exists. At the same time, it sends a query instruction to the user to confirm whether obstacle avoidance processing is required.
  • a confirmation instruction input by the user based on the fourth prompt information and/or the inquiry instruction is obtained, and the projection area is redefined according to the confirmation instruction.
  • the confirmation instruction is an instruction input by the user indicating to the controller whether to perform obstacle avoidance processing. After receiving the user's confirmation to avoid the obstacle, the controller re-divides the current projection area into a new projection area according to the second obstacle target, following the process of the above embodiment.
  • the controller generates a projection area after avoiding the hooks on the wall in combination with the adjustment instructions input by the user.
  • if a control switch on the wall is identified, an audio message "Do you need to avoid the switch?" is immediately played to the user.
  • the user feeds back to the controller "Need to avoid the switch” in the form of voice input.
  • the controller divides the new projection area according to the division method of the projection area, further combines user needs and projection standards to form a target projection area, and projects the media data to the target projection area.
  • the projection quality in the target projection area can be guaranteed. And through interaction with the user, the user experience is improved. It is understandable that the user can also input the command "no need to avoid the switch" through voice, and the controller can directly project the media data to the target projection area.
  • the embodiment of the present application further provides a projection picture processing method, which is applied to the above-mentioned projection device, and the projection device includes a light output component, a camera and a controller.
  • the projection picture processing method includes:
  • the transformation matrix is the homography matrix of the coordinates between the camera and the light-emitting component; and the adjustment instruction includes the first obstacle target specified by the user.
  • the sampled image is an image captured by the camera when the light-emitting component projects the corrected image.
  • the projectable area is a rectangular area with a maximum preset aspect ratio that is contained in the area other than the first obstacle target in the sampled image.
  • S400 determining a target projection area according to the projectable area and the transformation matrix, and controlling the light emitting component to project the projection content to the target projection area.
  • when the projection device recognizes that there is an obstacle in the projection area, it also obtains the user's input instruction specifying the first obstacle target to confirm whether obstacle avoidance is required during projection. By shooting a sampled image, it confirms whether the first obstacle target exists and, if so, its position. The coordinates in the sampled image are converted to the light-emitting component coordinate system through the homography matrix of the coordinates between the camera and the light-emitting component. When projecting the media data, the projection device re-divides the projection area according to the position of the first obstacle target and projects the media data to the re-divided projection area. When the projection device recognizes an obstacle, it takes the user's obstacle avoidance instruction into account and does not execute the automatic obstacle avoidance function when the obstacle does not affect the projection effect, thereby ensuring the projection effect and improving the user experience.
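  • a minimal sketch of the coordinate conversion mentioned above: the corners of the projectable area found in the sampled image are mapped into the light-emitting component coordinate system with the homography matrix (the corner values and the identity placeholder for H are assumptions; the real matrix comes from calibration between the camera and the light-emitting component):
```python
import numpy as np

def map_area_to_projector(corners_cam, H):
    """corners_cam: (N, 2) pixel coordinates of the projectable area in the camera image."""
    pts = np.hstack([corners_cam, np.ones((corners_cam.shape[0], 1))])  # homogeneous coordinates
    mapped = (H @ pts.T).T
    return mapped[:, :2] / mapped[:, 2:3]  # divide by the homogeneous component

corners = np.array([[200, 150], [1100, 150], [1100, 650], [200, 650]], dtype=float)
H = np.eye(3)  # placeholder homography between camera and light-emitting component
print(map_area_to_projector(corners, H))
```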
  • the projection device is usually designed with automatic obstacle avoidance and automatic correction functions.
  • the projection device needs to be able to automatically detect obstacles during projection to avoid obstacle projection, and use the automatic correction function to re-correct the projection image.
  • the automatic correction function uses the projected feature card: without determining the spatial position of the feature points, pictures are taken directly with the camera 22 to identify the feature points, and the projection surface is then fitted by three-dimensional reconstruction to achieve automatic correction.
  • if the projection falls onto multiple areas and feature points are recognized in multiple areas, a plane-fitting error will occur, so that the automatically corrected shape is no longer a rectangle, resulting in an erroneous correction.
  • the embodiment of the present application also provides a projection device, as shown in FIG21, the projection device may include a light output component 21, a camera 22 and a controller 23.
  • the light output component 21 is used to project the projection content to the projection surface.
  • the camera 22 is used to capture the sample image.
  • the controller 23 is configured as follows:
  • the projection device can obtain the projection picture correction instruction input by the user, and control the light-emitting component to project the correction image according to the projection picture correction instruction.
  • the projection picture correction instruction can be issued by the user through a button on the control device (remote control, etc.) of the projection device, or through a mobile terminal (smartphone, portable computer, etc.) that establishes a communication connection with the projection device.
  • the corrected image includes a pure color card and a feature card.
  • the projection device when the corrected image is a pure color card, the projection device will project a pure color projection picture onto the projection surface through the light emitting component 21.
  • the light emitting component 21 projects the pure color card onto the projection surface, part of the light in the pure color card will be irradiated on the obstacle between the light emitting component 21 and the projection surface, and a shadow area will appear at the corresponding position of the projection surface due to the occlusion of the obstacle.
  • the controller 23 can identify the shadow area in the pure color card to identify the position of the obstacle, and perform subsequent obstacle avoidance functions according to the position of the obstacle.
  • the pure color chart should be light-colored, such as light yellow, light blue, white, gray, etc., and light color is defined as a color less than or equal to 1/12 of the standard depth of the dye color.
  • the embodiments of the present application do not impose any other restrictions on the color of the pure color chart.
  • the light emitting component 21 projects a projection picture with a plurality of characteristic points onto the projection surface.
  • the characteristic points represent the characteristics of a preset area range of the projection picture, and are used to perform an automatic correction function on the projection picture later.
  • S200 Acquire a first sampled image obtained by photographing a pure color chart and a second sampled image obtained by photographing a feature chart.
  • the controller 23 can also control the camera 22 to shoot the pure color image card and the feature image card to obtain a first sampling image obtained by shooting the pure color image card and a second sampling image obtained by shooting the feature image card.
  • the camera 22 of the projection device can be built-in or externally installed, and the camera 22 can shoot the projection screen projected by the projection device through the light output component 21 to obtain a sampling image. After shooting by the camera 22, the clarity of the sampling image shot by the camera 22 can also be detected to determine whether the focal length of the projection device is appropriate.
  • if the clarity of the sampled image is detected to be low, the focal length of the projection device is adjusted and the sampled image is re-shot by the camera 22; the clarity of the sampled images is compared in the order of shooting time, and the focal length parameter at which the clarity of the sampled image is the highest is determined as the focal length parameter of the projection device.
  • when determining the focal length parameter of the projection device, the camera 22 can also be switched to a continuous shooting mode during the process of adjusting the focal length parameter, and a removable watermark whose content is the focal length parameter is added to each captured sample image.
  • the focal length parameter of the projection device is determined when the clarity of the sample image is the highest.
  • the first sampling image acquired by the controller 23 is shown in FIG. 22 , and a rectangular obstacle 31 and a circular obstacle 32 exist in the first sampling image.
  • S300 Determine a characteristic contour area according to a first sampling image.
  • the characteristic contour area is the contour area with the largest area in the first sampled image or the contour area specified by the user.
  • the controller 23 can determine the characteristic contour area according to the positions of the obstacles 31 and 32 in the first sampled image to ensure that the projection image projected by the projection device avoids the above obstacles 31 and 32, so that the user can see the projection image that is not blocked by the obstacles.
  • the characteristic contour area may be the contour area with the largest area in the first sampling screen or the contour area specified by the user.
  • the controller 23 may select one of the above contour areas as the characteristic contour area according to the setting state of the obstacle avoidance switch of the projection device. Obstacles between the light emitting component 21 and the projection surface 1 will form a shadow area on the projected image, and the shadow area will split the original projection screen into multiple contour areas without shadows. At this time, the controller 23 can extract all the contour areas in the first sampled image and detect the setting state of the obstacle avoidance switch of the projection device.
  • the setting state of the obstacle avoidance switch includes on and off. The setting state can be manually set on and off by the user using the projection device, or the projection device can automatically switch to the on state when it recognizes that there is an obstacle in the projection screen.
  • the projection device When the projection device is in the obstacle avoidance state, the projection picture needs to be projected to avoid obstacles.
  • the controller 23 will traverse the area of each contour area in the first sampling image, and filter out the candidate contour area.
  • the candidate contour area is used for the user to specify as a feature contour area.
  • other parameters for screening can also be set. For example, in order to enable the user to view the projection picture more clearly, the controller 23 sets the distance between the contour area and the user as a screening parameter.
  • when the shape of the contour area is irregular and the projection needs to be projected into a rectangular area, the controller 23 also needs to determine the maximum rectangular area in each contour area and use the maximum rectangular area as a screening parameter.
  • the controller 23 can also delineate the contour areas according to the color values of the pixels in the first sampled image. Since the color of the pure color image card is light, while the shadow area blocked by obstacles is usually black, the two have different colors. Therefore, the controller 23 can set a color difference threshold, traverse the color values of the pixels in the first sampled image, obtain the pixels whose color difference from adjacent pixels is greater than or equal to the color difference threshold, and identify the boundary graphics from those pixels.
  • the boundary graphics are the shape graphics of the obstacles between the light emitting component 21 and the projection surface 1.
  • after identifying the boundary pattern, the controller 23 delineates the contour areas according to the edges of the first sampled image. In this way, inside the shadow area formed by obstacle occlusion the pixels are all black and the color difference between adjacent pixels is small, so the threshold is not reached; at the contour of the shadow area the pixels are black while the pixels outside the contour have the color of the pure color card, which differs strongly from black, so the shadow area formed by the obstacle can be accurately identified and the contour areas accurately delineated.
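  • a rough sketch of this delineation, using a global intensity threshold instead of the per-pixel color-difference walk described above (the threshold value is an assumption); the bright regions of the light-colored pure color card are kept and the dark shadow regions cast by obstacles are excluded:
```python
import cv2

def find_contour_areas(first_sample_bgr, dark_threshold=90):
    gray = cv2.cvtColor(first_sample_bgr, cv2.COLOR_BGR2GRAY)
    # Pixels brighter than the threshold belong to the light-colored card;
    # darker pixels are treated as shadow cast by obstacles.
    _, bright_mask = cv2.threshold(gray, dark_threshold, 255, cv2.THRESH_BINARY)
    # The external contours of the bright regions are the contour areas without shadows.
    contours, _ = cv2.findContours(bright_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```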
  • the controller 23 may also input the first sampled image captured by the camera 22 into the recognition model.
  • the recognition model is a neural network model obtained by training the sample image, and the recognition model can be obtained by training a large number of sampled images with obstacles and sampled images without obstacles until convergence. After the controller 23 inputs the first sampled image into the recognition model, the recognition result output by the recognition model will be obtained.
  • the recognition result is the classification probability that the first sampled image contains an obstacle target, and whether the first sampled image includes an obstacle target can be determined based on the classification probability.
  • if the recognition result is that an obstacle target is included, it means that there is an obstacle between the light emitting component 21 and the projection surface 1, and the controller 23 will remove the contour area corresponding to the obstacle target from the first sampled image and determine the characteristic contour area in the first sampled image after the obstacle target is removed. If the recognition result is that no obstacle target is included, it means that there is no obstacle between the light emitting component 21 and the projection surface 1, and the controller 23 will determine the characteristic contour area directly from the first sampled image.
  • the controller 23 will traverse the areas of each contour area in the first sampling image, and screen out the contour area with the largest area as the characteristic contour area.
  • the parameters for filtering the contour areas can also be assigned priorities. For example, the area of each contour area is preferentially used as the first filtering parameter; if the areas of the contour areas are the same, the distance between the contour area and the user is used as the second filtering parameter.
  • the priorities of the filtering parameters can also be interchanged, and some embodiments of this application do not impose specific restrictions on this.
  • the controller 23 may further set a parameter threshold according to the screening parameter before screening the contour area. For example, when the area of the contour area is used as the screening parameter, an area parameter threshold may be set. If the area of the contour area is greater than the parameter threshold, it means that the contour area meets the screening condition and can be used as an alternative contour area; if the area of the contour area is less than the parameter threshold, it means that the area of the contour area is too small, and the user cannot see the projection content in the projection screen at a specified distance, and does not meet the screening condition and cannot be used as an alternative contour area.
  • before screening the candidate contour regions, the controller 23 can also generate a contour region list according to the area of each contour region in the first sampled image. If a contour region in the list does not meet the screening conditions, the controller 23 removes it from the list. After all contour regions are screened, the contour region list contains only the candidate contour regions that meet the screening conditions, and this candidate contour region list is obtained for the user to specify a characteristic contour region from.
  • before screening the contour areas, the controller 23 can also set the number of entries in the candidate contour area list, for example to 3. In this case, after traversing the area of each contour area in the first sampled image, the controller 23 screens out three candidate contour areas according to their areas and arranges them in the candidate contour area list in the default order or an order specified by the user.
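  • a minimal sketch of this screening step (the minimum-area threshold and the list length of 3 follow the example above; the helper name is an assumption):
```python
import cv2

def build_candidate_list(contours, min_area=5000.0, max_candidates=3):
    # Keep only contour areas that satisfy the area screening condition.
    qualified = [(cv2.contourArea(c), c) for c in contours if cv2.contourArea(c) >= min_area]
    # Sort by area, largest first, and keep at most max_candidates entries.
    qualified.sort(key=lambda item: item[0], reverse=True)
    return [c for _, c in qualified[:max_candidates]]
```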
  • after traversing the area of each contour area in the first sampled image and screening out the candidate contour areas, the controller 23 will also control the light emitting component 21 to project the feature map card onto the projection surface 1, and after the feature map card is projected, control the camera 22 to shoot it to obtain the second sampled image.
  • FIG23 shows a second sampled image of an embodiment of the present application. The controller 23 can then identify the feature points that fall in the same contour area in the second sampled image based on the candidate contour areas screened from the first sampled image, and calculate the average depth of the feature points in the same candidate contour area in the second sampled image.
  • a feature point refers to a point where the grayscale value of the image changes dramatically or a point with a larger curvature on the edge of the image, that is, the intersection of two edges.
  • the annular rectangle in FIG23 is the feature point on the feature map card.
  • the method for calculating the feature points may be geometric triangulation, inverse depth, particle filtering, etc.
  • the controller 23 may also traverse the color values of the feature points in the candidate contour area.
  • the color of the feature point contour should have a large color difference from the solid color chart so that the feature point contour and the feature point position can be clearly reflected.
  • in order to distinguish the feature points from the shadow formed by obstacles on the projection surface, the color of the feature points may be any dark color other than black, and the color of the feature points need not be uniform.
  • a dark color is defined as a color greater than 1/12 of the standard depth of the dye color. The embodiment of the present application does not impose any other restrictions on the color of the feature point.
  • FIG24 shows two candidate area outlines in the second sample image.
  • the two dotted outlines in the second sampled image are the contours of the candidate areas, wherein the candidate contour area on the left side of the obstacle 32 is defined as the first candidate contour area, and the candidate contour area on the right side of the obstacle 32 is defined as the second candidate contour area.
  • when calculating the average depth of the feature points, the controller 23 calculates based on the feature points in the same candidate contour area. For the first candidate contour area, the controller 23 calculates the average depth from the 9 feature points in that area, and for the second candidate contour area, the controller 23 calculates the average depth from the 3 feature points in that area.
  • when the controller 23 calculates the average depth of the feature points in the same candidate contour area in the second sampled image, it can also traverse the color values of the feature points in the candidate contour area.
  • when the controller 23 controls the light emitting component 21 to project the feature card, the positions of the feature points can be displayed because the color values of the feature points differ significantly from the color values of the pure color card.
  • the light-emitting component 21 projects the projection image onto the projection surface 1.
  • when the projection surface 1 is a wall, the same projection image may be projected onto two walls at different distances. In this case, the image displayed on the wall closer to the projection device is larger than the image displayed on the wall farther away, causing inconsistent image sizes and a deformed projection shape.
  • when the projection surface 1 is a projection screen, problems such as deformation of the projection image may occur due to the placement of the projection equipment. For example, the projection equipment may be placed with inconsistent heights in the up and down directions, or the projector may be placed to the left or right, resulting in a trapezoidal projection image.
  • the controller 23 can also extract the projection shape based on the color values of multiple feature points.
  • the controller 23 can obtain the boundary feature points in the outline of the candidate area, and the boundary feature points are feature points that can reflect the shape of the candidate outline area.
  • when the candidate outline area is a rectangle, four feature points corresponding to the four corners of the rectangle can be extracted, and the projection shape can be extracted based on these four feature points. Alternatively, two feature points corresponding to two diagonal corners of the rectangle can be extracted, and the projection shape extracted based on those two feature points.
  • the controller 23 can also define the largest rectangular area in the candidate contour area after screening the candidate contour area, and use the largest rectangular area of the candidate contour area as the effective projection area in the candidate contour area.
  • the controller 23 can also obtain the hardware parameters of the camera 22 and the light output component 21.
  • the hardware parameters of the camera 22 include sensitivity, white balance, metering, focus, exposure compensation and focal length.
  • the controller 23 can also obtain the light of the current environment through the camera and adjust the sensitivity according to the intensity of the light. For a better viewing effect, the light in the environment where the projection device is located is usually weak during projection. Therefore, the sensitivity of the camera 22 needs to be adjusted to a higher value.
  • the focus modes are single-point autofocus AF-S, servo autofocus AF-C, intelligent autofocus AF-A and manual focus.
  • the camera 22 can also detect the brightness of the photo.
  • if the brightness of the photo is moderate, the exposure compensation is kept at "0"; when the photo is dark, the exposure compensation is increased; when the photo is bright, the exposure compensation is reduced.
  • when setting the focus of the camera 22, the lens zoom ring can be adjusted to the minimum to capture a wider field of view, or the focal length lengthened to capture a distant scene. At the same time, a photo taken with a wider focal length has a greater depth of field, while a photo taken with a longer focal length has a shallower depth of field.
  • the hardware parameters of the light emitting component 21 include resolution, projection brightness, contrast, focus mode and display ratio.
  • the controller 23 can obtain the standard shape on the feature card in the second sampled image while obtaining the hardware parameters of the camera 22 and the light emitting component 21.
  • the controller 23 can calculate the distance from the light emitting component to each feature point based on the feature points in the candidate contour area, that is, the depth of the feature point.
  • the feature map card will be projected on planes at different distances, and correspondingly, the planes where the feature points on the feature map card are located are also different.
  • the controller 23 will calculate the distances between the multiple feature points and the light output component 21 according to the projection shape extracted from the color values of the multiple feature points, the standard shape on the feature map card in the second sampling image, and the hardware parameters of the camera 22 and the light output component 21. And calculate the average of these distances to obtain the average depth.
  • the controller 23 can respectively calculate the average depth of the feature points of the same plane in the projection surface 1. Before the calculation, the controller 23 can also obtain the number of planes contained in the projection surface 1 by identifying the feature contour of the projection surface 1. As shown in FIG25, the light output component 21 projects the feature map card onto the projection surface 1. There are two raised walls in the projection surface 1, which divide the projection surface 1 into 5 planes, of which 3 planes are at the same distance from the light output component 21, and the other two planes are closer to the light output component 21.
  • the controller 23 can determine the planes included in each candidate contour area, and then calculate the depth of the feature points in the included planes, and calculate the average plane depth of the feature points in the plane according to the depth of the feature points. After calculating the average depth of all planes included in the candidate contour area, the average depth of the feature points in the candidate contour area is calculated according to the average plane depth.
  • the controller 23 may merge the area of a plane into its adjacent planes, to ensure that every area within the projection surface 1 is covered by the calculation.
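  • a minimal sketch of this depth averaging (the data layout, grouping feature points by the plane they fall on, is an assumption):
```python
def average_depth_of_area(feature_points):
    """feature_points: list of dicts such as {"plane_id": 0, "depth": 2.4} for one candidate contour area."""
    by_plane = {}
    for fp in feature_points:
        by_plane.setdefault(fp["plane_id"], []).append(fp["depth"])
    # Average depth per plane, then the average of the plane averages for the area.
    plane_averages = [sum(depths) / len(depths) for depths in by_plane.values()]
    return sum(plane_averages) / len(plane_averages)

print(average_depth_of_area([
    {"plane_id": 0, "depth": 2.4}, {"plane_id": 0, "depth": 2.6},
    {"plane_id": 1, "depth": 1.8},
]))  # plane averages 2.5 and 1.8 -> area average 2.15
```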
  • the controller 23 can also calculate the area of each candidate area contour and the area of the second sampling image, and calculate the area ratio of the candidate contour area relative to the second sampling image based on the area of the candidate area contour and the area of the second sampling image.
  • the area ratio can be sorted from large to small according to the numerical value.
  • the controller 23 can also generate a first prompt message based on the above two data, and control the light emitting component 21 to project the first prompt message when the setting state of the obstacle avoidance switch is turned on.
  • the first prompt message includes each candidate contour area, and the average depth and area ratio of the feature points corresponding to each candidate contour area.
  • the user can view the selectable candidate contour areas through the first prompt information projected onto the projection surface 1. And select one of the selectable candidate contour areas as the characteristic contour area. Based on the first prompt information, the user can generate a selection command through a button on the control device of the projection device. After receiving the selection instruction, the controller 23 responds to the selection instruction and marks the candidate contour area specified in the selection instruction as the characteristic contour area.
  • the first prompt information can be displayed in the form of a list, which includes selectable alternative contour areas, and the average depth and area ratio of the feature points corresponding to the alternative contour areas.
  • the controller 23 can also control the light output component 21 to mark the contour part of the selected alternative contour area so that the user can see the area of the alternative contour area more intuitively.
  • the controller 23 can choose other colors different from the color of the feature card or the feature point as the marking color, and the embodiment of the present application does not specifically limit the marking color.
  • S500 Calculating the angle between the projection surface and the light emitting component based on the feature points, and controlling the light emitting component to project the projection content onto the projection surface according to the angle.
  • after determining the characteristic contour area, the controller 23 extracts feature points in the second sampled image according to the characteristic contour area, calculates the angle between the projection surface 1 and the light emitting component 21 according to the feature points, and controls the light emitting component 21 to project the corrected projection content onto the projection surface 1 according to that angle.
  • the controller 23 can also control the camera 22 to call out the camera coordinate system and obtain the feature point coordinates of the feature points in the feature contour area in the camera coordinate system during the process of calculating the angle between the projection surface 1 and the light emitting component 21 based on the feature points.
  • the feature point coordinates are usually the center of the figure where the feature point is located. For example, when the feature point is a square or a rectangle, the feature point coordinates are the coordinates of the center of the square or rectangle. When the feature point is a circle, the feature point coordinates are the coordinates of the center of the circle.
  • the camera 22 is usually arranged together with the light emitting component 21 in front of the projection device.
  • the controller 23 also needs to control the light emitting component 21 to call out the light emitting component coordinate system.
  • the feature point coordinates are converted into the light emitting point coordinates in the light emitting component coordinate system.
  • the hardware parameters can be the vector displacement value between the center of the lens of the camera 22 and the center of the light emitting component 21.
  • the controller 23 may first obtain the coordinates of the center of the camera lens of the camera 22, as shown in FIG26, where the X1Y1 coordinate system is the light emitting component coordinate system, and the X2Y2 is the camera coordinate system.
  • the coordinates of the center of the camera lens of the camera 22 in the camera coordinate system are (0,0)
  • the above coordinates are then transformed into the light emitting component coordinate system, and the coordinates of the center of the camera lens are repositioned.
  • the coordinates after repositioning are (30,-40).
  • the controller 23 can calculate the vector displacement value of 50 between the center of the lens of the camera 22 and the center of the light emitting component 21 based on the coordinates before and after repositioning.
  • the controller 23 can convert all feature point coordinates in the camera coordinate system into light emitting point coordinates in the light emitting component coordinate system based on the vector displacement value.
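  • a minimal sketch of this conversion using the numbers above; it assumes the two coordinate systems differ only by the translation shown in FIG26 (no rotation or scale):
```python
import math

# Camera lens center: (0,0) in the camera coordinate system, (30,-40) after
# repositioning in the light-emitting component coordinate system.
offset = (30.0, -40.0)
displacement = math.hypot(*offset)  # sqrt(30**2 + 40**2) = 50.0

def camera_to_projector(point_cam):
    """Translate a feature point from camera coordinates to light-emitting coordinates."""
    return (point_cam[0] + offset[0], point_cam[1] + offset[1])

print(displacement)                    # 50.0
print(camera_to_projector((120, 80)))  # (150.0, 40.0)
```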
  • after converting the feature point coordinates into light-emitting point coordinates in the light-emitting component coordinate system, the controller 23 can fit a new projection surface 1 in the light-emitting component coordinate system from the multiple light-emitting point coordinates. After fitting the new projection surface 1, the controller 23 calculates the angle between the projection surface 1 and the light-emitting surface corresponding to the light-emitting component 21, and controls the light-emitting component 21 to project the corrected projection content onto the projection surface according to that angle, so that the user sees the corrected projection picture.
  • the controller 23 in the process of controlling the light emitting component 21 to project the projection content to the projection surface 1 according to the angle, can also obtain the operating parameters of the light emitting component 21, and calculate the projectable area according to the operating parameters and the angle between the projection surface 1 and the light emitting component 21.
  • the operating parameters may include the projection distance, the focal length or resolution of the light emitting component 21, etc.
  • the controller 23 can calculate the area of the projection screen when projecting vertically (90°) according to the projection distance. Then, according to the angle between the projection surface 1 and the light emitting component 21, the trigonometric function value corresponding to the angle is determined. Finally, the projectable area of the projection screen is calculated based on the area and trigonometric function value of the projection screen.
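  • a rough sketch of this estimate; the throw-ratio model of the screen size at a perpendicular projection and the cosine scaling for an oblique surface are simplifying assumptions, not the exact calculation used by the device:
```python
import math

def projectable_area(projection_distance_m, surface_angle_deg,
                     throw_ratio=1.2, aspect_ratio=16 / 9):
    width = projection_distance_m / throw_ratio   # screen width at a perpendicular (90 degree) projection
    height = width / aspect_ratio
    area_at_90 = width * height
    # Tilting the surface away from 90 degrees stretches the picture along one axis.
    return area_at_90 / math.cos(math.radians(90.0 - surface_angle_deg))

print(round(projectable_area(3.0, 75.0), 3))  # area for a 3 m throw onto a 75-degree surface
```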
  • the projection screen projected by the projection device is usually rectangular.
  • the controller 23 can also define a target projection area in the projectable area.
  • the target projection area is the largest inscribed rectangular area in the projectable area, and the rectangular area has a preset aspect ratio.
  • the target projection area can be adapted to the projection screen played in horizontal or vertical screen.
  • before defining the target projection area, the controller 23 can also identify the source of the video or picture to be projected and extract the screen playback ratio of the projection content.
  • the usual playback ratios include 4:3, 16:9, 2.39:1 or 1.85:1, etc. Based on the comparison between the extracted screen playback ratio and the aspect ratio of the rectangular area, it is determined whether the target projection area is the largest inscribed rectangular area in the horizontal direction or the largest inscribed rectangular area in the vertical direction.
  • FIG. 28 shows a flow chart of dividing a target projection area according to an obstacle avoidance instruction, which specifically includes the following steps:
  • S2802 Determine whether an obstacle avoidance instruction is recognized; if an obstacle avoidance instruction is recognized, execute S2803 to S2804; if no obstacle avoidance instruction is recognized, execute S2805 to S2806;
  • S2806 Delineate a target projection area according to the vertex coordinates of the light emitting surface of the light emitting component.
  • the controller 23 may also detect an obstacle avoidance instruction input by the user for starting the obstacle avoidance function. If an obstacle avoidance instruction is detected, the controller 23 switches the obstacle avoidance state of the projection device to on, and defines the target projection area based on the feature points extracted from the second sampled image. If no obstacle avoidance instruction is detected, the obstacle avoidance state of the projection device is still switched to off, and the controller 23 defines the target projection area based on the vertex coordinates of the light emitting surface of the light emitting component 21.
  • the controller 23 may also inversely transform the coordinates of the target projection area to the light emitting surface of the light emitting component, and control the light emitting component to project the projection content according to the inversely transformed coordinates of the target projection area.
  • the embodiment of the present application further provides a projection picture processing method, which is applied to a projection device, the projection device includes a light output component, a camera and a controller.
  • the projection picture processing method includes:
  • the rectified images include pure color charts and feature charts.
  • S200 Acquire a first sampled image obtained by photographing a pure color chart and a second sampled image obtained by photographing a feature chart.
  • the characteristic contour region is the contour region with the largest area in the first sampling image or the contour region specified by the user.
  • S500 Calculating the angle between the projection surface and the light emitting component based on the feature points, and controlling the light emitting component to project the projection content onto the projection surface according to the angle.
  • the projection device controls the light-emitting component to project the corrected images of the pure color card and the feature card, and obtains the first sampled image of the pure color card and the second sampled image of the feature card taken by the camera. It determines the characteristic contour area according to the first sampled image, and extracts feature points in the second sampled image according to the characteristic contour area.
  • the angle between the projection surface and the light-emitting component is calculated based on the feature points extracted from the second sampling image, and the light-emitting component projects the projection content to the projection surface according to the angle to ensure that when the projection surface is blocked by obstacles, the projection device can extract feature points according to the characteristic contour area after obstacle avoidance, and project the corrected projection image on the projection surface, so as to improve the user experience when using the projection device.
  • when the user turns on the projection device 2, the projection device 2 can project the content preset by the user onto the projection surface, which can be a wall or a screen, and the projection image is displayed on the projection surface for the user to watch.
  • the projection device includes a light emitting component for projecting the projection content onto the projection surface, and the light emitting component includes a laser light source, an optical machine, and a lens.
  • a digital micromirror device (DMD) is provided in the optical machine, which is the core imaging device of the projection device and is used for image formation.
  • the laser light source provides illumination for the optical machine
  • the DMD in the optical machine adjusts the light source beam according to the projection coordinates of the DMD plane, and outputs it to the lens for imaging, and projects it onto the projection medium to form a projection image.
  • the projection device can avoid the obstacle through the screen movement function.
  • however, the obstacle avoidance area is not controlled by the user; when there is an obstacle in the center of the projection, the obstacle avoidance strategy may project the picture to the left of the obstacle, while the user expects it to be on the right.
  • the problem can also be solved through the screen movement function of the projection device.
  • the screen can be adjusted to a position and size that meets the user's needs through screen scaling and screen movement.
  • the image scaling and image movement are achieved by adjusting the projection coordinates of the DMD plane.
  • when the projection device is tilted, in order to keep the projection shape projected onto the projection surface a rectangle, the projection device needs to adjust the projection coordinates of the DMD plane into a trapezoid through automatic keystone correction. At this point, if the projection coordinates of the optical machine's DMD plane are directly moved, the projection shape projected onto the projection surface will change, reducing the user experience.
  • FIG30 it is a schematic diagram of the projection coordinate movement of the DMD plane.
  • the trapezoid at position 1 is the projection coordinate shape of the DMD plane when the projection shape projected onto the projection surface is a rectangle after the automatic trapezoid correction when the projection device is projected sideways.
  • the trapezoid at position 1 of the DMD plane moves to position 2. Since the projection shape projected onto the projection surface at position 1 is a rectangle, when it moves to position 2, the right side of the trapezoid at position 1 coincides with the left side of the trapezoid at position 2, and the right side length of the trapezoid at position 1 is greater than the left side length of the trapezoid at position 2.
  • the projection length of the left side length of the trapezoid at position 2 when projected onto the projection surface must be less than the projection length of the right side length of the trapezoid at position 1 when projected onto the projection surface. Therefore, when the projection device is projected sideways, the direct translation of the projection coordinates of the trapezoid on the DMD plane will cause the projection shape projected onto the projection surface to deform.
  • the principle of picture movement is to control the image to be displayed on different projection coordinates of the DMD plane, thereby achieving the effect of projected picture movement.
  • Figure 31 it is a schematic diagram of the projection coordinate movement of the DMD plane when the projection device is projecting forward.
  • DMD has 2K, 4K, 8K and other sizes. Taking 2K as an example, when the projection device is normally projected without correction, obstacle avoidance, scaling and other cropping screen displays, the maximum projection coordinates of the DMD plane are (0,0), (1920,0), (1920,1080), (0,1080), and the maximum projection is presented according to the projection coordinates.
  • in this case the projection screen cannot be translated; that is, translation of the projection screen requires the projection coordinates of the DMD plane to be smaller than the maximum projection coordinates.
  • at the maximum projection coordinates the projection area is the largest, that is, the maximum projection area, and any movement of the projection screen must stay within the maximum projection area.
  • the projection coordinates of the DMD plane are rectangular, and the projection shape projected onto the projection surface is also rectangular. Therefore, the projection picture is rectangular before and after the movement.
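  • a minimal sketch of moving rectangular projection coordinates of a forward-projecting 2K device within the maximum projection area; the clamping strategy is an assumption:
```python
DMD_MAX_W, DMD_MAX_H = 1920, 1080   # maximum projection coordinates for a 2K DMD

def move_rect(rect, dx, dy):
    """rect: (x0, y0, x1, y1) rectangular projection coordinates on the DMD plane."""
    x0, y0, x1, y1 = rect
    # Limit the shift so the moved rectangle stays inside the maximum projection area.
    dx = max(-x0, min(dx, DMD_MAX_W - x1))
    dy = max(-y0, min(dy, DMD_MAX_H - y1))
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

print(move_rect((100, 100, 1700, 1000), 300, 0))  # x shift limited to 220 -> (320, 100, 1920, 1000)
```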
  • FIG32 it is a schematic diagram of the movement of the projection coordinates of the DMD plane when the projection device is projecting sideways.
  • automatic trapezoidal correction is required to adjust the projection coordinates of the DMD plane to a trapezoid. Accordingly, in order to ensure that the projection picture is a rectangle before and after the movement, the trapezoidal shape of the projection coordinates after the movement is inconsistent with the trapezoidal shape of the projection coordinates before the movement.
  • the embodiment of the present application also provides a projection device, as shown in FIG33, including a light output component 21 and a controller 23.
  • the controller can adjust the projection coordinates of the DMD plane based on the screen movement instruction to obtain the projection coordinates after the DMD plane moves, and project the projection content onto the projection surface according to the moved projection coordinates, so as to solve the problem of the projection screen deforming when it is moved.
  • the controller 23 can be used to execute the program steps corresponding to the projection screen movement, including the following contents:
  • S100 Acquire a screen moving instruction input by a user.
  • the screen movement instruction includes a moving direction and a moving distance.
  • the ways of inputting the screen movement instruction may include manual input and voice input.
  • manual input can be input through the physical buttons on the projection device, or the physical buttons on the remote control of the projection device.
  • the moving direction and moving distance can be input through the up, down, left, and right keys on the remote control of the projection device, thereby controlling the movement of the projected screen.
  • Voice input can generate a screen movement instruction by recognizing the user's voice.
  • the user can press the screen movement button on the projection device or the remote control of the projection device, and then input voice, for example, move left three steps, and the projection device recognizes the moving direction and moving distance in the voice, that is, obtains the screen movement instruction.
  • because the user may input multiple movement directions, or the projection device may recognize multiple movement directions when recognizing the user's voice, the acquired picture movement instruction may contain multiple directions, such as left and right at the same time, or up and down at the same time, so that the direction in which the projected picture should move cannot be clearly determined.
  • the controller 23 can respond to the screen movement instruction and detect the movement direction in the screen movement instruction; if the screen movement instruction also contains an opposite movement direction, generate a second prompt information and control the light-emitting component 21 to project the second prompt information.
  • the controller 23 detects the screen movement instruction; if opposite movement directions are detected, a second prompt message is generated to prompt the user that the movement direction input is abnormal, and the light-emitting component 21 is controlled to project the second prompt message onto the projection surface 1.
  • the second prompt message can be a prompt such as "The movement direction is abnormal, please re-enter” to prompt the user to re-enter the screen movement instruction, so that the user re-enters the screen movement instruction, and the controller 23 re-detects the screen movement instruction. If no abnormal movement direction is detected, the subsequent projection screen movement operation can be performed according to the screen movement instruction.
  • the light emitting surface is the plane of the core imaging device DMD used for image formation in the light emitting assembly 21.
  • the vertex coordinates of the current light emitting surface are the vertex coordinates of the light emitting surface after automatic obstacle avoidance or correction of the projection device for image scaling.
  • the projection device can automatically detect obstacles in the projection area, and project the projection image after determining that there are no obstacles in the projection area based on the obstacle detection results, thereby realizing the automatic obstacle avoidance function.
  • the projection area of the projection device before executing the automatic obstacle avoidance process is different from the projection area after completing the obstacle avoidance process.
  • when the projection angle of the projection device and the distance to the projection surface change, the projected image may be deformed. In order to ensure that the projected image is a rectangle, the projection device may perform automatic keystone correction.
  • a storage module can be pre-configured in the projection device, and the storage module can record and store in real time the results of the automatic obstacle avoidance and automatic keystone correction of the projection device.
  • after receiving the picture movement instruction, the projection device can read the angle between the projection surface 1 and the light emitting component 21 from the storage module, and calculate the rotation matrix between the projection surface 1 and the light emitting surface.
  • the projection device further includes a camera 22, which is configured to capture the projection content image; the controller 23 is further configured to execute the following contents:
  • S3401 Acquire the projection image captured by the camera
  • S3406 Construct a rotation matrix according to the angle between the projection plane and the light emitting surface.
  • the controller 23 can control the light emitting component 21 to project the calibration chart onto the projection surface, and then control the camera 22 to capture the projection image of the calibration chart.
  • the camera 22 can be a binocular camera or an RGBD camera. Based on the binocular camera or an RGBD camera, the coordinates of the feature points of the calibration chart in the projection image in the camera coordinate system can be obtained.
  • the controller 23 can realize the conversion of the feature point coordinates from the camera coordinate system to the light emitting component coordinate system, and then fit all the feature points in the light emitting component coordinate system, fit the projection plane, and then fit the angle between the projection plane and the light emitting surface. According to the angle between the projection plane and the light emitting surface, the rotation matrix between the projection plane and the light emitting surface can be calculated.
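  • a minimal sketch of this fitting step: the feature points, already converted into the light-emitting component coordinate system, are fitted to a plane by least squares, and the angle between the fitted plane and the light-emitting surface is taken from the plane normal; the axis convention (light-emitting surface normal along Z) is an assumption:
```python
import numpy as np

def fit_plane_normal(points):
    """points: (N, 3) feature point coordinates; returns the unit normal of the fitted plane."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1] / np.linalg.norm(vt[-1])  # direction of least variance = plane normal

points = np.random.rand(12, 3)             # placeholder feature points
normal = fit_plane_normal(points)
optical_axis = np.array([0.0, 0.0, 1.0])   # assumed normal of the light-emitting surface
angle_deg = np.degrees(np.arccos(abs(normal @ optical_axis)))
print(f"angle between projection plane and light-emitting surface: {angle_deg:.1f} deg")
```
  • the rotation matrix between the projection plane and the light emitting surface can then be built from this angle, for example from the corresponding rotation vector (cv2.Rodrigues when OpenCV is used).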
  • S300 Calculate the current projection area according to the rotation matrix and the vertex coordinates of the current light-emitting surface.
  • the controller 23 can convert the vertex coordinates of the current light emitting surface in the light emitting surface coordinate system into coordinates in the projection surface coordinate system based on the acquired transformation matrix, and thus obtain the actual position coordinate values of the current actual projection area.
  • for example, when the vertex coordinates of the current light emitting surface are converted into the projection surface coordinate system based on the rotation matrix between the projection surface and the light emitting surface, as shown in FIG. 35, the corresponding coordinates are A(x1,y1), B(x2,y2), C(x3,y3), D(x4,y4), i.e., the vertex coordinates of the current projection area.
  • specifically, the vertex coordinates of the current light-emitting surface are substituted into a formula parameterized by M and R to calculate the current projection area, where:
  • M is the hardware parameter of the light-emitting component, that is, the internal (intrinsic) parameter;
  • R is the rotation matrix.
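  • The formula itself is not reproduced here beyond its two parameters; as a hedged illustration, one plausible pinhole-style reading of mapping a light-emitting-surface vertex (x, y) through the intrinsic matrix M and the rotation R onto the projection surface is sketched below. The exact form used by the device may differ.

```python
import numpy as np

def current_projection_area(dmd_vertices, M, R, plane_distance=1.0):
    """Map the four DMD (light-emitting surface) vertex coordinates into the
    projection-surface coordinate system.

    dmd_vertices   : iterable of four (x, y) DMD pixel coordinates a, b, c, d
    M              : 3x3 intrinsic ("hardware") matrix of the light emitting component
    R              : 3x3 rotation matrix between projection surface and light emitting surface
    plane_distance : assumed scale used to intersect the rays with the projection plane
    """
    M_inv = np.linalg.inv(M)
    area = []
    for x, y in dmd_vertices:
        ray = R @ M_inv @ np.array([x, y, 1.0])     # back-project the pixel, then rotate
        ray = ray / ray[2] * plane_distance         # intersect with the projection plane
        area.append((ray[0], ray[1]))               # A, B, C, D on the projection surface
    return area
```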
  • S400 Calculate the vertex moving distance according to the vertex coordinates of the current projection area, the moving direction and the moving distance.
  • after the current projection area is obtained, the projection area after movement can be calculated according to the picture movement instruction input by the user.
  • the vertex coordinates of the current projection area can be moved according to the moving direction and the moving distance to obtain the vertex coordinates of the moved projection area, so as to determine the projection area after the movement. Since the size of the projected picture changes with the projection distance, the vertex moving distance cannot be represented by a fixed length. Therefore, as shown in FIG. 36, the controller 23 is further configured to execute the following:
  • S3601 Obtain the movement mode to which the movement direction belongs; if the movement mode is left-right movement, execute S3602-S3603; if the movement mode is up-down movement, execute S3604-S3605;
  • S3602 Calculate the projection width ratio according to the vertex coordinates of the current projection area;
  • S3603 Calculate the vertex moving distance according to the projection width ratio and the moving distance;
  • S3604 Calculate the projection height ratio according to the vertex coordinates of the current projection area
  • S3605 Calculate the vertex moving distance according to the projection height ratio and the moving distance.
  • the controller 23 is further configured to perform the following:
  • if the movement mode is left-right movement, the first horizontal coordinate difference and the second horizontal coordinate difference of the current projection area are calculated: the first horizontal coordinate difference is the difference between the horizontal coordinates of the two vertices of the upper side of the current projection area, and the second horizontal coordinate difference is the difference between the horizontal coordinates of the two vertices of the lower side of the current projection area.
  • the first horizontal coordinate difference width1 is the difference between the horizontal coordinates of the two vertices A and B on the upper side of the current projection area
  • the second horizontal coordinate difference width2 is the difference between the horizontal coordinates of the two vertices C and D on the lower side of the current projection area.
  • the projection width ratio is calculated according to the first horizontal coordinate difference and the second horizontal coordinate difference.
  • the projection width ratio is the average of the first horizontal coordinate difference and the second horizontal coordinate difference, that is, (width1+width2)/2.
  • similarly, if the movement mode is up-down movement, the first ordinate difference and the second ordinate difference of the current projection area are calculated: the first ordinate difference is the difference between the ordinates of the two vertices on the left side of the current projection area, and the second ordinate difference is the difference between the ordinates of the two vertices on the right side of the current projection area.
  • for example, as shown in FIG. 35, the first ordinate difference height1 is the difference between the vertical coordinates of the two vertices A and D on the left side of the current projection area, and the second ordinate difference height2 is the difference between the vertical coordinates of the two vertices B and C on the right side of the current projection area.
  • the projection height ratio is calculated according to the first ordinate difference and the second ordinate difference.
  • the projection height ratio is the average of the first ordinate difference and the second ordinate difference, that is, (height1+height2)/2.
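  • A short sketch of the averaging described above (the text calls the results "width ratio" and "height ratio", but defines them as the averages of the two edge extents); the vertex order A (top-left), B (top-right), C (bottom-right), D (bottom-left) is assumed from FIG. 35.

```python
def projection_width_ratio(A, B, C, D):
    """(width1 + width2) / 2, with width1 the top edge A-B and width2 the
    bottom edge D-C; each vertex is an (x, y) pair."""
    width1 = abs(B[0] - A[0])
    width2 = abs(C[0] - D[0])
    return (width1 + width2) / 2.0

def projection_height_ratio(A, B, C, D):
    """(height1 + height2) / 2, with height1 the left edge A-D and height2
    the right edge B-C."""
    height1 = abs(D[1] - A[1])
    height2 = abs(C[1] - B[1])
    return (height1 + height2) / 2.0
```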
  • the movement mode to which the movement direction belongs is obtained. If the movement mode is left-right movement, the vertex moving distance Disstep1 is calculated from the projection width ratio (width1+width2)/2 and the moving distance, where width1 is the first horizontal coordinate difference, width2 is the second horizontal coordinate difference, and step is the moving distance (number of steps) input by the user.
  • the movement mode to which the movement direction belongs is obtained. If the movement mode is up-down movement, the vertex moving distance Disstep2 is calculated from the projection height ratio (height1+height2)/2 and the moving distance, where height1 is the first ordinate difference, height2 is the second ordinate difference, and step is the moving distance (number of steps) input by the user.
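  • The text states only that the vertex moving distance is derived from the width (or height) average and the user's step count; the exact proportionality is not given here. The sketch below therefore assumes a fixed per-step fraction STEP_FRACTION, which is an illustrative constant rather than a value from the source.

```python
STEP_FRACTION = 0.05   # assumed fraction of the average width/height moved per step

def vertex_move_distance_lr(width1, width2, step):
    """Disstep1 for left-right movement, proportional to the average width."""
    return (width1 + width2) / 2.0 * STEP_FRACTION * step

def vertex_move_distance_ud(height1, height2, step):
    """Disstep2 for up-down movement, proportional to the average height."""
    return (height1 + height2) / 2.0 * STEP_FRACTION * step
```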
  • S500 Calculate target projection coordinates according to vertex movement distances and vertex coordinates of the current projection area.
  • the controller 23 is further configured to execute the following:
  • S3701 Obtain the movement mode to which the movement direction belongs; if the movement mode is left-right movement, execute S3702; if the movement mode is up-down movement, execute S3703;
  • S3702 moving the horizontal coordinate of the vertex in the current projection area by the vertex moving distance in the moving direction;
  • S3703 moving the vertical coordinate of the vertex in the current projection area by the vertex moving distance in the moving direction;
  • if the movement direction is left, the horizontal coordinates of the vertices are reduced by the vertex moving distance, and the target projection coordinates are: A'(x1-Disstep1,y1), B'(x2-Disstep1,y2), C'(x3-Disstep1,y3), D'(x4-Disstep1,y4).
  • if the movement direction is right, the horizontal coordinates of the vertices are increased by the vertex moving distance, and the target projection coordinates are: A'(x1+Disstep1,y1), B'(x2+Disstep1,y2), C'(x3+Disstep1,y3), D'(x4+Disstep1,y4).
  • if the movement direction is up, the vertical coordinates of the vertices are reduced by the vertex moving distance, and the target projection coordinates are: A'(x1,y1-Disstep2), B'(x2,y2-Disstep2), C'(x3,y3-Disstep2), D'(x4,y4-Disstep2).
  • if the movement direction is down, the vertical coordinates of the vertices are increased by the vertex moving distance, and the target projection coordinates are: A'(x1,y1+Disstep2), B'(x2,y2+Disstep2), C'(x3,y3+Disstep2), D'(x4,y4+Disstep2).
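  • The four cases above amount to shifting either the abscissas or the ordinates of all four vertices by the vertex moving distance; a compact sketch (directions and distances as defined above, helper name illustrative):

```python
def move_projection_area(vertices, direction, dist):
    """Shift the vertex coordinates A, B, C, D of the current projection area.

    vertices  : list of four (x, y) tuples
    direction : 'left', 'right', 'up' or 'down'
    dist      : Disstep1 for left/right movement, Disstep2 for up/down movement
    """
    dx = {'left': -dist, 'right': dist}.get(direction, 0.0)
    dy = {'up': -dist, 'down': dist}.get(direction, 0.0)
    return [(x + dx, y + dy) for x, y in vertices]   # target projection coordinates
```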
  • since the vertex coordinates of the current projection area describe an ideal projection area calculated from the vertex coordinates of the current light-emitting surface, actual errors exist. If the calculated projection area is not corrected, the error accumulates as the number of projection picture movements increases, and the final effect is that the projection picture becomes severely deformed. To avoid this problem, the projection shape needs to be corrected each time the projection area is calculated, so that the error introduced by one movement is not carried into the next movement.
  • the correction method is to select the largest inscribed 16:9 rectangle within the calculated projection area A, B, C, D, and then perform the projection picture movement; a sketch of such a selection is given below.
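  • The text only names the selection of the largest inscribed 16:9 rectangle; the sketch below shows a simplified variant (an axis-aligned 16:9 rectangle centred at the quadrilateral's centroid, grown by binary search), which is an assumption and not necessarily the true maximum inscribed rectangle.

```python
def _cross_z(o, a, p):
    """z-component of the 2-D cross product (a - o) x (p - o)."""
    return (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])

def _inside(pt, quad):
    """True if pt lies inside (or on) the convex quadrilateral quad."""
    s = [_cross_z(quad[i], quad[(i + 1) % 4], pt) for i in range(4)]
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

def inscribed_16_9(quad, iterations=40):
    """Largest centred, axis-aligned 16:9 rectangle fitting inside quad."""
    cx = sum(p[0] for p in quad) / 4.0
    cy = sum(p[1] for p in quad) / 4.0
    lo, hi = 0.0, max(abs(p[0] - cx) for p in quad)      # half-width search range
    for _ in range(iterations):
        w = (lo + hi) / 2.0
        h = w * 9.0 / 16.0
        corners = [(cx - w, cy - h), (cx + w, cy - h),
                   (cx + w, cy + h), (cx - w, cy + h)]
        if all(_inside(c, quad) for c in corners):
            lo = w                                       # rectangle fits, try larger
        else:
            hi = w
    w, h = lo, lo * 9.0 / 16.0
    return [(cx - w, cy - h), (cx + w, cy - h), (cx + w, cy + h), (cx - w, cy + h)]
```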
  • the projection picture needs to be moved within the maximum projection area that the light emitting component 21 can project. Therefore, after determining the moved projection area according to the picture movement instruction, the controller 23 needs to determine whether the moved projection area is within the maximum projection area. The controller 23 is also configured to perform the following:
  • the maximum projection area is calculated based on the rotation matrix and the vertex coordinates of the maximum light-emitting surface.
  • the vertex coordinates of the maximum light-emitting surface of the light-emitting component 21 are amax(0,0), bmax(1920,0), cmax(1920,1080), and dmax(0,1080).
  • the vertex coordinates of the maximum projection area are calculated to be Amax(x1max,y1max), Bmax(x2max,y2max), Cmax(x3max,y3max), and Dmax(x4max,y4max).
  • for the specific calculation formula, please refer to the calculation steps of the current projection area, which will not be repeated here.
  • if the maximum projection area includes the target projection coordinates, the step of converting the target projection coordinates to the light emitting surface is performed.
  • if the maximum projection area does not include the target projection coordinates, first prompt information is generated, and the light emitting component 21 is controlled to project the first prompt information.
  • to determine whether the moved projection area is within the maximum projection area, it is necessary to determine whether all of the target projection coordinates, i.e., the vertex coordinates of the moved projection area, are within the maximum projection area. If all target projection coordinates are within the maximum projection area, the controller 23 can continue to execute the step of converting the target projection coordinates to the light-emitting surface. If any target projection coordinate is not within the maximum projection area, the moved projection area is not completely within the maximum projection area, that is, part of it has been moved outside the maximum projection area.
  • the controller 23 can generate a first prompt message for prompting the user that the projection area after the move has reached the boundary, and control the light-emitting component to project the first prompt message to the projection surface, prompting the user that the projection picture after the move according to the picture movement instruction exceeds the maximum projection area that can be projected.
  • to determine whether the target projection coordinates are within the maximum projection area, the principle of two-dimensional vector cross multiplication is used: vector a and vector b are cross-multiplied; if the result is less than 0, vector b is in the clockwise direction of vector a; if the result is greater than 0, vector b is in the counterclockwise direction of vector a; if the result is equal to 0, vector a and vector b are collinear.
  • here, clockwise and counterclockwise mean that the two vectors are translated so that their starting points coincide, and the rotation from one vector to the other in that direction is less than 180 degrees.
  • it can be understood that if a vertex of the moved projection area is within the maximum projection area, then when walking once around the boundary of the maximum projection area, the vertex always keeps the same orientation relative to the walking path; if the vertex is outside the maximum projection area, its orientation relative to the walking path changes along the way.
  • the controller 23 is further configured to execute the following contents:
  • S3801 Determine a side length vector and a connection vector according to the vertex coordinates and the target projection coordinates of the maximum projection area, wherein the side length vector is a vector between the coordinates of two adjacent vertices of the maximum projection area, and the connection vector is a vector between the starting point coordinates of each side length vector and the target projection coordinates;
  • S3802 Perform vector cross multiplication on the edge length vector and the connection vector respectively to obtain a cross product vector
  • S3803 Determine whether the values of the cross products have the same sign; if the values have the same sign or are zero, execute S3804; if the values have different signs, execute S3805;
  • S3804 Determine that the maximum projection area contains the target projection coordinates;
  • S3805 Determine that the maximum projection area does not contain the target projection coordinates.
  • for example, taking vertex A' of the moved projection area, the four side length vectors of the four sides of the maximum projection area are selected in the clockwise direction, and the four connection vectors corresponding to the lines between the four vertices of the maximum projection area and the vertex coordinate A' are calculated according to the movement direction.
  • vector cross multiplication is performed on the four side length vectors and the four connection vectors respectively to obtain four cross products. If a cross product is less than 0, the connection vector is located in the clockwise direction of the side length vector; if it is greater than 0, the connection vector is located in the counterclockwise direction of the side length vector; if it is equal to 0, the connection vector and the side length vector are collinear.
  • if at least one of the vertices A', B', C', D' of the moved projection area is outside the maximum projection area, a first prompt message is generated to prompt the user that the picture has been moved to the edge.
  • the controller 23 can also select the four side length vectors of the four sides of the maximum projection area in the counterclockwise direction, which are AmaxDmax, DmaxCmax, CmaxBmax, and BmaxAmax.
  • the four side length vectors are respectively cross-multiplied with the four connection vectors to obtain four cross-product vectors.
  • the values of the four cross-product vectors are judged in turn. If the values of the cross-product vectors are of different signs and there are values less than 0, that is, there is a connection vector located in the clockwise direction of the side length vector, then it is determined that the maximum projection area does not include the vertex coordinates A' of the projection area after the move.
  • if the values of the four cross products have the same sign or are zero, i.e., all are greater than or equal to 0, so that each connection vector is located in the counterclockwise direction of its side length vector or is collinear with it, then it is determined that the maximum projection area includes the vertex coordinate A' of the moved projection area.
  • the vertex coordinates of the projection area after the move are determined in sequence. If a vertex is determined to be outside the maximum projection area, a first prompt message is directly generated. For example, when the controller 23 determines in sequence whether the vertices A', B', C', and D' of the projection area after the move are within the maximum projection area, it determines whether the vertex A' is within the maximum projection area according to the above steps. If the vertex A' is within the maximum projection area, it continues to determine whether the vertex B' is within the maximum projection area. If the vertex B' is outside the maximum projection area, the first prompt message is directly generated.
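  • A minimal sketch of the cross-product containment test in S3801-S3805, assuming the vertices of the maximum projection area are given in a consistent (all-clockwise or all-counterclockwise) order; the function names are illustrative.

```python
def contains_vertex(max_quad, target):
    """S3801-S3805: for each side length vector of the maximum projection
    area, form the connection vector from the side's starting vertex to the
    target projection coordinate and take the 2-D cross product; the point is
    inside when all four cross products share the same sign (or are zero)."""
    crosses = []
    for i in range(4):
        sx, sy = max_quad[i]                        # starting vertex of this side
        ex, ey = max_quad[(i + 1) % 4]              # end vertex of this side
        side = (ex - sx, ey - sy)                   # side length vector
        conn = (target[0] - sx, target[1] - sy)     # connection vector
        crosses.append(side[0] * conn[1] - side[1] * conn[0])
    return all(c >= 0 for c in crosses) or all(c <= 0 for c in crosses)

def moved_area_within(max_quad, moved_quad):
    """Check A', B', C', D' in turn; a False result means the first prompt
    information should be generated."""
    return all(contains_vertex(max_quad, v) for v in moved_quad)
```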
  • S600 Based on the rotation matrix, the target projection coordinates are converted to the light-emitting surface to obtain the light-emitting projection coordinates, and the light-emitting component is controlled to project the projection content onto the projection surface according to the light-emitting projection coordinates.
  • if the target projection coordinates are all within the maximum projection area, the controller 23 can convert the vertex coordinates of the moved projection area into light-emitting projection coordinates in the light-emitting component coordinate system based on the acquired transformation matrix, and thus obtain the actual position coordinate values required for the light-emitting component to project the projection picture moved according to the picture movement instruction. It can also be understood that, after the light-emitting component projects the playback content according to the light-emitting projection coordinates, the projection picture is moved without deformation.
  • the vertex coordinates of the moved projection area are A', B', C', and D', all of which are within the maximum projection area.
  • the vertex coordinates of the moved projection area are converted to the light-emitting component coordinate system to obtain the light-emitting projection coordinates.
  • the light-emitting component projects the projection content onto the projection surface according to the light-emitting projection coordinates, as shown in Figure 40, thus realizing the movement of the projection image.
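  • The conversion back to the light-emitting surface is described only verbally; under the same assumed pinhole-style convention as the earlier projection-area sketch, the inverse mapping can be written as follows (R.T is the inverse rotation; the device's exact formula is not given in the text).

```python
import numpy as np

def to_light_emitting_surface(target_vertices, M, R):
    """Map the target projection coordinates on the projection surface back
    to light-emitting (DMD) coordinates: undo the rotation with R.T, then
    apply the intrinsic matrix M and normalize."""
    dmd_coords = []
    for X, Y in target_vertices:
        p = M @ R.T @ np.array([X, Y, 1.0])
        dmd_coords.append((p[0] / p[2], p[1] / p[2]))
    return dmd_coords
```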
  • the embodiment of the present application also provides a projection image processing method, which is applied to a projection device, and the projection device includes a light emitting component and a controller.
  • the projection image processing method includes:
  • S100 Acquire the picture movement instruction input by the user, wherein the picture movement instruction includes the moving direction and the moving distance.
  • S200 In response to the picture movement instruction, acquire the vertex coordinates of the current light-emitting surface and the rotation matrix between the projection surface and the light-emitting surface.
  • S300 Calculate the current projection area according to the rotation matrix and the vertex coordinates of the current light-emitting surface.
  • S400 Calculate the vertex moving distance according to the vertex coordinates of the current projection area, the moving direction and the moving distance.
  • S500 Calculate target projection coordinates according to vertex movement distances and vertex coordinates of the current projection area.
  • S600 Based on the rotation matrix, the target projection coordinates are converted to the light-emitting surface to obtain the light-emitting projection coordinates, and the light-emitting component is controlled to project the projection content onto the projection surface according to the light-emitting projection coordinates.
  • the projection device can obtain the vertex coordinates of the current light-emitting surface and the rotation matrix between the projection surface and the light-emitting surface.
  • the current projection area is calculated based on the rotation matrix and the vertex coordinates of the current light-emitting surface;
  • the vertex movement distance is calculated based on the vertex coordinates of the current projection area, the movement direction and the movement distance;
  • the target projection coordinates are calculated based on the vertex movement distance and the vertex coordinates of the current projection area.
  • the target projection coordinates are converted to the light-emitting surface to obtain the light-emitting projection coordinates, and the light-emitting component is controlled to project the projection content onto the projection surface according to the light-emitting projection coordinates, so as to solve the problem of deformation of the projection screen when the projection device moves the screen under side projection, thereby improving the user experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Projection Apparatus (AREA)
  • Transforming Electric Information Into Light Information (AREA)

Abstract

The present application discloses a projection device and a projection image processing method. The projection device includes a light emitting component, a camera, and a controller. The light emitting component is configured to project projection content onto a projection surface, and the camera is configured to capture sample images. In response to different instructions, the controller can perform processing such as obstacle avoidance, correction, and movement of the projection picture, improving the user experience.

Description

投影设备及投影画面处理方法
相关申请的交叉引用
本申请要求在2022年09月29日提交中国专利局、申请号为202211203032.2;本申请要求在2022年09月29日提交中国专利局、申请号为202211212932.3;本申请要求在2022年09月29日提交中国专利局、申请号为202211195978.9的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及显示设备技术领域,尤其涉及一种投影设备及投影画面处理方法。
背景技术
投影设备是一种可以将图像或视频投射到屏幕上的显示设备。投影设备可以将特定颜色的激光光线通过光学透镜组件的折射作用,投射到屏幕上形成具体影像。进而将显示图像以更大的比例呈现给用户,提升用户的观看体验。
在投影过程中,需要将投影设备与屏幕之间保持一定距离,使屏幕上形成的影像可以符合光学透镜组件的焦距范围,以获得清晰的影像。无论是投影设备的放置位置、设置角度的改变,或者是投影面的缺陷、投影面与投影设备之间相对位置的改变,或者是投影设备与投影面之间出现障碍等原因都可能造成投影效果受到影响,降低用户体验。
发明内容
本申请实施例提供一种投影设备,包括:出光组件,被配置为投射投影内容至投影面;相机,被配置为拍摄采样图像;控制器,被配置为:响应于避障指令,获取变换矩阵,以及获取用户输入的调整指令,所述变换矩阵为所述相机与所述出光组件之间坐标的单应性矩阵;所述调整指令中包括用户指定的第一障碍目标;在采样图像中识别所述第一障碍目标,所述采样图像为所述相机在所述出光组件投射矫正图像时拍摄获得的图像;根据所述第一障碍目标在所述采样图像中划定可投影区域,所述可投影区域为所述采样图像中除所述第一障碍目标以外区域所容纳的最大预设宽高比的矩形区域;按照所述可投影区域和所述变换矩阵确定目标投影区域,以及控制所述出光组件将投影内容投射至所述目标投影区域。
本申请实施例还提供一种投影画面处理方法,应用于投影设备,所述投影设备包括出光组件、相机以及控制器;所述投影画面处理方法包括:响应于避障指令,获取变换矩阵,以及获取用户输入的调整指令,所述变换矩阵为所述相机与所述出光组件之间坐标的单应性矩阵;所述调整指令中包括用户指定的第一障碍目标;在采样图像中识别所述第一障碍目标,所述采样图像为所述相机在所述出光组件投射矫正图像时拍摄获得的图像;根据所述第一障碍目标在所述采样图像中划定可投影区域,所述可投影区域为所述采样图像中除所述第一障碍目标以外区域所容纳的最大预设宽高比的矩形区域;按照所述可投影区域和所述变换矩阵确定目标投影区域,以及控制所述出光组件将投影内容投射至所述目标投影区域。
本申请实施例还提供一种投影设备,包括:出光组件,被配置为投射投影内容至投影面;相机,被配置为拍摄采样图像;控制器,被配置为:响应于投影画面矫正指令,控制所述出光组件投射矫正图像,所述矫正图像包括纯色图卡和特征图卡;获取所述相机对所述纯色图卡拍摄获得的第一采样图像,以及对所述特征图卡拍摄获得的第二采样图像;根据所述第一采样图像确定特征轮廓区域,所述特征轮廓区域为所述第一采样图像中面积最大的轮廓区域或用户指定的轮廓区域;按照所述特征轮廓区域在所述第二采样图像中提取特征点;基于所述特征点计算投影面与所述出光组件之间的夹角,以及控制所述出光组件根据所述夹角投射投影内容至投影面。
本申请实施例还提供一种投影画面处理方法,应用于投影设备,所述投影设备包括出光组件、相机以及控制器;所述投影画面处理方法包括:响应于投影画面矫正指令,控制所述出光组件投射矫正图像,所述矫正图像包括纯色图卡和特征图卡;获取所述相机对所述纯色图卡拍摄获得的第一采样图像,以及对所述特征图卡拍摄获得的第二采样图像;根据所述第一采样图像确定特征轮廓区域,所述特征轮廓区域为所述第一采样图像中面积最大的轮廓区域或用户指定的轮廓区域;按照所述特征轮廓区域在所述第二采样图像中提取特征点;基于所述特征点计算投影面与所述出光组件之间的夹角,以及控制所述出光 组件根据所述夹角投射投影内容至投影面。
本申请实施例还提供一种投影设备,包括:出光组件,被配置为投射投影内容至投影面;控制器,被配置为:获取用户输入的画面移动指令,所述画面移动指令包括移动方向和移动距离;响应于所述画面移动指令,获取当前出光面的顶点坐标以及投影面与出光面之间的旋转矩阵;根据所述旋转矩阵与所述当前出光面的顶点坐标计算当前投影区域;根据所述当前投影区域的顶点坐标、所述移动方向和所述移动距离计算顶点移动距离;按照所述顶点移动距离和所述当前投影区域的顶点坐标计算目标投影坐标;基于所述旋转矩阵,将所述目标投影坐标转换至所述出光面,得到出光投影坐标,以及控制所述出光组件按照所述出光投影坐标将投影内容投射至投影面。
本申请实施例还提供一种投影画面处理方法,应用于投影设备,所述投影设备包括出光组件以及控制器;所述投影画面处理方法包括:获取用户输入的画面移动指令,所述画面移动指令包括移动方向和移动距离;响应于所述画面移动指令,获取当前出光面的顶点坐标以及投影面与出光面之间的旋转矩阵;根据所述旋转矩阵与所述当前出光面的顶点坐标计算当前投影区域;根据所述当前投影区域的顶点坐标、所述移动方向和所述移动距离计算顶点移动距离;按照所述顶点移动距离和所述当前投影区域的顶点坐标计算目标投影坐标;基于所述旋转矩阵,将所述目标投影坐标转换至所述出光面,得到出光投影坐标,以及控制所述出光组件按照所述出光投影坐标将投影内容投射至投影面。
附图说明
图1为根据本申请实施例的投影设备的投影状态示意图;
图2为根据本申请实施例的投影设备的结构示意图;
图3为根据本申请实施例的投影设备的电路架构示意图;
图4为根据本申请实施例的投影设备的光路示意图;
图5为根据本申请实施例的镜头的结构示意图;
图6为本申请实施例的距离传感器和相机的结构示意图;
图7为根据本申请实施例的投影设备的系统框架示意图;
图8为根据本申请实施例的投影设备执行自动避障的示意图;
图9为根据本申请实施例的投影设备根据用户指令执行自动避障的示意图;
图10为根据本申请实施例的根据用户指令执行避障的示意图;
图11为根据本申请实施例的投影设备投射纯色图卡和特征图卡的示意图;
图12为根据本申请实施例的投影设备根据第一障碍目标划分可投影区域的示意图;
图13为根据本申请实施例的投影设备根据向媒资数据流插入图卡的示意图;
图14为根据本申请实施例的投影设备向媒资数据流插入图卡后执行步骤的示意图;
图15为根据本申请实施例的投影设备识别用户输入的语音指令的示意图;
图16为根据本申请实施例的投影设备判断用户输入的调整指令是否符合标准范围的示意图;
图17为根据本申请实施例的投影设备识别第二故障目标的示意图;
图18为根据本申请实施例的投影画面处理方法的流程图之一;
图19为根据本申请实施例的投影面倾斜时投影设备的成像示意图;
图20为根据本申请实施例的投影设备与投影面之间存在障碍物时的成像示意图;
图21为根据本申请实施例的投影设备执行画面矫正的示意图;
图22为根据本申请实施例的相机拍摄的第一采样图像示意图;
图23为根据本申请实施例的相机拍摄的第二采样图像示意图;
图24为根据本申请实施例的在第二采样图像提取备选轮廓区域示意图;
图25为根据本申请实施例的投影面存在凹凸情况的出光组件投射示意图;
图26为根据本申请实施例的相机和出光组件的坐标系转换示意图;
图27为根据本申请实施例的根据投影画面划定目标投影区域的示意图;
图28为根据本申请实施例的根据避障指令划定目标投影区域的示意图;
图29为根据本申请实施例的投影画面处理方法的流程图之二;
图30为根据本申请实施例的DMD平面的投影坐标移动的示意图;
图31为根据本申请实施例的投影设备正投时DMD平面的投影坐标移动的示意图;
图32为根据本申请实施例的投影设备侧投时DMD平面的投影坐标移动的示意图;
图33为根据本申请实施例的执行画面移动的示意图;
图34为根据本申请实施例的获取旋转矩阵的流程示意图;
图35为根据本申请实施例的当前投影区域的示意图;
图36为根据本申请实施例的计算顶点移动距离的流程示意图;
图37为根据本申请实施例的计算目标投影坐标的流程示意图;
图38为根据本申请实施例的确定最大投影区域是否包含目标投影坐标的流程示意图;
图39为根据本申请实施例的边长向量与连接向量的示意图;
图40为根据本申请实施例的移动后的投影画面的示意图;
图41为根据本申请实施例的投影画面处理方法的流程图之三。
具体实施方式
为使本申请的目的和实施方式更加清楚,下面将结合本申请示例性实施例中的附图,对本申请示例性实施方式进行清楚、完整地描述,显然,描述的示例性实施例仅是本申请一部分实施例,而不是全部的实施例。需要说明的是,本申请中对于术语的简要说明,仅是为了方便理解接下来描述的实施方式,而不是意图限定本申请的实施方式。除非另有说明,这些术语应当按照其普通和通常的含义理解。
投影仪是一种可以将图像、或视频投射到屏幕上的设备,投影仪可以通过不同的接口同计算机、广电网络、互联网、视频压缩盘片(Video Compact Disc,简称VCD)、数字视频光盘(Digital Video Disk,简称DVD)、游戏机、数码摄像机(Digital Video,简称DV)等相连接播放相应的视频信号。投影仪广泛应用于家庭、办公室、学校和娱乐场所等。
图1为根据本申请实施例的投影设备的投影状态示意图,图2为根据本申请实施例的投影设备的结构示意图。
在一些实施例中,参考图1-2,投影面1固定于第一位置,投影面1可以为投影屏幕或墙壁。投影设备2放置于第二位置,使得其投影出的画面与投影面1吻合。投影设备2包括激光光源100,光机200,镜头300,投影画面被投射至投影介质400上。其中,激光光源100为光机200提供照明,光机200对光源光束进行调制,并输出至镜头300进行成像,投射至投影介质400形成投影画面。由于激光光源100,光机200,镜头300共同用于发出投影光,以投射投影画面,因此将激光光源100,光机200,镜头300统称为出光组件21。
在一些实施例中,激光光源100包括激光器组件和光学镜片组件,激光器组件发出的光束可透过光学镜片组件进而为光机提供照明。其中,光学镜片组件需要较高等级的环境洁净度、气密等级密封;而安装激光器组件的腔室可以采用密封等级较低的防尘等级密封,以降低密封成本。
在一些实施例中,投影仪的发光部件还可以通过LED光源实现。
在一些实施例中,光机200可实施为包括蓝色光机、绿色光机、红色光机,还可以包括散热系统、电路控制系统等。
图3为根据本申请实施例的投影设备的电路架构示意图。
在一些实施例中,参考图3,投影设备可以包括显示控制电路10、激光光源20、至少一个激光器驱动组件30以及至少一个亮度传感器40,该激光光源20可以包括与至少一个激光器驱动组件30一一对应的至少一个激光器。其中,该至少一个是指一个或多个,多个是指两个或两个以上。
基于该电路架构,投影设备可以实现自适应调整。例如,通过在激光光源20的出光路径中设置亮度传感器40,使亮度传感器40可以检测激光光源20的第一亮度值,并将第一亮度值发送至显示控制电路10。
该显示控制电路10可以获取每个激光器的驱动电流对应的第二亮度值,并在确定激光器的第二亮度值与激光器的第一亮度值的差值大于差值阈值时,确定该激光器发生灾变性光学镜面损伤(Catastrophic Optical Damage,简称COD)故障;则显示控制电路10可以调整激光器的对应的激光器驱动组件的电流控制信号,直至该差值小于等于该差值阈值,从而消除该激光器的COD故障。投影设备能够及时消除激光器的COD故障,降低激光器的损坏率,提高投影设备的图像显示效果。
图4为根据本申请实施例的投影设备的光路示意图。
在一些实施例中,参考图3-4,激光光源20可以包括独立设置的蓝色激光器201、红色激光器202和绿色激光器203,该投影设备也可以称为三色投影设备,蓝色激光器201、红色激光器202和绿色激光器 203均为小型激光器(Multi_chip LD,简称MCL),其体积小,利于光路的紧凑排布。
在一些实施例中,激光光源还包括光学组件210,光学组件210用于将蓝色激光器201、红色激光器202和绿色激光器203出射的激光合光,并进行整形和匀化,最终将符合入射需求的光束入射光机。
在一些实施例中,投影设备可以配置控制器,或者投影设备可以连接控制器。控制器包括中央处理器(Central Processing Unit,简称CPU),视频处理器,音频处理器,图形处理器(Graphics Processing Unit,简称GPU),随机存取存储器(Random Access Memory,简称RAM),只读存储器(Read-Only Memory,简称ROM),用于输入/输出的第一接口至第n接口,通信总线(Bus)等中的至少一种。
在一些实施例中,投影设备可以配置相机,或者投影设备可以连接相机,用于和投影设备协同运行,以实现对投影过程的调节控制。相机可具体实施为3D相机,或双目相机;在相机实施为双目相机时,具体包括左相机、以及右相机;双目相机可获取投影设备对应的投影屏幕,即投影面所呈现的图像及播放内容,该图像或播放内容由投影设备内置的光机进行投射。
相机可以用于拍摄投影面中显示的图像,可以是摄像头。摄像头可以包括镜头组件,镜头组件中设有感光元件和透镜。透镜通过多个镜片对光线的折射作用,使景物的图像的光能够照射在感光元件上。感光元件可以根据摄像头的规格选用基于电荷耦合器件或互补金属氧化物半导体的检测原理,通过光感材料将光信号转化为电信号,并将转化后的电信号输出成图像数据。
图5为根据本申请实施例的镜头的结构示意图。
为了支持投影设备2的自动调焦过程,如图5所示,镜头300还可以包括透镜组件310和驱动马达320。其中,透镜组件310是由一个或多个透镜组成的透镜组,可以对光机200发射的光线进行折射,使光机200发出的光线能够透射到投影面上,形成透射内容影像。
透镜组件310可以包括镜筒以及设置在镜筒内的多个透镜。根据透镜位置是否能够移动,透镜组件310中的透镜可以划分为移动镜片311和固定镜片312,通过改变移动镜片311的位置,调整移动镜片311和固定镜片312之间的距离,改变透镜组件310整体焦距。因此,驱动马达320可以通过连接透镜组件310中的移动镜片311,带动移动镜片311进行位置移动,实现自动调焦功能。
需要说明的是,调焦过程是指通过驱动马达320改变移动镜片311的位置,从而调整移动镜片311相对于固定镜片312之间的距离,即调整像面位置,因此透镜组件310中镜片组合的成像原理,调整焦距实则为调整像距,但就透镜组件310的整体结构而言,调整移动镜片311的位置等效于调节透镜组件310的整体焦距调整。
当投影设备2与投影面1之间相距不同距离时,需要投影设备2的镜头调整不同的焦距从而在投影面上透射清晰的图像。而在投影过程中,投影设备2与投影面1的间隔距离会受用户的摆放位置的不同而需要不同的焦距。因此,为适应不同的使用场景,投影设备2需要调节透镜组件310的焦距。
图6为根据本申请实施例的距离传感器和相机的结构示意图。
如图6所示,投影设备2还可以内置或外接相机22,相机22可以对投影设备2投射的画面进行图像拍摄,以获取投影内容图像。投影设备2再通过对投射内容图像进行清晰度检测,确定当前镜头焦距是否合适,并在不合适时进行焦距调整。基于相机22拍摄的投影内容图像进行自动调焦时,投影设备2可以通过不断调整镜头位置并拍照,并通过对比前后位置图片的清晰度找到调焦位置,从而将移动镜片311调整至合适的位置。例如,控制器23可以先控制驱动马达320将移动镜片311调焦起点位置逐渐移动至调焦终点位置,并在此期间不断通过相机22获取投影内容图像。再通过对多个投影内容图像进行清晰度检测,确定清晰度最高的位置,最后控制驱动马达320将移动镜片311从调焦终端调整到清晰度最高的位置,完成自动调焦。
当投影设备移动位置后,其投射角度、及至投影面距离发生变化,会导致投影图像发生形变,投影图像会显示为梯形图像、或其他畸形图像;控制器可基于相机拍摄的图像,通过耦合光机投影面之间夹角和投影图像的正确显示实现自动梯形校正。
图7为根据本申请实施例的投影设备的系统框架示意图。
在一些实施例中,参考图7,投影设备具备长焦微投的特点,其控制器通过预设算法可对投影图像进行显示控制,以实现显示画面自动梯形校正、自动入幕、自动避障、自动调焦、以及防射眼等功能。
在一些实施例中,投影设备配置有陀螺仪传感器;设备在移动过程中,陀螺仪传感器可感知位置移动、并主动采集移动数据;然后通过系统框架层将已采集数据发送至应用程序服务层,支撑用户界面交互、应用程序交互过程中所需应用数据,采集数据还可用于控制器在算法服务实现中的数据调用。
在一些实施例中,投影设备配置有飞行时间传感器,在飞行时间传感器采集到相应数据后,被发送至服务层对应的飞行时间服务;上述飞行时间服务获取数据后,将采集数据通过进程通信框架发送至应用程序服务层,数据将用于控制器的数据调用、用户界面、程序应用等交互使用。
在一些实施例中,投影设备配置有用于采集图像的相机,该相机可为双目相机、深度相机或3D相机等;相机采集数据将发送至摄像头服务,然后由摄像头服务将采集图像数据发送至进程通信框架、和/或投影设备校正服务;投影设备校正服务可接收摄像头服务发送的相机采集数据,控制器针对所需实现的不同功能可在算法库中调用对应的控制算法。
在一些实施例中,通过进程通信框架、与应用程序服务进行数据交互,然后经进程通信框架将计算结果反馈至校正服务;校正服务将获取的计算结果发送至投影设备操作系统,以生成控制信令,并将控制信令发送至光机控制驱动以控制光机工况、实现显示图像的自动校正。
在一些实施例中,当检测到图像校正指令时,投影设备2可以对投影图像进行校正。对于投影图像的校正,可预先创建距离、水平夹角及偏移角之间的关联关系。然后投影设备2中的控制器23通过获取出光组件至投影面的当前距离,结合所属关联关系确定该时刻出光组件21与投影面的夹角,实现投影图像校正。其中,上述夹角具体为出光组件21中轴线与投影面的夹角。
在一些实施例中,投影设备2自动完成校正后重新调焦,检测自动调焦功能是否开启;当自动调焦功能未开启时,结束自动调焦业务;当自动调焦功能开启时,投影设备2将通过中间件获取飞行时间传感器的检测距离进行计算。
在一些实施例中,投影设备通过自动调焦算法,利用其配置的激光测距可获得当前物距,以计算初始焦距、及搜索范围;然后投影设备驱动相机进行拍照,并利用对应算法进行清晰度评价。投影设备在上述搜索范围内,基于搜索算法查找可能的最佳焦距,然后重复上述拍照、清晰度评价步骤,最终通过清晰度对比找到最优焦距,完成自动调焦。
例如,在投影设备启动后,用户移动设备;投影设备自动完成校正后重新调焦,控制器将检测自动调焦功能是否开启;当自动调焦功能未开启时,控制器将结束自动调焦业务;当自动调焦功能开启时,投影设备将通过中间件获取飞行时间传感器的检测距离进行计算。
控制器根据获取的距离查询预设的映射表,以获取投影设备的焦距;然后中间件将获取焦距设置到投影设备的出光组件;出光组件以上述焦距进行发出激光后,摄像头将执行拍照指令;控制器根据获取的拍摄图像、评价函数,判断投影设备调焦是否完成。如果判定结果符合预设完成条件,则控制自动调焦流程结束;如果判定结果不符合预设完成条件,中间件将微调投影设备出光组件的焦距参数,例如可以预设步长逐渐微调焦距,并将调整的焦距参数再次设置到出光组件;从而实现反复拍照、清晰度评价步骤,最终通过清晰度对比找到最优焦距完成自动调焦。
在一些实施例中,如图8所示,投影设备2在向墙壁(投影面1)投射媒资数据时,识别到墙壁上有控制开关(障碍物3)。投影设备2在识别到控制开关后认定控制开关为障碍物3,会影响投影效果,因此启动自动避障功能。自动避障功能执行后,投影设备2避开控制开关(障碍物3)重新在墙壁(投影面1)上划分投影区域并向投影区域投射媒资数据。但可以看出,重新划分后的投影区域的面积小于重新划分前的投影区域。
投影设备2在投射媒资数据时,根据划分好的投影区域可以选择多种投射比例以达到最好的投影效果,并且多种投射比例也可以满足用户的多种需求。但因为自动避障导致投影区域面积变小,直接导致投射媒资数据时的比例选择丰富程度降低,达不到最优投影效果。
有鉴于此,如图9和图10所示,本申请部施例提供了一种投影设备,包括出光组件21、相机22和控制器23。出光组件21用于投射投影内容至投影面,相机22用于拍摄采样图像,在识别到投影区域存在障碍物时还获取用户输入的有关障碍物信息的指令,结合用户输入的指令判断是否需要避开投影区域的障碍物。控制器23被配置为:
S100:响应于避障指令,获取变换矩阵,以及获取用户输入的调整指令。
变换矩阵为相机22与出光组件21之间坐标的单应性矩阵,用于投影区域中同一个位置的目标在相机坐标系和出光组件的出光组件坐标系之间的坐标转换。在有障碍物3出现时,首先需要通过相机22拍摄投影区域的图像以确认障碍物2的位置。接着控制器23根据障碍物的位置,结合投射的媒资数据需要的投影区域控制出光组件21对障碍物进行自动避障。
在相机22拍摄的图像中,障碍物3处于相机的坐标系下,其坐标根据相机坐标系确认。而在出光组 件21向投影区域投射媒资数据时,媒资数据在投影区域上的分布的坐标根据出光组件坐标系确认。因此,投影区域的障碍物3的位置在相机坐标系下的坐标还需转换为出光组件坐标系下的坐标,控制器23通过识别出光组件坐标系下的投影区域内存在障碍物3,以实现自动避障。坐标转换的计算依据就是依靠相机与光机之间的单应性矩阵。
调整指令包括用户指定的投影区域内的第一障碍目标,因投影区域内可存在多个障碍目标,此时就需要对多个障碍目标进行避障确认并执行避障。第一障碍目标可以指的是多个同类目标障碍物,比如说墙壁上的多个控制开关。也可以指单个目标障碍物,控制器23根据投影区域的实际情况进行避障处理。第一障碍目标为用户想要投影设备在运行时避开的障碍物,即第一障碍目标会影响投影效果。
用户输入调整指令的时间与投影设备响应于避障指令的时间并不做先后顺序的限定,在投影设备检测投影区域的过程中,用户可以输入调整指令以确定想要避开的障碍物;也可以在投影设备开启后,用户立刻输入调整指令以确定想要避开的障碍物;还可以在开始投射媒资数据后,用户输入发现投射前未选择避开的障碍物实际上影响到了投影效果,从而输入调整指令,以避开障碍物。
在一些实施例中,投影设备开启后,根据预设程序开始对投影区域进行障碍物检测。同时,用户向投影设备输出语音指令“避开墙上的挂钩”,控制器23通过音频接收装置收到用户的指令。与此同时,获取用于相机坐标系和出光组件坐标系转换的变换矩阵为后续障碍物的坐标转换提供计算依据。
输出调整指令可以通过输入语音的方式、同时也可以通过例如遥控器的遥控装置对投影设备输出调整指令、还可以通过投影设备上的控制界面输入调整指令。但输入语音的方式对于用户来说实现十分容易,并且提升用户与投影设备的交互感,进而提升了用户体验。因此用户输出调整指令的主要方式可以为通过语音输出,其余方式可以在特定场景下辅助用户输出调整指令。
S200:在采样图像中识别第一障碍目标,根据第一障碍目标在采样图像中划定可投影区域。
如图12所示,采样图像为相机22在出光组件21投射矫正图像时拍摄获得的图像。出光组件21投射的矫正图像是为了获取整个投影区域以及用户指出的需要避开的障碍物的图像信息,并根据出光组件坐标系为采样图像建立坐标系。通过矫正图像的辅助可以更为准确的识别出障碍物的存在,再通过坐标系的配合就能得到障碍物在采样图像中的位置。控制器23可以根据障碍物在采样图像中的位置,对投影区域重新划分,并将划分后的投影区域按照单应性矩阵进行转换,进而控制出光组件21向划分后的投影区域投射媒资数据。
沿用上述实施例,投影设备在获取用户输出的调整指令后,确认挂钩为障碍物,即判定投影区域内存在障碍物。判定投影区域内存在障碍物后,控制器23控制出光组件21向投影区域的墙壁投射矫正图像,在矫正图像投射至墙壁上时,还控制相机22拍摄采样图像(即拍摄投影区域)。控制器23对采样图像上的挂钩的位置进行确认,并且根据去除挂钩后的采样图像,重新划分新的可投影区域。
在可投影区域的划分时,可投影区域为采样图像中除去第一障碍目标以外区域所容纳的最大预设宽高比的矩形区域。因使用投影设备的主要原因就是想要获得更大宽高比的媒资数据播放效果,因此要尽可能保证可投影区域拥有更大的宽高比。此外,当前只是对第一障碍目标进行检测,投影区域内可能还存在其他障碍目标,只是当前用户没有输出相关指令。为了保证能够顺利投射媒资数据,也需要预留出更大的投影区域以应对投影区域的继续划分。
通过矫正图像识别障碍物的过程中,只投射一次或投射完全相同的矫正图像不能准确的识别障碍物,因此矫正图像包括两种不同的图像,通过对两次图像上的信息比对,以确认障碍物及障碍物的位置,如图11所示,本申请实施例中的矫正图像包括纯色图卡和特征图卡。控制器23在响应于避障指令后还执行以下操作:
响应于避障指令,控制出光组件投射矫正图像;
控制相机分别在出光组件投射纯色图卡和特征图卡时,对投影面上显示的矫正图像执行拍照,以获得第一采样图像和第二采样图像。
如图11所示,矫正图像包括的纯色图卡一般采用白色图卡,而特征图卡的画面上包含特征图像,特征图像的具体形式不做限定。通过特征图像与白色图卡图像的对比,有利于发现投影区域中的障碍物。
通过出光组件投射白色图卡和特征图卡的同时,控制器不能直接通过投射效果对障碍物进行判断。因此还需要相机将投射白色图卡和特征图卡时的投影区域的状态拍摄记录,分别生成第一采样图像和第二采样图像。控制器通过对第一采样图像和第二采样图像进行对比以获得障碍物的相关信息。
因出光组件投射白色图卡而产生的第一采样图像,在相机拍摄时,墙壁上在无障碍物的情况下,在 第一采样图像上不会呈现出内容,可能会呈现出与墙壁本色不太相同的白色图像。但在有障碍物的情况下,则相机还会拍摄到墙壁上的障碍物。但是只依靠第一采样图像上的内容不足以判断墙壁上的障碍物的位置信息,在重新划定可投影范围时,因为需要去掉障碍物及障碍物所处的部分区域,因此不易于坐标转换。
此外,在识别精度足够高的情况下墙壁上的一些昆虫可能会被识别为障碍物,但昆虫在投射白色图卡后很可能因为投影光线的影响离开。控制器此时再判断存在障碍物,会无故执行避障功能,导致投影效果变差。因此,还需要再投射一次特征图卡以进一步确认障碍物。并且特征图卡上的特征图像也有利于坐标系的建立。通过坐标系的建立便于描述障碍物的位置,也便于去除障碍物后坐标系的重建以及向出光组件坐标系的转换。
对于投影设备已经开始投射媒资数据,并接收到避障指令的情况,出光组件在切换投射媒资数据和投射纯色图卡、特征图卡时需要一定的转换时间,此时会增加用户的等待时间,影响用户的使用体验。为了解决这一问题,控制器23执行以下步骤:
检测输入避障指令时刻,以及出光组件的投射状态。
投影设备需要在接收到避障指令后,立刻执行避障功能。避障功能可以在已经开始投射媒资数据的情况下开启,也可以在未开始投射媒资数据的情况下开启,具体的可以通过检测出光组件的投射状态判断当前投射情况属于哪一种。对于未开始投射妹子数据的情况,可以按顺序的投射纯色图卡和特征图卡,不存在从媒资数据切换到纯色图卡和特征图卡耗费太长时间的情况。
如图13所示,如果出光组件正在投射媒资画面,在待投射的媒资数据流中插入纯色图卡和特征图卡。
对于已经开始投射媒资数据的情况,可以通过在媒资数据流中插入纯色图卡的特征图卡的方式,使得投射纯色图卡和特征图卡的流程顺应媒资数据的播放时间,自然投射,不需要转换投射源,因此无需用户等待太长的时间。
根据纯色图卡和特征图卡在媒资数据流中的插入位置,计算拍摄时间,并控制相机根据拍摄时间拍摄得到第一采样图像和第二采样图像。
为了控制相机能准确拍摄到纯色图卡和特征图卡对应的第一采样图像和第二采样图像,需要对收到避障指令的时间进行记录,以得到初始时间,结合后续图卡插入在媒资数据中的时间,可以计算相机拍摄的时间。在时间到达两次拍摄时间时,相机执行拍摄功能,获得第一采样图像和第二采样图像。
如图14所示,在一些实施例中,投影设备正在播放一段时长为十分钟的视频。在播放到第五分钟时,控制器收到用户输入的避障指令,并记录避障指令时刻为五分钟,以及向视频的数据流的第五分钟第十秒添加纯色图卡、第五分钟第十五秒添加特征图卡。控制器还根据数据流中插入图卡的时间,计算得到相机在十秒钟之后开始第一次拍摄、在十五秒钟之后开始第二次拍摄。在视频播放至五分钟十秒时,出光组件投射纯色图卡、相机拍照得到第一采样图像。在视频播放至五分钟十五秒时,出光组件投射特征图卡、相机拍照得到第二采样图像。
可见,通过在媒资数据流中添加图卡的方式避免了投射源的切换,使得获得采样图像的过程更自然,减少了用户的等待时间,避障功能的执行效率也更高。在获取第一采样图像和第二采样图像之后可以对第一障碍目标进行识别、并进行避障处理。
在第一采样图像中识别第一障碍目标,根据第二采样图像中特征图卡的形状建立变换矩阵。
通过比对第一采样图像和第二采样图像,可以识别出第一障碍目标的位置及其坐标。在第一采样图像中没有特征图像上特征点的干扰可以完整的识别出第一障碍目标的轮廓,并且得到其坐标。第二采样图像上的特征点具有对称、分布均匀等特性,因此有利于建立坐标系得到坐标,得到坐标之后可以进一步的建立相机坐标系与出光组件坐标系的变换矩阵。在利用第二图像上的特征点建立变换矩阵的过程中,控制器23执行以下步骤:
遍历第二采样图像中像素点的颜色值,以及根据颜色值在第二采样图像中识别特征点。
图像由若干个像素点组成,其具体颜色也受多个像素点的颜色值影响,特征点的颜色区别于第一采样图像中的纯色图卡,纯色图卡大多采用白色,则特征图卡中的特征点采用与白色对比明显的颜色即可。通过识别第二采样图像中的像素点的颜色值,即通过识别与白色不同的颜色值,可以识别出特征点。
提取第二采样图像对应特征图卡中的特征分布信息,根据特征点和特征分布信息建立单应性矩阵。
多个特征点根据不同的分布情况可以得到特征图形,且多个特征点有利于建立适用于当前特征图形的坐标系,进一步有利于建立用于相机坐标系和出光组件坐标系转换的单应性矩阵。提取特征点和特征 分布信息后,在建立单应性矩阵的过程中可参考以下公式求出单应性矩阵。其中,由相机拍摄的第二采样图像中的特征点坐标可用矩阵表示为:
(x,y,1)为相机坐标系下的特征点坐标。
(x1,y1,1)为出光组件坐标系下的特征点坐标。
相机坐标系下的坐标转换为出光组件坐标系下的坐标,即可视作为:
H即为单应性矩阵,通过已知的相机坐标和出光组件坐标即可通过计算得到H,计算过程为通用方法,因此不进行进一步说明。
存储单应性矩阵,以获取变换矩阵。
计算得到单应性矩阵H后,将H存储至存储空间,形成用于相机—出光组件坐标转换的变换矩阵,以待调用。单应性矩阵包含的具体数值可能会因特征图卡坐标系的建立方式不同而产生一些变化,但对于相机—出光组件的坐标变换来说并无具体影响。单应性矩阵的实际内容为相机坐标转换到出光组件坐标的转换规则,可以实现对应的转换即可。
S300:按照可投影区域和变换矩阵确定投影区域,以及可控制出光组件将投影内容投射至目标投影区域。
投影内容即媒资数据,投影设备在划分可投影区域的过程中先是在相机拍摄的采样图像上进行划分。投射媒资数据还需要出光组件执行相关动作,因此需要将采样图像上已经重新划分好的可投影区域坐标转换为适用于出光组件投射的可投影区域坐标。在转换的过程中,通过调用变换矩阵将采样图像上的坐标逐个转换为出光组件坐标系下的坐标,以形成投影区域。
继续沿用上述实施例,投影设备在重新划分可投影区域之后,立即调用已经获取的变换矩阵,即用于坐标转换的单应性矩阵。控制器将在采样图像上重新划分的可投影区域的坐标由相机坐标系转换为出光组件坐标系。控制器在获得在出光组件坐标系下的可投影区域坐标后,通过出光组件将媒资数据投射至可投影区域(例如,墙壁)。
在根据第一障碍目标划分可投影区域时,由于会接收到用户指定第一障碍目标的命令,因此需要对用户指定的第一障碍目标进行识别。如图15所示,控制器根据第一障碍目标的识别结果执行相应的动作:
根据用户指定的第一障碍目标的类型调用识别模型,将采样图像输入识别模型。
识别模型为根据样本图像数据训练得到的神经网络模型,控制器解析用户输入的第一障碍目标,通过关键词、关键字等信息判断第一障碍目标所属类型。再根据所属类型调用识别模型判断采样图像中是否存在第一障碍目标。对于第一障碍目标的判定可以不为一个准确值,可以是通过相似度等概率的形式体现,例如识别模型接收采样图像后,判断采样图像中包含第一障碍目标的分类概率。
如果识别结果中包含第一障碍目标,执行根据第一障碍目标在采样图像中的位置划分可投影区域的步骤。
举例来说,用户输入的第一障碍目标为“墙壁上的挂钩”,控制器按照解析关键字“墙壁上的”、“挂钩”——调用“家用挂件”识别模型的顺序对第一障碍目标进行识别,得到采样图像中包含挂钩的几率为90%,即可判断采样图像中存在第一障碍目标——挂钩。并根据挂钩的坐标重新划分可投影区域。
如果识别结果中不包含第一障碍目标,生成第一提示信息,以及控制出光组件投射第一提示信息。
在一些实施例中,识别模型得到采样图像中包含挂钩的几率为10%,判断采样图像中不存在第一障碍目标,并生成用于提醒用户当前投影区域中未识别出第一障碍目标的第一提示信息,第一提示信息可以通过投影的方式呈现也可以通过音频的形式播放。用户接收到提示信息后,可以重新输入指定第一障碍目标的指令,控制器则重新进行避障处理。
用户输入的调整指令除了包括第一障碍目标以外,还可以包括投影参数。投影参数用于限定目标投影区域。如图16所示,控制器在接收到包括投影参数的调整指令时,根据投影参数执行以下步骤:
根据变换矩阵,将可投影区域拟合为有效投影范围。
可投影区域包含投射媒资数据所需的目标投影区域也包括预留出的一些空白区域,所述区域均为控制器通过计算得到的当前情况最优区域。但由于使用的场景及用户需要的情况不同,用户会根据实际需求给出投影参数以调整实际的投影区域。
从调整指令中提取投影参数,根据投影参数,在有效投影范围内划定目标投影区域。
投影参数可以是直接控制画面比例的参数,例如“以长度2米,宽度1米的形式播放媒资数据”。控制器根据投影参数,在有效投影范围内,对实际投影范围进行进一步的调整以符合用户的需求,得到目标投影区域。在一些情况下,用户输入的投影参数可能存在不清楚或超出有效投影范围,控制器需要对投影参数是否符合有效投影范围进行判断并执行以下步骤:
获取有效投影范围的边界尺寸。
如果边界尺寸大于或等于指定画面尺寸,根据指定画面尺寸在有效投影范围中划定目标投影区域。
如果边界尺寸小于指定画面尺寸,生成第二提示信息,以及控制出光组件投射第二提示信息。
有效投影范围的边界尺寸即为投影区域尺寸的最大尺寸,超过边界尺寸则会影响投影的分辨率、比例调整从而导致投射的媒资数据画面失真。而用户在输入包含投影参数的调整指令时,仅通过肉眼观察与自己的需求输入的投影参数会存在超出边界尺寸的可能。因此,在根据用户输入的投影参数生成目标投影区域时,通过对投影参数和边界尺寸对比,判断是否能根据用户需求划分出合适的目标投影区域。
在一些实施例中,用户输入的调整指令包括“投射媒资数据的宽度为2.56米,高度为1.44米”,控制器接收到调整指令后,根据有效投影范围的边界尺寸对投影参数进行判断,发现距离投影参数符合有效投影范围,则根据投影参数确定目标投影区域。
在一些实施例中,用户输入的调整指令中包括“投射媒资数据的宽度为3米,高度为2米”,控制器收到调整指令后并根据有效投影范围对边界尺寸进行判断,发现投影参数已经超过有效投影范围,通过出光组件向墙壁投射“目标投影区域划分失败,投影参数超范围”的第二提示信息。用户在看到超范围的提示后,可以重新输入投影参数以进行再次调整。
用户输入的调整指令中还可以包括指定间隔距离,例如,“在距离衣柜右侧五寸的位置投射媒资数据”、“在距离地面三寸的位置投射媒资数据”。指定间隔距离即在有效投影范围内,预留空白的区域,因此会进一步缩小目标投影区域。此时控制器对是否超出边界尺寸的判定需要将投射的媒资数据的宽度、高度分别加上指定间隔距离,再与边界尺寸进行比较。
根据指定画面尺寸和指定间隔距离计算极值尺寸。
极值尺寸包括最小宽度和最小高度,最小宽度为指定画面尺寸中指定画面宽度与间隔距离中指定横向距离的和;最小高度为指定画面尺寸中指定画面高度与指定间隔距离中指定纵向距离的和。极值尺寸相当于有效投影区域应该具有的最小宽度和最小高度。
获取有效投影范围的有效宽度和有效高度,并进行判断。
如果有效宽度小于最小宽度,和/或,有效高度小于最小高度,生成第三提示信息。
如果有效宽度大于或等于最小宽度,并且,有效高度大于或等于最小高度,按照指定间隔距离和指定画面尺寸在有效投影范围内划定目标投影区域。
在一些实施例中,控制器根据“投射媒资数据宽度为2.56米”,以及“水平方向间隔衣柜2米”的投影参数和指定间隔距离,计算最小宽度为4.56米;根据“投射媒资数据高度为1.44米”,以及“距离地面0.5米”的投影参数和指定间隔距离,计算最小高度为1.94米。当前有效投影范围宽度为4米,高度为2米,控制器调用有效投影范围的参数后,判定宽度超范围、高度符合范围,有效投影范围的边界尺寸不符合用户输入的调整指令,通过出光组件投射“根据调整指令调整投影范围失败”以提示用户重新输入调整指令。同时,用户还会播放“当前最大尺寸为宽度4米、高度2米”的信息以提示用户。
在一些实施例中,有效投影范围宽度为5米,高度为2米,控制器判定宽度、高度均符合范围,有效投影范围的边界尺寸符合用户输入的调整指令,根据调整指令划分出目标投影区域。
通过对调整指令内容的丰富使得投影媒资数据时更符合用户的需求,且增加了用户与投影设备的人机交互过程,有利于提高用户的满意度。在根据调整指令对目标投影区域进行调整时,通过宽度、高度的判断保证投影设备投射媒资数据后的画面质量,在不符合标准范围的基础上不进行投射媒资数据,在符合标准范围的基础上才投射媒资数据。
通过对第一障碍目标的识别,在没有其他障碍目标的情况下,投影设备可以向目标投射区域投射媒 资数据。而在还有其他障碍目标的情况下,如图17所示,控制器还会与用户进行交互以确认是否需要避开其他障碍目标,根据用户输入的指令执行后续动作:
在可投影区域中识别第二障碍目标。
此时的可投影区域为控制器根据第一障碍目标重新划分的投影区域。第二障碍目标为采样图像中除第一障碍目标以外的目标,第二障碍目标的类别可以不同于第一障碍目标,因此控制器需要继续跟用户进行确认是否要执行避障功能。第二障碍目标可以是控制器自主识别的,也可以是用户输入的。识别第二障碍目标的过程与上述实施例中的过程相同,在此不再赘述。
如果可投影区域中存在第二障碍目标,生成第四提示信息,和/或,生成问询指令。
控制器在确认第二障碍目标的存在之后,生成用于提示用户第二障碍目标存在的音频/投影信息。同时,还会向用户发出问询指令以确认是否需要进行避障处理。
获取用户基于第四提示信息和/或问询指令输入的确认指令,根据确认指令重新划定可投影区域。
确认指令为用户向控制器输入的是否进行避障处理的指令。控制器接收到用户的确认避障的指令后,按照上述实施例的流程根据第二障碍目标重新将当前的可投影区域划分为新的可投影区域。
在一些实施例中,控制器结合用户输入的调整指令,避开墙上的挂钩后,生成了可投影区域。同时,识别到墙上的控制开关,立即向用户发出“是否需要避开开关”的音频信息。用户通过语音输入的形式向控制器反馈“需要避开开关”。控制器收到“需要避开开关”的指令后,按照可投影区域的划分方式,划分出新的可投影区域,进一步的结合用户需求与投影标准形成目标投影区域,并将媒资数据投射至目标投影区域。
对于第二障碍目标进行识别,可以保证目标投影区域内的投影质量。并且通过与用户的交互,使得用户的体验感提升。可以理解的是,用户也可以通过语音输入“无需避开开关”的指令,控制器此时可直接向目标投影区域投射媒资数据。
基于同一发明构思,本申请实施例还提供了一种投影画面处理方法,应用于上述投影设备,投影设备包括出光组件、相机以及控制器,如图18所示,投影画面处理方法,包括:
S100:响应于避障指令,获取变换矩阵,以及获取用户输入的调整指令;
其中,变换矩阵为相机与出光组件之间坐标的单应性矩阵;调整指令中包括用户指定的第一障碍目标。
S200:在采样图像中识别第一障碍目标;
其中,采样图像为相机在出光组件投射矫正图像时拍摄获得的图像。
S300:根据第一障碍目标在采样图像中划定可投影区域;
其中,可投影区域为采样图像中除第一障碍目标以外区域所容纳的最大预设宽高比的矩形区域。
S400:按照可投影区域和变换矩阵确定目标投影区域,以及控制出光组件将投影内容投射至目标投影区域。
投影设备在识别到投影区域中有障碍物的情况下,还获取用户输入的用于指定第一障碍目标的指令,以确认在投影过程中是否需要进行避障处理。通过拍摄采样图像,确认第一障碍目标是否存在以及存在时的位置。其中,通过相机与出光组件之间坐标的单应性矩阵将采样图像的坐标转换至出光组件坐标系下,投影设备在投射媒资数据时根据第一障碍目标的位置重新划分投影区域,并将媒资数据投射至重新划分的投影区域。投影设备在识别到障碍物时,结合用户的避障指令,在障碍物不影响投影效果时不执行自动避障功能,保证了投影效果提高了用户体验。
在一些实施例中,如图19和图20所示,在用户使用投射设备2投射投影内容的过程中,由于环境的复杂性,不可避免的会存在投影设备的出光组件与投影墙面或幕布(投影面1)不垂直使投影画面变形,以及投影区域有障碍物造成投影画面被障碍物遮挡,十分影响用户体验。
对此,投影设备通常会设计有自动避障和自动矫正功能,投影设备需要在投影时可以自动检测障碍物来避开障碍物投影,并使用自动矫正功能重新对投影画面进行矫正。自动矫正的功能是利用投影投射特征图卡时,不判断特征点的空间位置,直接通过相机22拍照识别特征点然后三维重建拟合投影面实现自动矫正。
但若投影墙面不止一块时,或是一部分特征点投射到别的物体而非平面时,此时投影投射在多个区域,如果在多个区域都识别到特征点时会产生拟合平面误差,导致自动矫正后的形状不再是矩形,矫正成错误的结果。
为了能够在投影的过程中,投影面1被障碍物遮挡时,依然能投射出矫正好的投影图像,本申请实施例还提供了一种投影设备,如图21所示,投影设备可以包括出光组件21、相机22和控制器23。其中,出光组件21用于投射投影内容至投影面。相机22用于拍摄采样图像。控制器23被配置为:
S100:响应于投影画面矫正指令,控制出光组件投射矫正图像。
投影设备可以获取用户输入的投影画面矫正指令,并根据投影画面矫正指令,控制出光组件投射矫正图像。投影画面矫正指令可以是用户通过投射设备的控制装置(遥控器等)上的按键发出,也可以通过与投影设备建立通信连接的移动终端(智能手机、便携式电脑等)发出。
矫正图像包括纯色图卡和特征图卡。其中,矫正图像为纯色图卡时,投影设备会通过出光组件21向投影面投射纯色投影画面。在出光组件21向投影面投射纯色图卡时,纯色图卡中的部分光线会照射在位于出光组件21和投影面之间的障碍物上,在投影面的对应位置,会出现因为障碍物的遮挡所形成的阴影区域。控制器23可以识别纯色图卡中的阴影区域即可识别障碍物的位置,并根据障碍物的位置执行后续避障功能。
需要说明的是,在一些实施例中,为了能够更明显的识别出阴影区域,纯色图卡应为浅色,例如浅黄色、浅蓝色、白色、灰色等,浅色的定义为颜色小于或者等于1/12染料颜色标准深度的颜色。本申请实施例对纯色图卡的颜色不做其他限制。
矫正图像为特征图卡时,出光组件21向投影面投射带有若干特征点的投影画面。特征点代表投影画面的预设区域范围的特征,用于后续对投影画面执行自动矫正功能。
S200:获取相机对纯色图卡拍摄获得的第一采样图像,以及对特征图卡拍摄获得的第二采样图像。
控制器23还可以控制相机22对纯色图卡以及特征图卡进行拍摄,以获取由纯色图卡拍摄获得的第一采样图像和由特征图卡拍摄获得的第二采样图像。在一些实施例中,投影设备的相机22可以为内置或者外接安装,相机22能够对投影设备通过出光组件21投射的投影画面进行拍摄,以得到采样图像。在通过相机22进行拍摄后,还可以对相机22所拍摄的采样图像进行清晰度检测,确定投影设备的焦距是否合适。如果检测采样图像的清晰度较低时,调整投影设备的焦距,并通过相机22重新拍摄采样图像,按照拍摄的时间顺序对比采样图像的清晰度,确定采样图像的清晰度最高时,投影设备的焦距参数。
在一些实施例中,确定投影设备的焦距参数的过程时,还可以在调整焦距参数的过程中,切换相机22至连拍模式,并在拍摄的采样图像中,添加可去除的水印,水印内容即为焦距参数。通过所有的采样图像的清晰度比对,确定采样图像的清晰度最高时,投影设备的焦距参数。
例如,控制器23获取到的第一采样图像,如图22所示,第一采样图像中存在矩形形状的障碍物31和圆形形状的障碍物32。
S300:根据第一采样图像确定特征轮廓区域。
其中,特征轮廓区域为第一采样图像中面积最大的轮廓区域或用户指定的轮廓区域。控制器23可以根据障碍物31和障碍物32在第一采样图像中的位置,进而确定特征轮廓区域,以保证投影设备投射出的投射画面避开上述障碍物31和障碍物32,使用户看到不被障碍物遮挡的投影画面。
在一些实施例中,特征轮廓区域可以是第一采样画面中面积最大的轮廓区域或者用户指定的轮廓区域。控制器23可以根据投影设备的避障开关的设置状态来确定选择上述之一的轮廓区域作为特征轮廓区域。因为出光组件21和投影面1之间存在的障碍物会在投影图像上形成阴影区域,而阴影区域会将原来的投影画面分裂成多个没有阴影区域的轮廓区域。此时,控制器23可以在第一采样区域中提取所有的轮廓区域,并检测投影设备的避障开关的设置状态。避障开关的设置状态包括开启和关闭,设置状态可以是使用投影设备的用户人为设定开启和关闭,也可以是投影设备在识别投影画面中存在障碍物时,自动切换为开启状态。
投影设备处于避障状态时,需要将投影画面避开障碍物进行投射,如果设置状态为开启时,控制器23会遍历第一采样图像中各轮廓区域的面积,筛选出备选轮廓区域,备选轮廓区域用于用户指定作为特征轮廓区域。在筛选的过程中,除了轮廓区域以外,还可以设定用于筛选的其他参数。例如,为了使用户能够更清晰的观看投影画面,控制器23设置轮廓区域与用户之间的距离作为筛选参数。又例如,轮廓区域的形状为不规则图形,而投影需要投射至矩形区域内,此时控制器23还需要在每个轮廓区域内设定最大矩形面积,并以最大矩形面积作为筛选参数。
图像都是由像素点组成,在一些实施例中,控制器23还可以根据第一采样图像中的像素点颜色值来划定轮廓区域。由于纯色图卡的颜色为浅色,而经过障碍物遮挡后的阴影区域通常为黑色,二者在色彩 上存在很大差别。因此,控制器23还可以设置色差阈值,并遍历第一采样图像中的像素点颜色值,获取相邻像素点色差值大于或等于色差阈值的像素点,并根据相邻像素点色差值大于或等于色差阈值的像素点识别边界图形。边界图形即为出光组件21与投影面1之间的障碍物的形状图形。
控制器23在识别到边界图形后,会根据第一采样图像的边缘,划定轮廓区域。这样,对于障碍物遮挡所形成的阴影区域内部,像素点都为黑色,相邻像素点色差值较小,不会达到色差阈值,而对于阴影区域轮廓部分的像素点为黑色,轮廓外的像素点为纯色图卡的颜色,与黑色存在较大色差,所以可以准确识别出障碍物遮挡所形成的阴影区域,精确划定轮廓区域。
在一些实施例中,控制器23还可以将相机22拍摄得到的第一采样图像输入至识别模型中。识别模型是根据样本图像训练获得的神经网络模型,识别模型可以通过大量带有障碍物的采样图像以及没有障碍物的采样图像训练至收敛得到。控制器23将第一采样图像输入至识别模型后,会得到识别模型输出的识别结果。识别结果为第一采样图像包含障碍目标的分类概率,可以根据分类概率判断第一采样图像中是否包括障碍目标。如果识别结果为包含障碍目标,说明出光组件21和投影面1之间存在障碍物,控制器23会从第一采样图像中剔除障碍目标对应的轮廓区域,以及在剔除障碍目标后的第一采样图像中确定特征轮廓区域。如果识别结果为不包含障碍目标,说明出光组件21和投影面1之间不存在障碍物,控制器23会根据第一采样图像确定特征轮廓区域。
在一些实施例中,如果避障开关的设置状态为关闭,为了在避开障碍物的同时使用户能够看清投影画面中的投影内容,控制器23会在遍历第一采样图像中各轮廓区域的面积之后,筛选出轮廓区域的面积最大的作为特征轮廓区域。
在一些实施例中,上述筛选轮廓区域的参数还可以设置优先级。例如,优先选择各个轮廓区域的面积作为第一筛选参数,如果轮廓区域相同,再选择轮廓区域与用户之间的距离作为第二筛选参数。以上仅为本实施例的示例性说明,在设置优先级时,筛选参数可以相互调换,本申请部分实施例对此不做具体限制。
在一些实施例中,控制器23在筛选轮廓区域之前,还可以根据筛选参数设置参数阈值。例如,当以轮廓区域的面积作为筛选参数时,可以设置面积参数阈值,如果轮廓区域的面积大于参数阈值,那么,说明该轮廓区域符合筛选条件,可以作为备选轮廓区域;如果轮廓区域的面积小于参数阈值,说明该轮廓区域的面积过小,用户无法在指定距离看清投影画面中的投影内容,不符合筛选条件,不可以作为备选轮廓区域。
在控制器23在筛选备选轮廓区域之前,还可以根据遍历第一采样图像中各轮廓区域的面积生成轮廓区域列表。如果轮廓区域列表中的某一轮廓区域没有符合筛选条件时,控制器23会在轮廓区域列表中剔除对应的轮廓区域。在对全部的轮廓区域进行筛选后,轮廓区域列表中均为符合筛选条件的备选轮廓区域,此时得到备选轮廓区域列表,以供用户指定作为特征轮廓区域。
在对轮廓区域进行筛选的过程前,控制器23还可以根据备选轮廓区域列表的数量,例如,设置备选轮廓区域列表的数量为3个。此时,控制器23则会在遍历第一采样图像中各轮廓区域的面积后,按轮廓区域的面积大小筛选出三个备选轮廓区域,并按照默认或用户指定的顺序排列在备选轮廓区域列表中。
S400:按照特征轮廓区域在第二采样图像中提取特征点。
在一些实施例中,在遍历第一采样图像中的各轮廓的面积,筛选出备选轮库区域后,控制器23还会控制出光组件21向投影面1投射特征图卡,并在出光组件投射特征图卡后,控制相机22拍摄特征图卡,以得到第二采样图像,图21示出了本申请实施例的一种第二采样图像。然后,控制器23还可以根据对第一采样图像筛选得到的备选轮廓区域,识别第二采样图像中同一个轮廓区域内的特征点,并计算第二采样图像中位于同一个备选轮廓区域内特征点的平均深度。特征点指的是图像灰度值发生剧烈变化的点或者在图像边缘上曲率较大的点,即两个边缘的交点。图23中的环状矩形即为特征图卡上的特征点。
计算特征点的方法可以为几何三角化、反深度(inverse depth)、粒子滤波法等。在计算第二采样图像中位于同一个备选轮廓区域内特征点的平均深度的过程中,控制器23还可以遍历备选轮廓区域中特征点的颜色值。特征点轮廓的颜色应于纯色图卡的色差较大,以便能够很清晰的体现出特征点轮廓以及特征点位置。在一些实施例中,为了区别于障碍物在投影面上形成的阴影,特征点的颜色可以为除了黑色以外的任何深色,特征点的颜色可以不统一。深色的定义为颜色大于1/12染料颜色标准深度的颜色。本申请实施例对特征点的颜色不做其他限制。
图24示出了第二采样图像中的两个备选区域轮廓。在图24中,第二采样图像内两个虚线所示区域即 为备选区域轮廓,其中,障碍物32左侧的备选轮廓区域定义为第一备选轮廓区域,障碍物32右侧的备选轮廓区域定义为第二备选轮廓区域。控制器23在计算特征点的平均深度时,会基于同一备选轮廓区域内特征点进行计算。对于第一备选轮廓区域,控制器23会根据第一备选轮廓区域内的9个特征点计算平均深度,对于第二备选轮廓区域,控制器23会根据第二备选轮廓区域内的3个特征点计算平均深度。
在一些实施例中,控制器23计算第二采样图像中位于同一个备选轮廓区域内特征点的平均深度时,还可以遍历备选轮廓区域中特征点的颜色值。在控制器23控制出光组件21投射特征图卡时,因为特征点的颜色值与纯色图卡中的颜色值存在明显差别,所以能够显现出特征点的位置。
由于环境复杂性,出光组件21将投影画面投射到投影面1时可能存在多种情况。如果投影面1为墙体时,同一投影画面可能会投射到远近不同的两面墙体上,此时,投射到距离投影设备较近的墙体所显示的画面比投射到距离投影设备较远的墙体所显示的画面大,就会造成画面大小不一,投影形状变形。如果投影面1为投影的幕布时,由于投影设备的摆放问题,可能会出现呈现投影画面变形等问题。例如,投影设备摆放时上下方向高低不一致或者投影机偏左或偏右放置导致投影画面呈梯形。
对此,控制器23还可以根据多个特征点的颜色值,提取投影形状。在此过程中,为了更快提取投影形状,控制器23可以通过获取备选区域轮廓中的边界特征点,边界特征点是能够体现备选轮廓区域形状的特征点。例如,当备选轮廓区域为矩形时,可以提取矩形的四个边角对应的四个特征点,并根据四个特征点提取投影形状。也可以提取矩形的两个对角对应的两个特征点,根据两个特征点提取投影形状。
因为投影设备的投影区域大多为矩形,但是在出光组件21与投影面1之间存在障碍物时,障碍物在投影画面中所形成的阴影区域会将投影画面分割为不规则形状。对此,控制器23还可以在筛选出备选轮廓区域后,在备选轮廓区域划定最大矩形区域,并根据备选轮廓区域的最大矩形区域作为备选轮廓区域中的有效投影区域。
在控制器23提取到投影形状之后,控制器23还可以获取相机22和出光组件21的硬件参数。其中,相机22的硬件参数包括感光度、白平衡、测光、对焦、曝光补偿和焦距等。在相机22设置感光度之前,控制器23还可以通过相机获取当前环境的光线,并根据光线的强弱进行感光度调整。为了更好的观看效果,通常在投影时,投影设备所处的环境的光线较弱。因此,相机22的感光度需要调整到更高的数值。
对焦模式可以为单点自动对焦AF-S、伺服自动对焦AF-C、智能自动对焦AF-A和手动对焦。
相机22在设置曝光补偿时,还可以检测照片的亮度,当照片亮度适中的时候,保持对着“0”处即可;当照片偏暗时,增加曝光补偿;当照片偏亮时,降低曝光补偿。
在相机22设置焦距时,想要拍摄到更宽广的视野画面时,将镜头变焦环调到最小即可;想要拍摄到远处的画面时,只需要拉长镜头即可。同时较广的焦距拍摄出来的照片景深更大,较长的焦距拍摄出来的照片景深较浅。
出光组件21的硬件参数包括分辨率、投影亮度、对比度、对焦方式和显示比例等。
在一些实施例中,控制器23在获取相机22和出光组件21的硬件参数的同时,还可以获取第二采样图像中特征图卡上的标准形状。出光组件21在垂直将特征图卡投射到整齐的投影面1时,投影形状与标准形状相同。此时,控制器23可以根据备选轮廓区域中特征点计算出光组件到各特征点的距离,即为特征点的深度。
但是,对于凹凸不平的投影面1来说,特征图卡会投射在距离不同的平面上,对应的,特征图卡上的特征点所位于的平面也不同。对于同一个备选轮廓区域内特征点,由于投影面1中,各平面与光机200的距离不同,所以特征点的深度也不同。控制器23会根据由多个特征点的颜色值所提取的投影形状,第二采样图像中特征图卡上的标准形状,以及相机22和出光组件21的硬件参数来分别计算多个特征点与出光组件21之间的距离。并计算这些距离的平均值,以获得平均深度。
在计算的过程中,控制器23可以分别计算投影面1中同一平面的特征点的平均深度。在计算之前,控制器23还可以通过识别投影面1的特征轮廓来获取投影面1中所包含的平面数量。如图25所示,出光组件21将特征图卡投射在投影面1上,在投影面1中存在两个凸起的墙面,将投影面1分为了5个平面,其中3个平面与出光组件21位于同一距离,另两个平面距离出光组件21较近。
对于上述5个平面,控制器23可以判断各备选轮廓区域分别包含的那些平面,进而计算所包含平面内的特征点的深度,并根据特征点的深度计算该平面内的特征点的平面平均深度。在计算完备选轮廓区域全部包含的平面平均深度后,再根据平面平均深度计算备选轮廓区域的特征点的平均深度。
在一些实施例中,如果投影面1中,所在的其中一个平面面积较小,平面内不存在特征点,此时, 控制器23可以将该平面的区域划分至相邻的平面内,以确保投影面1内的每个区域都被计算。
在计算出特征点的平均深度后,对于各个备选区域轮廓,控制器23还可以计算各个备选区域轮廓的面积,以及第二采样图像的面积,并根据备选区域轮廓的面积和第二采样图像的面积计算备选轮廓区域相对于第二采样图像的面积比例。该面积比例可以根据数值由大到小排序。在计算出备选轮廓区域内特征点的平均深度和备选轮廓区域相对于第二采样图像的面积比例后,控制器23还可以根据上述两项数据生成第一提示信息,并在避障开关的设置状态开启时,控制出光组件21投射第一提示信息。第一提示信息中包括各个备选轮廓区域,以及各个备选轮廓区域对应的特征点的平均深度和面积比例。
用户可以通过投射到投影面1的第一提示信息,查看到可选择的备选轮廓区域。并在可选择的备选轮廓区域选择其中一个作为特征轮廓区域。基于第一提示信息,用户可以通过投影设备的控制装置上的按键生成选中命令。控制器23在接收到选中指令后,响应于选中指令,将选中指令中指定的备选轮廓区域标记为特征轮廓区域。
在一些实施例中,第一提示信息可以以列表的形式显示,列表中包括可选择的备选轮廓区域,以及对应备选轮廓区域的特征点的平均深度和面积比例。用户在通过控制装置选中备选轮廓区域时,控制器23还可以控制出光组件21将所选中的备选轮廓区域的轮廓部分进行标记,以便用户能够更直观的看到备选轮廓区域的面积。控制器23可以选用与特征图卡或者特征点的颜色不同的其他颜色来作为标记颜色,本申请实施例对标记颜色不做具体限制。
S500:基于特征点计算投影面与出光组件之间的夹角,以及控制出光组件根据夹角投射投影内容至投影面。
在确定特征轮廓区域后,控制器23会根据特征轮廓区域在第二采样图像中提取特征点,并根据特征点,计算投影面1与出光组件21之间的夹角,并根据夹角,控制出光组件21向投影面1投射矫正后的投影内容。
在一些实施例中,控制器23在基于特征点计算投影面1与出光组件21之间的夹角的过程中,还可以控制相机22调出相机坐标系,并获取特征轮廓区域内的特征点在相机坐标系下的特征点坐标。特征点坐标通常为特征点所在图形的中心,例如,特征点为正方形或矩形时,特征点坐标即为正方形或矩形中心的坐标,特征点为圆形时,特征点坐标即为圆心的坐标。
为了便于相机22拍摄采样图像,相机22通常与出光组件21一起设置在投影设备的正前方。在获取相机坐标系下的特征点坐标后,控制器23还需要控制出光组件21调出出光组件坐标系。并根据相机22和出光组件21的硬件参数,将特征点坐标转化为在出光组件坐标系下的出光点坐标。硬件参数可以为相机22的镜头圆心和出光组件21的圆心的矢量位移值。
在一些实施例中,控制器23可以先获取相机22的相机镜头圆心坐标,如图26所示,X1Y1坐标系为出光组件坐标系,X2Y2为相机坐标系。当相机22的相机镜头圆心在相机坐标系的坐标为(0,0),然后将上述坐标转化在出光组件坐标系下,并重新对相机镜头圆心的坐标定位。示例性的,重新定位后的坐标为(30,-40)。那么控制器23就可已根据重新定位前后的坐标计算出相机22的镜头圆心和出光组件21的圆心的矢量位移值为50。在计算出矢量位移值后,控制器23可以根据矢量位移值将相机坐标系中所有的特征点坐标全部转化为在出光组件坐标系下的出光点坐标。
将特征点坐标转化为在出光组件坐标系下的出光点坐标后,控制器23可以根据多个出光点坐标,在出光组件坐标系下的拟合新的投影面1。在拟合出新的投影面1后,控制器23会计算投影面1与出光组件21对应的出光面之间的夹角,并根据夹角,控制出光组件21向投影面投射矫正后的投影内容,以使用户能够看到矫正后的投影画面。
在一些实施例中,在控制出光组件21根据夹角投射投影内容至投影面1的过程中,控制器23还可以获取出光组件21的运行参数,并根据运行参数以及投影面1与出光组件21之间的夹角计算可投影区域。其中,运行参数可以包括投射距离、出光组件21的焦距或分辨率等。例如,控制器23可以根据投射距离计算在垂直(90°)投射时,投影画面的面积。然后根据投影面1与出光组件21之间的夹角,确定该夹角对应的三角函数值。最后,根据投影画面的面积和三角函数值计算投影画面的可投影区域。
为了适用于移动终端或者智能电视等播放设备,通常投影设备投射的投影画面为矩形,当可投影区域为不规则图形或非矩形图形时,控制器23还可以在可投影区域中划定目标投影区域,目标投影区域为可投影区域中的最大内接矩形区域,矩形区域具有预设宽高比。目标投影区域可以适配与横屏或竖屏播放的投影画面。如图27所示,在划定目标投影区域之前,控制器23还可以对所要投射的视频或图片播放 源进行识别,提取出上述投影内容的画面播放比例。通常播放比例包括4:3、16:9、2.39:1或1.85:1等。基于所提取的画面播放比例与矩形区域的宽高比进行比对,确定目标投影区域是横向的最大内接矩形区域还是纵向的最大内接矩形区域。
图28示出一种根据避障指令划分目标投影区域的流程图,具体包括如下步骤:
S2801:计算可投影区域;
S2802:判断是否识别到避障指令;在识别到避障指令时执行S2803~S2804,在未识别到避障指令时执行S2805~S2806;
S2803:将避障状态设置为开启;
S2804:在第二采样图像中提取特征点划定目标投影区域;
S2805:将避障状态设置为关闭;
S2806:根据出光组件的出光面顶点坐标划定目标投影区域。
在划定目标投影区域时,控制器23还可以检测用户输入的用于启动避障功能的避障指令。如果检测到避障指令,控制器23将投影设备的避障状态切换为开启,并根据在第二采样图像中提取特征点划定目标投影区域。而如果没有检测到避障指令,投影设备的避障状态切换依然为关闭,控制器23根据出光组件21的出光面的顶点坐标划定目标投影区域。
划定目标投影区域后,控制器23还可以将目标投影区域坐标逆变换至出光组件的出光面,并控制出光组件,按照逆变换后的目标投影区域坐标投射投影内容。
基于同一发明构思,本申请实施例还提供一种投影画面处理方法,应用于投影设备,投影设备包括出光组件、相机以及控制器,如图29所示,投影画面处理方法包括:
S100:响应于投影画面矫正指令,控制出光组件投射矫正图像;
其中,矫正图像包括纯色图卡和特征图卡。
S200:获取相机对纯色图卡拍摄获得的第一采样图像,以及对特征图卡拍摄获得的第二采样图像。
S300:根据第一采样图像确定特征轮廓区域;
其中,特征轮廓区域为第一采样图像中面积最大的轮廓区域或用户指定的轮廓区域。
S400:按照特征轮廓区域在第二采样图像中提取特征点。
S500:基于特征点计算投影面与出光组件之间的夹角,以及控制出光组件根据夹角投射投影内容至投影面。
由上述方案可知,投影设备在接收到投影画面矫正指令后,控制出光组件投射初色图卡和特征图卡的矫正图像。并获取相机对纯色图卡拍摄的第一采样图像和相机对特征图卡拍摄的第二采样图像。根据第一采样图像确定特征轮廓区域,按照特征轮廓区域在第二采样图像中提取特征点。基于从第二采样图像中所提取的特征点计算投影面与出光组件之间的夹角,出光组件根据夹角投射投影内容至投影面,以确保在投影面被障碍物遮挡时,投影设备能够根据避障后的特征轮廓区域提取特征点,在投影面投射出矫正好的投影图像,以提高用户使用投影设备时的体验感。
在一些实施例中,当用户开启投影设备2后,投影设备2可以将用户预先设置好的内容投射到投影面中,投影面可以是墙面或者幕布,投影面中可以显示出投影图像,以供用户进行观看。
在一些实施例中,投影设备包括出光组件,用于投射投影内容至投影面,出光组件包括激光光源,光机以及镜头。光机内设置有数字微镜元件(Digital Micromirror Device,简称DMD),是投影设备的核心成像器件,用于画面成像。具体的,激光光源为光机提供照明,光机中DMD按照DMD平面的投影坐标调节光源光束,并输出至镜头进行成像,投射至投影介质形成投影画面。
当投影画面投射至投影面时,投影面有障碍物存在或者开启自动避障功能后没有避开障碍物(障碍物识别错误或是不完全避障),此时投影设备可以通过画面移动功能来避开障碍物;另外由于避障区域不受用户控制,当在投影正中心存在障碍物时,避障策略将投影避障在左侧,而用户期望在右侧,此时也可以通过投影设备的画面移动功能解决;可以通过画面缩放与画面移动来调节至满足用户需求的位置与大小。
在一些实施例中,画面缩放与画面移动是通过调节DMD平面的投影坐标来实现的,但当投影设备投影倾斜时,为了使投射到投影面的投影形状保持为矩形,因而需要投影设备通过自动梯形校正,调整DMD平面的投影坐标为梯形。此时如果直接移动光机DMD平面的投影坐标,会导致投射到投影面的投影形状发生变化,降低用户体验。
如图30所示,为DMD平面的投影坐标移动的示意图。位置1的梯形为投影设备侧投时自动梯形校正后,投射到投影面的投影形状为矩形时DMD平面的投影坐标形状,当执行画面移动指令后,DMD平面位置1的梯形移动至位置2。由于在位置1时投射到投影面的投影形状为矩形,当移动到位置2时,此时位置1的梯形右边与位置2的梯形左边重合,且位置1的梯形右边长度大于位置2的梯形左边长度。显然,当移动到位置2时,位置2的梯形左边长投射到投影面时的投影长度必然小于位置1的梯形右边长投射到投影面时的投影长度。因而当投影设备侧投时,在DMD平面按照梯形的投影坐标直接平移,会使投射到投影面的投影形状发生形变。
画面移动的原理是通过控制图像在DMD平面的不同投影坐标上显示,进而实现投影画面移动的效果。如图31所示,为投影设备正投时DMD平面的投影坐标移动的示意图。DMD有2K、4K、8K等尺寸,以2K为例,在投影设备正常投影无校正,无避障,无缩放等裁剪画面显示时,DMD平面的最大投影坐标为(0,0),(1920,0),(1920,1080),(0,1080),按照该投影坐标投射即呈现最大投影。此时由于没有画面裁剪,投影画面无法平移,也就是说,投影画面的平移必须是DMD平面的投影坐标必须小于最大投影坐标。出光组件按照DMD平面的最大投影坐标投射投影内容时,投影的区域最大,即最大投影区域,而投影画面移动需在该最大投影区域之内移动。例如,在投影设备正投时,出光组件与投影面无夹角,此时DMD平面的投影坐标为矩形,投射到投影面的投影形状也为矩形,因而投影画面在移动前是矩形,移动后也是矩形。而当投影设备侧投时,如图32所示,为投影设备侧投时DMD平面的投影坐标移动的示意图。为使投射到投影面的投影形状为矩形需要通过自动梯形校正,调整DMD平面的投影坐标为梯形,相应的,为了保证投影画面在移动前后均为矩形,移动后投影坐标的梯形形状与移动前投影坐标的梯形形状也不一致。
为了解决投影设备在侧投下进行画面移动时投影画面产生形变的问题,本申请实施例还提供一种投影设备,如图33所示,包括出光组件21和控制器23。控制器可以根据DMD平面的投影坐标,并基于画面移动指令进行调整,以获取DMD平面移动后的投影坐标,按照移动后的投影坐标将投影内容投射至投影面,以解决移动时投影画面产生形变的问题。如图33所示,控制器23可以用于执行该投影画面移动对应的程序步骤,包括以下内容:
S100:获取用户输入的画面移动指令。
其中,画面移动指令包括移动方向和移动距离。用户输入画面移动指令的方式可以包括多种,输入画面移动指令的方式可以包括手动输入和语音输入。其中,手动输入可以通过投影设备上的实体按键,或者投影设备配套遥控器上的实体按键进行输入。例如,用户输入移动距离时,可以通过投影设备配套遥控器上的上、下、左、右键输入移动方向和移动距离,进而控制投影画面移动。语音输入可以通过识别用户的语音,生成画面移动指令。例如,用户可以在接通投影设备的电源后,按下投影设备或投影设备配套遥控器上的画面移动按键,然后输入语音,例如,左移三步,投影设备识别语音中的移动方向和移动距离,即获取画面移动指令。
在一些实施例中,由于用户输入多个移动方向或者投影设备在识别用户语音时识别出多个移动方向,导致获取到的画面移动指令中存在多个方向的情况,例如左移和右移同时存在,或者上移和下移同时存在,进而不能明确投影画面移动的方向。
因此,在获取到用户输入的画面移动指令后,控制器23可以响应于画面移动指令,检测画面移动指令中的移动方向;如果画面移动指令中同时存在相反的移动方向,生成第二提示信息,以及控制出光组件21投射第二提示信息。
例如,用户输入画面移动指令后,控制器23对画面移动指令进行检测,当检测到同时存在相反的移动方向,则生成用于提示用户输入的移动方向异常的第二提示信息,并控制出光组件21将第二提示信息投射至投影面1,第二提示信息可以为“移动方向异常,请重新输入”等用于提示用户重新输入画面移动指令的提示,使得用户重新输入画面移动指令,控制器23重新对画面移动指令进行检测,如果未检测到移动方向异常的情况,则可根据该画面移动指令,执行后续的投影画面移动操作。
S200:响应于画面移动指令,获取当前出光面的顶点坐标以及投影面与出光面之间的旋转矩阵。
出光面为出光组件21中用于画面成像的核心成像器件DMD的平面。当前出光面的顶点坐标为投影设备自动避障或校正后画面缩放的出光面的顶点坐标。
在一些实施例中,投影设备可以自动对投影区域进行障碍物检测,并通过障碍物检测结果确定投影区域中没有障碍物后投射投影图像,从而实现自动避障功能。也就是说,如果投影区域中存在障碍物, 投影设备在执行自动避障过程之前的投影区域与完成避障过程的投影区域是不同的。
在一些实施例中,当投影设备的投射角度、及至投影面距离发生变化时,会导致投影图像发生形变,为了保证投影图像为矩形,可以通过投影设备进行自动梯形校正。
因此,通过投影设备的自动避障或自动梯形校正后,其DMD平面的顶点坐标会发生变化。投影设备中可以预先配置存储模块,存储模块可以对投影设备的自动避障和自动梯形校正的情况进行实时记录及存储。
当获取到画面移动指令后,投影设备可以从存储模块中读取投影面1与出光组件21的夹角,计算投影面1与出光面之间的旋转矩阵。在一些实施例中,如图34所示,投影设备还包括相机22,相机22被配置为拍摄投影内容图像;控制器23还被配置为执行以下内容:
S3401:获取相机拍摄的投影图像;
S3402:识别在相机坐标系下投影图像中的特征点坐标;
S3403:根据出光组件和相机的硬件参数,将特征点坐标转换至出光组件坐标系;
S3404:根据出光组件坐标系下的特征点坐标拟合投影平面;
S3405:计算投影平面与出光面的夹角;
S3406:根据投影平面与出光面的夹角,构建旋转矩阵。
在确定旋转矩阵的步骤中,控制器23可以控制出光组件21投射标定图卡至投影面,然后控制相机22拍摄标定图卡的投影图像,相机22可以为双目相机或RGBD等相机,基于双目相机或RGBD等相机,可获取投影图像中标定图卡特征点在相机坐标系下的坐标。然后根据出光组件21与相机22之间的外参,控制器23可实现将特征点坐标从相机坐标系转换至出光组件坐标系,然后将出光组件坐标系下所有特征点进行拟合,拟合投影平面,进而拟合投影平面与出光面的夹角,根据投影平面与出光面的夹角,可计算投影面与之间的旋转矩阵。
S300:根据旋转矩阵与当前出光面的顶点坐标计算当前投影区域。
控制器23可以将在出光面坐标系下的当前出光面的顶点坐标,基于所获取的转换矩阵转换为投影面坐标系下的坐标,即可得到当前的实际投影区域的实际位置坐标值。
例如,基于投影面与出光面之间的旋转矩阵,将当前出光面的顶点坐标转换至投影面坐标系下时,如图35所示,对应的坐标为A(x1,y1),B(x2,y2),C(x3,y3),D(x4,y4),即当前投影区域的顶点坐标。具体的,将当前出光面的顶点坐标代入以下公式,计算当前投影区域:
其中,M为出光组件的硬件参数,即内参,R为旋转矩阵。
S400:根据当前投影区域的顶点坐标、移动方向和移动距离计算顶点移动距离。
获取到当前投影区域后,可以根据用户输入的画面移动指令,计算按照画面移动指令移动后的投影区域。
在一些实施例中,可以根据移动方向和移动距离移动当前投影区域的顶点坐标,得到移动后的投影区域的顶点坐标,以确定移动后的投影区域。由于随着投影距离的变化,投影面的大小也随着变化,因而顶点坐标的顶点移动距离的选取不能采用固定的长度来表示。因此,如图36所示,控制器23还被配置为执行以下内容:
S3601:获取移动方向归属的移动方式;如果移动方式为左右移动执行S3602~S3603,如果移动方式为上下移动执行S3604~S3605;
S3602:根据当前投影区域的顶点坐标计算投影宽度比;
S3603:按照投影宽度比和移动距离计算顶点移动距离;
S3604:根据当前投影区域的顶点坐标计算投影高度比;
S3605:按照投影高度比和移动距离计算顶点移动距离。
在一些实施例中,投影宽度比根据当前投影区域的实际宽度计算得到,投影高度比根据当前投影区域实际高度计算得到,因此,控制器23还被配置为执行以下内容:
如果移动方式为左右移动,计算当前投影区域的第一横坐标差值和第二横坐标差值,第一横坐标差值为当前投影区域上边长两个顶点的横坐标之差,第二横坐标差值为当前投影区域下边长两个顶点的横 坐标之差。
例如,如图35所示,第一横坐标差值width1为当前投影区域上边长两个顶点A,B的横坐标之差;第二横坐标差值width2为当前投影区域下边长两个顶点C,D的横坐标之差。
根据第一横坐标差值和第二横坐标差值计算投影宽度比,投影宽度比为第一横坐标差值和第二横坐标差值的均值,即(width1+width2)/2。
同理,如果移动方式为上下移动,计算当前投影区域的第一纵坐标差值和第二纵坐标差值,第一纵坐标差值为当前投影区域左边长两个顶点的纵坐标之差,第二纵坐标差值为当前投影区域右边长两个顶点的纵坐标之差。
例如,如图35所示,第一横坐标差值height1为当前投影区域上边长两个顶点A,D的纵坐标之差;第二横坐标差值height2为当前投影区域下边长两个顶点B,C的纵坐标之差。
根据第一纵坐标差值和第二纵坐标差值计算投影高度比,投影高度比为第一纵坐标差值和第二纵坐标差值的均值,即(height1+height2)/2。
在一些实施例中,获取移动方向归属的移动方式。如果移动方式为左右移动,根据以下公式计算顶点移动距离:
其中,width1为第一横坐标差值,width2为第二横坐标差值,step为用户输入的移动距离(步数)。
在一些实施例中,获取移动方向归属的移动方式。如果移动方式为上下移动,根据以下公式计算顶点移动距离:
其中,height1为第一纵坐标差值,height2为第二纵坐标差值,step为用户输入的移动距离(步数)。
S500:按照顶点移动距离和当前投影区域的顶点坐标计算目标投影坐标。
针对于不同的移动方式,其移动当前投影区域的顶点坐标也不同。在当前投影区域左右移动时,其当前投影区域顶点的纵坐标不会变化。在当前投影区域上下移动时,其当前投影区域顶点的横坐标不会变化。因此,为了提高响应速度,如图37所示,控制器23还被配置为执行以下内容:
S3701:获取移动方向归属的移动方式;如果移动方式为左右移动执行S3702,如果移动方式为上下移动执行S3703;
S3702:将当前投影区域顶点的横坐标按照移动方向移动顶点移动距离;
S3703:将当前投影区域顶点的纵坐标按照移动方向移动顶点移动距离;
S3704:得到目标投影坐标。
例如,如果移动方式为左右移动,只需移动当前投影区域顶点的横坐标,如果移动方向为左移,将当前投影区域顶点的横坐标减去顶点移动距离,即目标投影坐标为:
A’(x1-Disstep1,y1),B’(x2-Diststep1,y2),C’(x3-Diststep1,y3),D’(x4-Diststep1,y4)。
如果移动方向为右移,将当前投影区域顶点的横坐标加上顶点移动距离,即目标投影坐标为:
A’(x1+Disstep1,y1),B’(x2+Disstep1,y2),C’(x3+Disstep1,y3),D’(x4+Disstep1,y4)。
如果移动方式为上下移动,只需移动当前投影区域顶点的纵坐标,如果移动方向为上移,将当前投影区域顶点的纵坐标减去顶点移动距离,目标投影坐标为:
A’(x1,y1-Disstep2),B’(x2,y2-Disstep2),C’(x3,y3-Disstep2),D’(x4,y4-Disstep2)。
如果移动方向为下移,将当前投影区域顶点的纵坐标加上顶点移动距离,目标投影坐标为:
A’(x1,y1+Disstep2),B’(x2,y2+Disstep2),C’(x3,y3+Disstep2),D’(x4,y4+Disstep2)。
在一些实施例中,由于当前投影区域的顶点坐标是根据当前出光面的顶点坐标计算的理想的实际投影区域,实际存在误差,计算得到的投影区域进行修正的话则会随着投影画面移动次数的增多,导致误差累计越来越大,最后呈现的效果则是投影画面严重变形。为了避免这个问题,因此每次在计算出投影区域后,需要去修正一下投影形状,防止投影画面移动后,下次移动引入误差传递。修正的方法则是在计算出的投影区域A,B,C,D内选择最大内接16:9矩形,然后再进行投影画面移动。
在一些实施例中,投影画面移动需在出光组件21所能投射的最大投影区域内移动,因此,根据画面移动指令,确定移动后的投影区域后,控制器23需判断移动后的投影区域是否在最大投影区域内,控制 器23还被配置为执行以下内容:
根据旋转矩阵与最大出光面的顶点坐标计算最大投影区域。
例如,出光组件21最大出光面的顶点坐标为amax(0,0),bmax(1920,0),cmax(1920,1080),dmax(0,1080)。根据最大出光面的顶点坐标,计算得到最大投影区域的顶点坐标Amax(x1max,y1max),Bmax(x2max,y2max),Cmax(x3max,y3max),Dmax(x4max,y4max),具体计算公式可参见当前投影区域的计算步骤,不做赘述。
如果最大投影区域包含目标投影坐标,执行将目标投影坐标转换至出光面的步骤。
如果最大投影区域不包含目标投影坐标,生成第一提示信息,以及控制出光组件21投射第一提示信息。
判断移动后的投影区域是否在最大投影区域内,需判断移动后的投影区域的顶点坐标,即目标投影坐标是否均在最大投影区域内。例如,如果目标投影坐标均在最大投影区域内,则移动后的投影区域位于最大投影区域内,控制器23可继续执行将目标投影坐标转换至出光面的步骤,如果存在不在最大投影区域内的目标投影坐标,则移动后的投影区域不完全位于最大投影区域内,即移动至最大投影区域以外,控制器23可生成用于提示用户移动后的投影区域到达边界的第一提示信息,并控制出光组件将第一提示信息投射至投影面,提示用户按照该画面移动指令移动后的投影画面超出所能投射的最大投影区域。
在一些实施例中,判断目标投影坐标是否在最大投影区域内,采用二维向量叉乘的原理:即向量a和向量b进行向量叉乘,若结果小于0,则表示向量b在向量a的顺时针方向;若结果大于0,则表示向量b在向量a的逆时针方向;若等于0,表示向量a与向量b共线。其中,顺逆时针是指两向量平移至起点相连,从某个方向旋转到另一个向量小于180度。
可以理解的是,如果移动后的投影区域的顶点在最大投影区域内,沿着最大投影区域走一圈,该顶点相对于行走路径始终保持相同方向,如果移动后的投影区域的顶点在最大投影区域外,沿着最大投影区域走一圈,该顶点相对于行走路径会有不同方向。
因此,基于上述原理,如图38所示,控制器23还被配置为执行以下内容:
S3801:根据最大投影区域的顶点坐标和目标投影坐标,确定边长向量和连接向量,边长向量为最大投影区域相邻两个顶点坐标间的向量,连接向量为每个边长向量的起点坐标与目标投影坐标间的向量;
S3802:将边长向量分别与连接向量进行向量叉乘,得到叉乘向量;
S3803:判断叉乘向量的值是否同号;如果叉乘向量的值同号或者为零执行S3804,如果叉乘向量的值不同号执行S3805;
S3804:确定最大投影区域包含目标投影坐标;
S3805:确定最大投影区域不包含目标投影坐标。
例如,如图39所示,依次判断移动后的投影区域四个顶点A’、B’、C’、D’是否在最大投影区域内,以移动后的投影区域的顶点坐标A’为例。按照顺时针选取最大投影区域四条边长的4个边长向量,分别为:AmaxBmax=(x2max-x1max,y2max-y1max);BmaxCmax=(x3max-x2max,y3max-y2max);CmaxDmax=(x4max-x3max,y4max-y3max);DmaxAmax=(x1max-x4max,y1max-y4max)。
根据移动方向计算最大投影区域的四个顶点与移动后的投影区域的顶点坐标A’的连接线对应的4个连接向量。
如果移动方向为左移,4个连接向量为:AmaxA’=(x1-Disstep1-x1max,y1-y1max);BmaxA’=(x1-Disstep1-x2max,y1-y2max);CmaxA’=(x1-Disstep1-x3max,y1-y3max);DmaxA’=(x1-Disstep1-x4max,y1-y4max)。如果移动方向为右移,4个连接向量为:AmaxA’=(x1+Disstep1-x1max,y1-y1max);BmaxA’=(x1+Disstep1-x2max,y1-y2max);CmaxA’=(x1+Disstep1-x3max,y1-y3max);DmaxA’=(x1+Disstep1-x4max,y1-y4max)。如果移动方向为上移,4个连接向量为:AmaxA’=(x1-x1max,y1-Disstep2-y1max);BmaxA’=(x1-x2max,y1-Disstep2-y2max);CmaxA’=(x1-x3max,y1-Disstep2-y3max);DmaxA’=(x1-x4max,y1-Disstep2-y4max)。如果移动方向为下移,4个连接向量为:AmaxA’=(x1-x1max,y1+Disstep2-y1max);BmaxA’=(x1-x2max,y1+Disstep2-y2max);CmaxA’=(x1-x3max,y1+Disstep2-y3max);DmaxA’=(x1-x4max,y1+Disstep2-y4max)。
将4个边长向量分别与4个连接向量进行向量叉乘,得到四个叉乘向量,如果叉乘向量小于0,则连接向量位于边长向量的顺时针方向,如果叉乘向量大于0,则连接向量位于边长向量的逆时针方向,如果叉乘向量等于0,则连接向量与边长向量共线。
依次判断四个叉乘向量的值。如果叉乘向量的值不同号,存在大于0的值,即存在连接向量位于边长向量的逆时针方向,则确定最大投影区域不包含移动后的投影区域的顶点坐标A’。如果四个叉乘向量的值同号或为0,均为小于0或等于0的值,即连接向量位于边长向量的顺时针方向或者连接向量与边长向量共线,则确定最大投影区域包含移动后的投影区域的顶点坐标A’。
例如,如图29所示,AmaxBmax×AmaxA’<0;BmaxCmax×BmaxA’<0;CmaxDmax×CmaxA’<0;DmaxAmax×DmaxA’<0。即4个连接向量均在4个边长向量的顺时针方向,移动后的投影区域的顶点A’在最大投影区域内。
同理,根据上述步骤,可确定移动后的投影区域其他三个顶点B’、C’、D’是否在最大投影区域内,如果移动后的投影区域的顶点A’,B’,C’,D’中存在至少一个顶点在最大投影区域外,则生成第一提示信息,提示用户移动到边。
在一些实施例中,控制器23可以按照逆时针选取最大投影区域四条边长的4个边长向量,分别为Amax Dmax,DmaxCmax,CmaxBmax,BmaxAmax。将4个边长向量分别与4个连接向量进行向量叉乘,得到四个叉乘向量。对应的,依次判断四个叉乘向量的值。如果叉乘向量的值不同号,存在小于0的值,即存在连接向量位于边长向量的顺时针方向,则确定最大投影区域不包含移动后的投影区域的顶点坐标A’,如果四个叉乘向量的值同号或为0,均为大于0或者等于0的值,即连接向量位于边长向量的逆时针方向或者连接向量与边长向量共线,则确定最大投影区域包含移动后的投影区域的顶点坐标A’。
在一些实施例中,依次判断移动后的投影区域的顶点坐标,即目标投影坐标是否均在最大投影区域内,若判断一个顶点位于最大投影区域之外后,直接生成第一提示信息,例如,控制器23依次判断移动后的投影区域的顶点A’、B’、C’、D’是否在最大投影区域内时,根据上述步骤,判断顶点A’是否位于最大投影区域内,若顶点A’位于最大投影区域内,继续判断顶点B’是否位于最大投影区域内,若顶点B’位于最大投影区域外,则直接生成第一提示信息。
S600:基于旋转矩阵,将目标投影坐标转换至出光面,得到出光投影坐标,以及控制出光组件按照出光投影坐标将投影内容投射至投影面。
如果移动后的投影区域的顶点坐标,即目标投影坐标均在最大投影区域内,控制器23可以将移动后的投影区域的顶点坐标,基于所获取的转换矩阵转换为出光组件坐标系下的出光投影坐标,即可得到出光组件投射根据画面移动指令移动后的投影画面所需的实际位置坐标值,也可以理解为出光组件按照出光投影坐标投射播放内容后即可完成无形变的移动投影画面。
例如,如图39所示,移动后的投影区域的顶点坐标为A’,B’,C’,D’均在最大投影区域内,基于投影面与出光面之间的旋转矩阵,将移动后的投影区域的顶点坐标转换至出光组件坐标系下,得到出光投影坐标。出光组件按照出光投影坐标将投影内容投射至投影面,如图40所示,即实现投影画面的移动。
基于同一发明构思,本申请实施例还提供一种投影画面处理方法,应用于投影设备,投影设备包括:出光组件以及控制器。如图41所示,投影画面处理方法包括:
S100:获取用户输入的画面移动指令;
其中,画面移动指令包括移动方向和移动距离。
S200:响应于画面移动指令,获取当前出光面的顶点坐标以及投影面与出光面之间的旋转矩阵。
S300:根据旋转矩阵与当前出光面的顶点坐标计算当前投影区域。
S400:根据当前投影区域的顶点坐标、移动方向和移动距离计算顶点移动距离。
S500:按照顶点移动距离和当前投影区域的顶点坐标计算目标投影坐标。
S600:基于旋转矩阵,将目标投影坐标转换至出光面,得到出光投影坐标,以及控制出光组件按照出光投影坐标将投影内容投射至投影面。
投影设备可以在接收到画面移动指令后,获取当前出光面的顶点坐标以及投影面与出光面之间的旋转矩阵。根据旋转矩阵与当前出光面的顶点坐标计算当前投影区域,根据当前投影区域的顶点坐标、移动方向和移动距离计算顶点移动距离,按照顶点移动距离和当前投影区域的顶点坐标计算目标投影坐标。基于旋转矩阵,将目标投影坐标转换至出光面,得到出光投影坐标,控制出光组件按照出光投影坐标将投影内容投射至投影面,解决投影设备在侧投下进行画面移动时投影画面产生形变的问题,提高用户体验。

Claims (30)

  1. 一种投影设备,包括:
    出光组件,被配置为投射投影内容至投影面;
    相机,被配置为拍摄采样图像;
    控制器,被配置为:
    响应于避障指令,获取变换矩阵,以及获取用户输入的调整指令,所述变换矩阵为所述相机与所述出光组件之间坐标的单应性矩阵;所述调整指令中包括用户指定的第一障碍目标;
    在采样图像中识别所述第一障碍目标,所述采样图像为所述相机在所述出光组件投射矫正图像时拍摄获得的图像;
    根据所述第一障碍目标在所述采样图像中划定可投影区域,所述可投影区域为所述采样图像中除所述第一障碍目标以外区域所容纳的最大预设宽高比的矩形区域;
    按照所述可投影区域和所述变换矩阵确定目标投影区域,以及控制所述出光组件将投影内容投射至所述目标投影区域。
  2. 根据权利要求1所述的投影设备,其中,所述控制器还被配置为:
    响应于避障指令,控制所述出光组件投射矫正图像,所述矫正图像包括纯色图卡和特征图卡;
    控制所述相机分别在所述出光组件投射纯色图卡和所述特征图卡时,对所述投影面上显示的矫正图像执行拍照,以获得第一采样图像和第二采样图像;
    在所述第一采样图像中识别所述第一障碍目标;
    根据所述第二采样图像中特征图卡的形状,建立所述变换矩阵。
  3. 根据权利要求2所述的投影设备,其中,所述控制器执行控制所述出光组件投射矫正图像,还被配置为:
    检测输入所述避障指令时刻,所述出光组件的投射状态;
    如果所述出光组件正在投射媒资画面,在待投射的媒资数据流中插入所述纯色图卡和所述特征图卡;
    根据所述纯色图卡和所述特征图卡在所述媒资数据流中的插入位置,计算拍摄时间;
    将所述拍摄时间发送给所述相机,以使所述相机按照所述拍摄时间拍摄获得所述第一采样图像和所述第二采样图像。
  4. 根据权利要求2所述的投影设备,其中,所述控制器执行获取变换矩阵,还被配置为:
    遍历所述第二采样图像中像素点的颜色值,以及根据颜色值在所述第二采样图像中识别特征点;
    提取所述第二采样图像对应特征图卡中的特征分布信息;
    根据识别到的所述特征点与所述特征分布信息,建立所述相机与所述出光组件之间坐标的单应性矩阵;
    存储所述单应性矩阵,以获取所述变换矩阵。
  5. The projection device according to claim 1, wherein, in identifying the first obstacle target in the sample image, the controller is further configured to:
    call a recognition model according to the type of the first obstacle target specified by the user, the recognition model being a neural network model trained on sample image data;
    input the sample image into the recognition model;
    obtain a recognition result output by the recognition model, the recognition result being the classification probability that the sample image contains the first obstacle target;
    if the recognition result contains the first obstacle target, perform the step of defining the projectable area in the sample image according to the first obstacle target;
    if the recognition result does not contain the first obstacle target, generate first prompt information and control the light emitting component to project the first prompt information.
  6. The projection device according to claim 1, wherein the adjustment instruction further includes projection parameters specified by the user, and, in determining the target projection area according to the projectable area and the transformation matrix, the controller is further configured to:
    fit the projectable area into an effective projection range according to the transformation matrix, the effective projection range being the range of the projectable area mapped into the coordinate system of the light emitting component;
    extract the projection parameters from the adjustment instruction;
    define the target projection area within the effective projection range according to the projection parameters.
  7. The projection device according to claim 6, wherein the projection parameters include a specified picture size, and, in defining the target projection area within the effective projection range according to the projection parameters, the controller is further configured to:
    obtain the boundary size of the effective projection range;
    if the boundary size is greater than or equal to the specified picture size, define the target projection area within the effective projection range according to the specified picture size;
    if the boundary size is smaller than the specified picture size, generate second prompt information and control the light emitting component to project the second prompt information.
  8. The projection device according to claim 7, wherein the projection parameters further include a specified interval distance, and, in defining the target projection area within the effective projection range according to the projection parameters, the controller is further configured to:
    calculate an extremum size according to the specified picture size and the specified interval distance, the extremum size including a minimum width and a minimum height, the minimum width being the sum of the specified picture width in the specified picture size and the specified lateral distance in the specified interval distance, and the minimum height being the sum of the specified picture height in the specified picture size and the specified longitudinal distance in the specified interval distance;
    obtain the effective width and effective height of the effective projection range;
    if the effective width is smaller than the minimum width and/or the effective height is smaller than the minimum height, generate third prompt information and control the light emitting component to project the third prompt information;
    if the effective width is greater than or equal to the minimum width and the effective height is greater than or equal to the minimum height, define the target projection area within the effective projection range according to the specified interval distance and the specified picture size.
  9. The projection device according to claim 1, wherein, in defining the projectable area in the sample image according to the first obstacle target, the controller is further configured to:
    identify a second obstacle target in the projectable area, the second obstacle target being a target in the sample image other than the first obstacle target;
    if the second obstacle target exists in the projectable area, generate fourth prompt information and/or generate an inquiry instruction;
    control the light emitting component to project the fourth prompt information, and/or output the inquiry instruction;
    obtain a confirmation instruction input by the user based on the fourth prompt information and/or the inquiry instruction;
    in response to the confirmation instruction, redefine the projectable area in the sample image according to the second obstacle target.
  10. A projection image processing method, applied to a projection device, the projection device including a light emitting component, a camera, and a controller; the projection image processing method comprising:
    in response to an obstacle avoidance instruction, obtaining a transformation matrix and obtaining an adjustment instruction input by a user, wherein the transformation matrix is a homography matrix of coordinates between the camera and the light emitting component, and the adjustment instruction includes a first obstacle target specified by the user;
    identifying the first obstacle target in a sample image, the sample image being an image captured by the camera while the light emitting component projects a corrected image;
    defining a projectable area in the sample image according to the first obstacle target, the projectable area being the largest rectangular area with a preset aspect ratio that can be accommodated by the area of the sample image other than the first obstacle target;
    determining a target projection area according to the projectable area and the transformation matrix, and controlling the light emitting component to project the projection content onto the target projection area.
  11. A projection device, comprising:
    a light emitting component, configured to project projection content onto a projection surface;
    a camera, configured to capture a sample image;
    a controller, configured to:
    in response to a projection image correction instruction, control the light emitting component to project a corrected image, the corrected image including a solid-color chart and a feature chart;
    obtain a first sample image captured by the camera of the solid-color chart and a second sample image captured of the feature chart;
    determine a feature contour area according to the first sample image, the feature contour area being the contour area with the largest area in the first sample image or a contour area specified by the user;
    extract feature points from the second sample image according to the feature contour area;
    calculate the included angle between the projection surface and the light emitting component based on the feature points, and control the light emitting component to project the projection content onto the projection surface according to the included angle.
  12. The projection device according to claim 11, wherein, in determining the feature contour area according to the first sample image, the controller is further configured to:
    extract contour areas from the first sample image;
    detect the setting state of an obstacle avoidance switch;
    if the setting state is on, traverse the areas of the contour areas in the first sample image to screen out a preset number of candidate contour areas, the candidate contour areas being provided for the user to specify as the feature contour area;
    if the setting state is off, traverse the areas of the contour areas in the first sample image to screen out the contour area with the largest area as the feature contour area.
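A minimal OpenCV sketch of the two screening branches in claim 12, assuming OpenCV 4 and an Otsu-thresholded stand-in for the first sample image; the synthetic image and the candidate count are placeholders, not the claimed implementation.

```python
import cv2
import numpy as np

# Synthetic stand-in for the first sample image: a bright projection region
# with a dark obstacle inside it, on a dark background.
img = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(img, (40, 30), (280, 210), 255, -1)   # lit wall region
cv2.rectangle(img, (140, 90), (180, 130), 0, -1)    # obstacle hole

_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Obstacle avoidance switch off: keep the single contour with the largest area.
largest = max(contours, key=cv2.contourArea)

# Obstacle avoidance switch on: keep the N largest contours as candidate
# contour areas for the user to choose the feature contour area from.
N = 3
candidates = sorted(contours, key=cv2.contourArea, reverse=True)[:N]
print(len(contours), cv2.contourArea(largest))
```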
  13. The projection device according to claim 12, wherein, after traversing the areas of the contour areas in the first sample image to screen out the preset number of candidate contour areas, the controller is further configured to:
    calculate the average depth of the feature points in the second sample image that are located within the same candidate contour area;
    calculate the area ratio of the candidate contour area relative to the second sample image;
    generate first prompt information according to the average depth and the area ratio, and control the light emitting component to project the first prompt information;
    obtain a selection instruction input by the user based on the first prompt information;
    in response to the selection instruction, mark the candidate contour area specified in the selection instruction as the feature contour area.
  14. The projection device according to claim 13, wherein, in calculating the average depth of the feature points in the second sample image that are located within the same candidate contour area, the controller is further configured to:
    traverse the color values of the feature points in the candidate contour area;
    extract a projection shape according to the color values of a plurality of feature points;
    obtain the hardware parameters of the camera and the light emitting component, and obtain the standard shape on the feature chart in the second sample image;
    calculate the distance between the feature points and the light emitting component according to the projection shape, the standard shape, and the hardware parameters of the camera;
    calculate the average of the distances between the plurality of feature points and the light emitting component to obtain the average depth.
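A hedged sketch of one way such a per-feature-point distance could be estimated with a pinhole (similar-triangles) model; the focal length, standard-shape width, and observed pixel widths below are invented placeholders rather than the claimed computation.

```python
def feature_depth(focal_length_px, standard_width, projected_width_px):
    """Pinhole-model depth estimate: an object of known physical width that
    appears projected_width_px pixels wide lies at roughly this distance."""
    return focal_length_px * standard_width / projected_width_px

# Placeholder numbers: 1200 px focal length, a 0.5 m wide standard shape
# observed at roughly 300 px in the second sample image.
depths = [feature_depth(1200, 0.5, w) for w in (290, 300, 310)]
average_depth = sum(depths) / len(depths)
print(round(average_depth, 3))   # ≈ 2.0 metres
```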
  15. The projection device according to claim 12, wherein, in extracting contour areas from the first sample image, the controller is further configured to:
    traverse the pixel color values in the first sample image;
    identify a boundary figure according to the pixel color values, the boundary figure being a figure composed of pixels whose color difference from adjacent pixels is greater than or equal to a color difference threshold;
    delimit the contour areas according to the boundary figure and the edges of the first sample image.
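For illustration, a small NumPy sketch of marking boundary pixels by a neighbour colour-difference threshold; the threshold value and the toy image are assumptions, not the claimed method.

```python
import numpy as np

def boundary_mask(gray, threshold=30):
    """Mark pixels whose color difference from a horizontal or vertical
    neighbour reaches the threshold; these pixels form the boundary figure."""
    gray = gray.astype(np.int16)
    diff_x = np.abs(np.diff(gray, axis=1))          # difference with right neighbour
    diff_y = np.abs(np.diff(gray, axis=0))          # difference with lower neighbour
    mask = np.zeros(gray.shape, dtype=bool)
    mask[:, :-1] |= diff_x >= threshold
    mask[:-1, :] |= diff_y >= threshold
    return mask

# Toy image: a bright square on a dark background yields a square boundary.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
print(boundary_mask(img).astype(int))
```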
  16. The projection device according to claim 11, wherein, in calculating the included angle between the projection surface and the light emitting component based on the feature points, the controller is further configured to:
    obtain the feature point coordinates, in the camera coordinate system, of the feature points within the feature contour area;
    convert the feature point coordinates into light exit point coordinates in the coordinate system of the light emitting component according to the hardware parameters of the camera and the light emitting component;
    fit the projection surface in the coordinate system of the light emitting component according to a plurality of the light exit point coordinates;
    calculate the included angle between the projection surface and the light exit surface corresponding to the light emitting component.
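An illustrative sketch, assuming the light exit surface is the z = 0 plane: fit a plane normal to the light exit point coordinates by SVD and take the angle between the two normals. This is a generic plane-fitting recipe, not necessarily the claimed computation.

```python
import numpy as np

def plane_angle(points):
    """Fit a plane to 3-D feature points (light-emitting-component coordinates)
    and return its angle, in degrees, to the light exit surface, assumed here
    to be the z = 0 plane with normal (0, 0, 1)."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    cos_angle = abs(normal @ np.array([0.0, 0.0, 1.0])) / np.linalg.norm(normal)
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Feature points lying on a wall tilted 10 degrees about the y axis.
xs, ys = np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4))
zs = np.tan(np.radians(10)) * xs
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel() + 2.0])
print(plane_angle(pts))   # ≈ 10.0
```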
  17. The projection device according to claim 11, wherein, in controlling the light emitting component to project the projection content onto the projection surface according to the included angle, the controller is further configured to:
    obtain the operating parameters of the light emitting component;
    calculate a projectable area according to the operating parameters and the included angle;
    define a target projection area within the projectable area, the target projection area being the largest inscribed rectangular area in the projectable area, the rectangular area having a preset aspect ratio;
    inversely transform the coordinates of the target projection area to the light exit surface of the light emitting component according to the operating parameters and the included angle;
    control the light emitting component to project the projection content according to the inversely transformed coordinates of the target projection area.
  18. The projection device according to claim 17, wherein, in defining the target projection area within the projectable area, the controller is further configured to:
    detect an obstacle avoidance instruction input by the user for starting the obstacle avoidance function;
    if the obstacle avoidance instruction is detected, define the target projection area according to the feature points;
    if the obstacle avoidance instruction is not detected, define the target projection area according to the vertex coordinates of the light exit surface of the light emitting component.
  19. The projection device according to claim 11, wherein, in determining the feature contour area according to the first sample image, the controller is further configured to:
    input the first sample image into a recognition model, the recognition model being a neural network model trained on sample images;
    obtain a recognition result output by the recognition model, the recognition result being the classification probability that the first sample image contains an obstacle target;
    if the recognition result is that an obstacle target is contained, remove the contour area corresponding to the obstacle target from the first sample image, and determine the feature contour area in the first sample image with the obstacle target removed;
    if the recognition result is that no obstacle target is contained, determine the feature contour area according to the first sample image.
  20. A projection image processing method, applied to a projection device, the projection device including a light emitting component, a camera, and a controller; the projection image processing method comprising:
    in response to a projection image correction instruction, controlling the light emitting component to project a corrected image, the corrected image including a solid-color chart and a feature chart;
    obtaining a first sample image captured by the camera of the solid-color chart and a second sample image captured of the feature chart;
    determining a feature contour area according to the first sample image, the feature contour area being the contour area with the largest area in the first sample image or a contour area specified by the user;
    extracting feature points from the second sample image according to the feature contour area;
    calculating the included angle between the projection surface and the light emitting component based on the feature points, and controlling the light emitting component to project the projection content onto the projection surface according to the included angle.
  21. A projection device, comprising:
    a light emitting component, configured to project projection content onto a projection surface;
    a controller, configured to:
    obtain an image movement instruction input by a user, the image movement instruction including a movement direction and a movement distance;
    in response to the image movement instruction, obtain the vertex coordinates of the current light exit surface and the rotation matrix between the projection surface and the light exit surface;
    calculate the current projection area according to the rotation matrix and the vertex coordinates of the current light exit surface;
    calculate a vertex movement distance according to the vertex coordinates of the current projection area, the movement direction, and the movement distance;
    calculate target projection coordinates according to the vertex movement distance and the vertex coordinates of the current projection area;
    based on the rotation matrix, convert the target projection coordinates to the light exit surface to obtain light exit projection coordinates, and control the light emitting component to project the projection content onto the projection surface according to the light exit projection coordinates.
  22. The projection device according to claim 21, further comprising a camera configured to capture images of the projection content; the controller is further configured to:
    obtain a projection image captured by the camera, and identify feature point coordinates in the projection image in the camera coordinate system;
    convert the feature point coordinates into the coordinate system of the light emitting component according to the hardware parameters of the light emitting component and the camera;
    fit a projection plane according to the feature point coordinates in the coordinate system of the light emitting component, and calculate the included angle between the projection plane and the light exit surface;
    construct the rotation matrix according to the included angle between the projection plane and the light exit surface.
  23. The projection device according to claim 21, wherein, in calculating the vertex movement distance according to the vertex coordinates of the current projection area, the movement direction, and the movement distance, the controller is further configured to:
    obtain the movement mode to which the movement direction belongs;
    if the movement mode is left-right movement, calculate a projection width ratio according to the vertex coordinates of the current projection area, and calculate the vertex movement distance according to the projection width ratio and the movement distance;
    if the movement mode is up-down movement, calculate a projection height ratio according to the vertex coordinates of the current projection area, and calculate the vertex movement distance according to the projection height ratio and the movement distance.
  24. The projection device according to claim 23, wherein the controller is further configured to:
    if the movement mode is left-right movement, calculate a first abscissa difference and a second abscissa difference of the current projection area, the first abscissa difference being the difference between the abscissas of the two vertices of the upper edge of the current projection area, and the second abscissa difference being the difference between the abscissas of the two vertices of the lower edge of the current projection area;
    calculate the projection width ratio according to the first abscissa difference and the second abscissa difference, the projection width ratio being the mean of the first abscissa difference and the second abscissa difference;
    if the movement mode is up-down movement, calculate a first ordinate difference and a second ordinate difference of the current projection area, the first ordinate difference being the difference between the ordinates of the two vertices of the left edge of the current projection area, and the second ordinate difference being the difference between the ordinates of the two vertices of the right edge of the current projection area;
    calculate the projection height ratio according to the first ordinate difference and the second ordinate difference, the projection height ratio being the mean of the first ordinate difference and the second ordinate difference.
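A minimal sketch of claims 23-24 style vertex-shift arithmetic, assuming the vertices are ordered top-left, top-right, bottom-right, bottom-left and that the movement distance is given as a fraction of the picture size; both assumptions are illustrative, not part of the claims.

```python
import numpy as np

def vertex_shift(vertices, direction, distance):
    """Turn a user movement request into a per-vertex shift for the current
    projection area. Vertices are ordered top-left, top-right, bottom-right,
    bottom-left; distance is interpreted as a fraction of the picture size."""
    tl, tr, br, bl = np.asarray(vertices, dtype=float)
    if direction in ("left", "right"):
        # Projection width ratio: mean of the top-edge and bottom-edge widths.
        width_ratio = (abs(tr[0] - tl[0]) + abs(br[0] - bl[0])) / 2
        return width_ratio * distance
    else:  # "up" or "down"
        # Projection height ratio: mean of the left-edge and right-edge heights.
        height_ratio = (abs(bl[1] - tl[1]) + abs(br[1] - tr[1])) / 2
        return height_ratio * distance

quad = [(0, 0), (100, 0), (96, 60), (-4, 60)]   # a slightly skewed picture
print(vertex_shift(quad, "right", 0.1))          # 10.0: shift vertices 10 units right
```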
  25. The projection device according to claim 23, wherein, in calculating the target projection coordinates according to the vertex movement distance and the vertex coordinates of the current projection area, the controller is further configured to:
    if the movement mode is left-right movement, move the abscissas of the vertices of the current projection area by the vertex movement distance in the movement direction to obtain the target projection coordinates;
    if the movement mode is up-down movement, move the ordinates of the vertices of the current projection area by the vertex movement distance in the movement direction to obtain the target projection coordinates.
  26. The projection device according to claim 25, wherein, if the movement mode is left-right movement, the controller is further configured to:
    obtain the installation mode of the projection device;
    if the installation mode is front projection, move the abscissas of the vertices of the current projection area by the vertex movement distance in the movement direction;
    if the installation mode is rear projection, move the abscissas of the vertices of the current projection area by the vertex movement distance in the direction opposite to the movement direction.
  27. The projection device according to claim 21, wherein the controller is further configured to:
    calculate a maximum projection area according to the rotation matrix and the vertex coordinates of the maximum light exit surface;
    if the maximum projection area contains the target projection coordinates, perform the step of converting the target projection coordinates to the light exit surface;
    if the maximum projection area does not contain the target projection coordinates, generate first prompt information and control the light emitting component to project the first prompt information.
  28. The projection device according to claim 27, wherein the controller is further configured to:
    determine edge vectors and connection vectors according to the vertex coordinates of the maximum projection area and the target projection coordinates, an edge vector being a vector between two adjacent vertex coordinates of the maximum projection area, and a connection vector being a vector between the starting coordinates of each edge vector and the target projection coordinates;
    perform vector cross products of the edge vectors with the respective connection vectors to obtain cross product vectors;
    if the values of the cross product vectors have the same sign or are zero, determine that the maximum projection area contains the target projection coordinates;
    if the values of the cross product vectors have different signs, determine that the maximum projection area does not contain the target projection coordinates.
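For illustration, a cross-product inclusion test of this kind, assuming the maximum projection area is a convex quadrilateral with its vertices listed in order; a sketch, not the claimed implementation.

```python
def contains_point(polygon, point):
    """Convex-polygon inclusion test via cross products: for each edge vector
    and the connection vector from the edge start to the point, the z components
    of the cross products must all share a sign (or be zero)."""
    crosses = []
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        edge = (x2 - x1, y2 - y1)              # edge vector
        link = (point[0] - x1, point[1] - y1)  # connection vector
        crosses.append(edge[0] * link[1] - edge[1] * link[0])
    return all(c >= 0 for c in crosses) or all(c <= 0 for c in crosses)

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contains_point(square, (5, 5)))    # True: inside the maximum projection area
print(contains_point(square, (12, 5)))   # False: outside
```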
  29. The projection device according to claim 21, wherein the controller is further configured to:
    in response to the image movement instruction, detect the movement direction in the image movement instruction;
    if opposite movement directions exist simultaneously in the image movement instruction, generate second prompt information and control the light emitting component to project the second prompt information.
  30. A projection image processing method, applied to a projection device, the projection device including a light emitting component and a controller; the projection image processing method comprising:
    obtaining an image movement instruction input by a user, the image movement instruction including a movement direction and a movement distance;
    in response to the image movement instruction, obtaining the vertex coordinates of the current light exit surface and the rotation matrix between the projection surface and the light exit surface;
    calculating the current projection area according to the rotation matrix and the vertex coordinates of the current light exit surface;
    calculating a vertex movement distance according to the vertex coordinates of the current projection area, the movement direction, and the movement distance;
    calculating target projection coordinates according to the vertex movement distance and the vertex coordinates of the current projection area;
    based on the rotation matrix, converting the target projection coordinates to the light exit surface to obtain light exit projection coordinates, and controlling the light emitting component to project the projection content onto the projection surface according to the light exit projection coordinates.
PCT/CN2023/113259 2022-09-29 2023-08-16 投影设备及投影画面处理方法 WO2024066776A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202211203032.2A CN115604445A (zh) 2022-09-29 2022-09-29 Projection device and projection obstacle avoidance method
CN202211195978.9A CN115623181A (zh) 2022-09-29 2022-09-29 Projection device and projection image movement method
CN202211195978.9 2022-09-29
CN202211212932.3A CN115883803A (zh) 2022-09-29 2022-09-29 Projection device and projection image correction method
CN202211212932.3 2022-09-29
CN202211203032.2 2022-09-29

Publications (2)

Publication Number Publication Date
WO2024066776A1 (zh) 2024-04-04
WO2024066776A9 (zh) 2024-05-10

Family

ID=90475984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113259 WO2024066776A1 (zh) 2022-09-29 2023-08-16 投影设备及投影画面处理方法

Country Status (1)

Country Link
WO (1) WO2024066776A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118247776B (zh) * 2024-05-24 2024-07-23 南昌江铃集团胜维德赫华翔汽车镜有限公司 Automobile blind spot display and recognition method and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019203002A1 (ja) * 2018-04-17 2019-10-24 ソニー株式会社 Information processing apparatus and method
CN110336987B (zh) * 2019-04-03 2021-10-08 北京小鸟听听科技有限公司 Projector distortion correction method and apparatus, and projector
CN112995625B (zh) * 2021-02-23 2022-10-11 峰米(北京)科技有限公司 Keystone correction method and apparatus for a projector
CN114466173A (zh) * 2021-11-16 2022-05-10 海信视像科技股份有限公司 Projection device and projection display control method for automatically projecting into a screen area
CN115604445A (zh) * 2022-09-29 2023-01-13 海信视像科技股份有限公司 Projection device and projection obstacle avoidance method
CN115623181A (zh) * 2022-09-29 2023-01-17 海信视像科技股份有限公司 Projection device and projection image movement method
CN115883803A (zh) * 2022-09-29 2023-03-31 海信视像科技股份有限公司 Projection device and projection image correction method

Also Published As

Publication number Publication date
WO2024066776A1 (zh) 2024-04-04

Similar Documents

Publication Publication Date Title
CN115022606B (zh) Projection device and obstacle avoidance projection method
JP5612774B2 (ja) Tracking frame initial position setting device and operation control method thereof
WO2023087947A1 (zh) Projection device and correction method
WO2024066776A9 (zh) Projection device and projection image processing method
US8786722B2 (en) Composition control device, imaging system, composition control method, and program
WO2024174721A1 (zh) Projection device and method for adjusting projection image size
CN115002432B (zh) Projection device and obstacle avoidance projection method
CN115883803A (zh) Projection device and projection image correction method
WO2023072030A1 (zh) Lens autofocus method and apparatus, electronic device, and computer-readable storage medium
US20210306604A1 (en) Projector controlling method, projector, and projection system
CN115002433A (zh) Projection device and ROI feature region selection method
US20240305754A1 (en) Projection device and obstacle avoidance projection method
CN115604445A (zh) Projection device and projection obstacle avoidance method
CN116055696A (zh) Projection device and projection method
CN114866751B (zh) Projection device and trigger correction method
WO2024055793A1 (zh) Projection device and projection image quality adjustment method
CN107430841B (zh) Information processing device, information processing method, program, and image display system
CN117097872A (zh) Automatic keystone correction system and method for a projection device
CN115623181A (zh) Projection device and projection image movement method
WO2023087960A1 (zh) Projection device and focusing method
CN114760454A (zh) Projection device and trigger correction method
CN114928728B (zh) Projection device and foreign object detection method
CN115474032B (zh) Projection interaction method, projection device, and storage medium
WO2023087951A1 (zh) Projection device and display control method for projected image
CN114885142B (zh) Projection device and method for adjusting projection brightness

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23870022

Country of ref document: EP

Kind code of ref document: A1