CN115174878B - Projection picture correction method, apparatus and storage medium - Google Patents

Projection picture correction method, apparatus and storage medium

Info

Publication number
CN115174878B
CN115174878B (application CN202210840156A)
Authority
CN
China
Prior art keywords
coordinate system
image
projection
vertex
target
Prior art date
Legal status
Active
Application number
CN202210840156.5A
Other languages
Chinese (zh)
Other versions
CN115174878A (en)
Inventor
王豪庆
曹嘉航
梁翔宇
Current Assignee
Formovie Chongqing Innovative Technology Co Ltd
Original Assignee
Formovie Chongqing Innovative Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Formovie Chongqing Innovative Technology Co Ltd
Priority to CN202210840156.5A
Publication of CN115174878A
Application granted
Publication of CN115174878B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H04N 9/3185 Geometric adjustment, e.g. keystone or convergence

Abstract

The application relates to a projection picture correction method, a projection picture correction device, a computer device and a storage medium. The method comprises the following steps: acquiring an initial image shot by a mobile device for a projection curtain and gesture data when the mobile device shoots the initial image, wherein a first projection picture which is scaled and projected by a projector according to a preset scaling rate is displayed in the projection curtain of the initial image; if the gesture data does not accord with the preset gesture data, controlling the initial image to rotate to obtain a target image; determining first vertex coordinates of the projection picture in the target image under an image coordinate system and second vertex coordinates of the projection curtain under the image coordinate system; performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and the preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system; and sending the target vertex coordinates to the projector so that the projector corrects the projected second projection picture to align it with the projection curtain. The method improves the flexibility of using the projector.

Description

Projection picture correction method, apparatus and storage medium
Technical Field
The present invention relates to the field of projection technologies, and in particular, to a method and apparatus for correcting a projection image, and a storage medium.
Background
A projector is a device that projects images or video onto a projection screen and can be connected to a computer, a game machine, a memory, etc. through different interfaces to play a corresponding video signal. When the projector is used, the projection picture and the projection curtain are required to be aligned so as to achieve better watching effect.
With the continuous popularization of projectors, some projectors currently on the market still require the image to be corrected manually, while others provide an automatic image correction function but only if the projector itself carries the hardware configuration needed for automatic alignment, so such projectors are not flexible to use.
Disclosure of Invention
In view of the above, it is desirable to provide a projection screen correction method, apparatus, and storage medium that can improve the flexibility of use.
In a first aspect, the present application provides a projection screen correction method. The method comprises the following steps:
acquiring an initial image shot by a mobile device for a projection curtain and gesture data when the mobile device shoots the initial image; a first projection picture, obtained by the projector scaling the original projection picture at a preset scaling rate and projecting it, is displayed on the projection curtain in the initial image;
If the gesture data do not accord with the preset gesture data, controlling the initial image to rotate so as to obtain a target image;
determining a first vertex coordinate of the first projection picture in the target image under an image coordinate system and a second vertex coordinate of a projection curtain under the image coordinate system;
performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and the preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system;
the target vertex coordinates are sent to the projector, such that the projector corrects the projected second projection picture to align it with the projection curtain based on the target vertex coordinates.
In one embodiment, the gesture data is generated by a gesture sensor in the mobile device; if the gesture data does not match the preset gesture data, the controlling the rotation of the initial image to obtain the target image includes:
and if the gesture data does not accord with the preset gesture data, controlling the initial image to rotate by a multiple of 90 degrees so as to obtain a target image.
In one embodiment, the first projection screen is a solid color projection screen; the determining the first vertex coordinates of the first projection picture in the target image under the image coordinate system and the second vertex coordinates of the projection curtain under the image coordinate system comprises:
Performing contour edge detection on the first projection picture and the projection curtain in the target image to obtain a contour image; the contour image comprises a picture contour edge of the first projection picture and a curtain contour edge of the projection curtain;
and determining the vertices of the picture contour edge in the contour image to obtain first vertex coordinates of the first projection picture under an image coordinate system, and determining the vertices of the curtain contour edge in the contour image to obtain second vertex coordinates of the projection curtain under the image coordinate system.
In one embodiment, the first vertex coordinates are a plurality of; the first vertex coordinates are coordinates of vertices at different orientations of the contour edge of the picture; the determining the vertex of the contour edge of the picture in the contour image to obtain the first vertex coordinate of the projection picture under the image coordinate system comprises:
when the first vertex coordinates at each azimuth are positioned, detecting pixel values along a first direction towards the azimuth by taking the central point of the contour image as a starting point until a first pixel point with an edge pixel value is detected and is used as a first reference boundary point of the contour edge of the picture; the first direction has two corresponding sub-directions in an image coordinate system;
For each sub-direction, if the pixel value of the adjacent pixel point of the first reference boundary point in the sub-direction is an edge pixel value, moving the position of the first reference boundary point towards the direction of the adjacent pixel point;
and stopping movement when the pixel values of the first reference boundary point at the adjacent pixel points in the two sub directions are non-edge pixel values, and taking the coordinates of the first reference boundary point in the image coordinate system when the movement is stopped as the first vertex coordinates at the azimuth.
In one embodiment, the second vertex coordinates are a plurality of; the second vertex coordinates are coordinates of a plurality of vertices at different orientations of the curtain contour edge;
the determining the vertex of the contour edge of the curtain in the contour image to obtain the second vertex coordinate of the projection curtain under the image coordinate system comprises the following steps:
when the second vertex coordinates at each azimuth are positioned, the vertex at the first vertex coordinates at the same azimuth of the contour edge of the picture is taken as a starting point, and the pixel value of the pixel point is detected along the second direction towards the azimuth until the first pixel point with the edge pixel value is detected and is taken as a second reference boundary point on the contour edge of the curtain; the second direction has two corresponding sub-directions in an image coordinate system;
For each sub-direction, if the pixel value of the adjacent pixel point of the second reference boundary point in the sub-direction is an edge pixel value, moving the position of the second reference boundary point towards the direction of the adjacent pixel point;
and stopping movement when the pixel values of the second reference boundary points in the two sub-directions are non-edge pixel values, and taking the coordinates of the second reference boundary points in the image coordinate system when the movement is stopped as second vertex coordinates at the azimuth.
In one embodiment, the target coordinate system is a projector coordinate system, and the performing coordinate conversion calculation based on the first vertex coordinate, the second vertex coordinate and the preset scaling factor to obtain the target vertex coordinate of the projection curtain under the target coordinate system includes:
carrying out coordinate relation calculation on the second vertex coordinates and the vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain to obtain the coordinate conversion relation between the image coordinate system and the plane coordinate system;
according to the coordinate conversion relation between the image coordinate system and the plane coordinate system, carrying out coordinate conversion calculation on the first vertex coordinates to obtain vertex coordinates of the scaled first projection picture under the plane coordinate system;
carrying out inverse scaling calculation on the vertex coordinates of the scaled picture based on the preset scaling rate to obtain vertex coordinates of the original projection picture under the plane coordinate system;
performing coordinate conversion relation calculation based on the vertex coordinates of the original projection picture and the vertex coordinates of the original projection picture under a projector coordinate system to obtain a target coordinate conversion relation of the plane coordinate system and the projector coordinate system;
and carrying out coordinate conversion calculation on the vertex coordinates of the projection curtain under the plane coordinate system according to the target coordinate conversion relation to obtain target vertex coordinates of the projection curtain under the projector coordinate system.
In one embodiment, the target coordinate system is a projection curtain plane coordinate system; the performing coordinate conversion calculation based on the first vertex coordinate, the second vertex coordinate and the preset scaling factor, and obtaining the target vertex coordinate of the projection curtain in the target coordinate system includes:
obtaining a coordinate conversion relation between the image coordinate system and the projection curtain plane coordinate system according to the first vertex coordinate in the image coordinate system, the vertex coordinate of the original projection picture in the projection curtain plane coordinate system and the preset scaling factor;
And obtaining the target vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain according to the second vertex coordinates under the image coordinate system and the coordinate conversion relation between the image coordinate system and the plane coordinate system of the projection curtain.
In one embodiment, the projector and the projection screen are disposed on the same placement surface.
In a second aspect, the present application further provides a projection screen correction apparatus. The device comprises:
the image processing module is used for acquiring an initial image shot by the mobile device for the projection curtain and gesture data when the mobile device shoots the initial image; a first projection picture, scaled and projected by the projector at a preset scaling rate, is displayed in the projection curtain of the initial image; if the gesture data does not accord with the preset gesture data, controlling the initial image to rotate so as to obtain a target image;
the coordinate calculation module is used for determining a first vertex coordinate of the projection picture in the target image under an image coordinate system and a second vertex coordinate of the projection curtain under the image coordinate system; performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and the preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system;
And the correction module is used for sending the target vertex coordinates to the projector, so that the projector corrects, based on the target vertex coordinates, the projected second projection picture to align it with the projection curtain.
In a third aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program that is executed by a processor to perform the steps of the projection screen correction method described above.
The projection picture correction method, the projection picture correction device and the storage medium are used for acquiring an initial image shot by mobile equipment for a projection curtain and gesture data when the mobile equipment shoots the initial image; and a first projection picture which is subjected to scaling projection by the projector is displayed in the projection curtain of the initial image. And if the gesture data does not accord with the preset gesture data, controlling the initial image to rotate so as to obtain a target image. Therefore, the shooting image shot in the preset shooting posture can be obtained without installing a camera on the projector, the hardware cost is reduced, shooting can be performed in any posture, and therefore complexity is reduced, and flexibility is improved. Determining a first vertex coordinate of the projection picture in the target image under an image coordinate system and a second vertex coordinate of a projection curtain under the image coordinate system; and carrying out coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and a preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system. Therefore, the processing procedure of obtaining the vertex coordinates is consistent because the target image accords with the preset shooting gesture, the complexity of coordinate correlation calculation is reduced, and the calculation efficiency is improved. The target vertex coordinates are sent to the projector, such that the projector corrects to align the projected second projected picture with the projection screen based on the target vertex coordinates. Thus, under the condition that the projector has no hardware requirement and no requirement on the shooting angle of the initial image, the automatic picture alignment is completed, thereby improving the flexibility.
Drawings
FIG. 1 is a diagram of an application environment of a projection screen correction method according to an embodiment;
FIG. 2 is a flow chart of a method for correcting a projection screen according to an embodiment;
FIG. 3 is an image diagram of a projection screen correction method according to an embodiment;
FIG. 4 is an image diagram of a projection screen correction method according to an embodiment;
FIG. 5 is a schematic diagram of a projection screen calibration method according to an embodiment;
FIG. 6 is a schematic diagram of a projection screen calibration method according to an embodiment;
FIG. 7 is a block diagram showing a configuration of a projection screen correcting apparatus according to an embodiment;
fig. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The projection picture correction method provided by the embodiment of the application can be applied to an application environment shown in fig. 1, wherein the mobile device 110 communicates with the server 120 and the projector 130 over a network. The data storage system may store data that the server 120 needs to process; it may be integrated on the server 120 or may be located on a cloud or other network server. The mobile device 110 is a device capable of taking photographs, and may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers and portable wearable devices. The server 120 may be implemented by a stand-alone server or a server cluster formed by a plurality of servers.
In one embodiment, server 120 may also be replaced by a terminal, which is not limited in this regard.
The mobile device 110 shoots an initial image for a projection curtain, and the mobile device 110 sends the initial image and gesture data of the mobile device when shooting the initial image to the server 120. The server 120 acquires an initial image shot by the mobile device for a projection curtain and gesture data when the mobile device shoots the initial image; the first projection picture which is scaled and projected by the projector at a preset scaling rate is displayed in the projection curtain of the initial image.
The server 120 determines whether to rotate the initial image based on the gesture data when the mobile device shoots the initial image, and if the gesture data does not match the preset gesture data, controls the rotation of the initial image to obtain the target image. The server 120 determines a first vertex coordinate of the projection screen in the target image in the image coordinate system and a second vertex coordinate of the projection screen in the image coordinate system. The server 120 performs coordinate conversion calculation based on the first vertex coordinate, the second vertex coordinate and a preset scaling rate, so as to obtain a target vertex coordinate of the projection curtain under a target coordinate system. The server 120 transmits the target vertex coordinates to the projector 130 such that the projector 130 corrects to align the projected second projection screen with the projection curtain based on the target vertex coordinates.
In one embodiment, as shown in fig. 2, a projection screen correction method is provided, where the method is applied to a server for illustration, it is understood that the method may also be applied to a terminal, and may also be applied to a system including the terminal and the server, and implemented through interaction between the terminal and the server. In this embodiment, the method includes the steps of:
s202, acquiring an initial image shot by a mobile device for a projection curtain and gesture data when the mobile device shoots the initial image; a first projection picture which is scaled and projected by a projector according to a preset scaling rate is displayed in a projection curtain of the initial image; and if the gesture data does not accord with the preset gesture data, controlling the rotation of the initial image to obtain a target graph.
The mobile device is mobile computer equipment with shooting functions such as a mobile phone terminal and a tablet personal computer. The gesture data is generated by a gesture sensor of the mobile device, which may be any one of a gyro sensor, an acceleration sensor, a geomagnetic sensor, and the like.
Specifically, the projector projects the first projection picture, scaled down at the preset scaling rate, into the projection curtain. After shooting the projection curtain, the mobile device transmits the gesture data during shooting and the initial image obtained through shooting to a server. The server acquires the initial image sent by the mobile device and the gesture data when the mobile device shoots the initial image. The first projection picture which is scaled and projected by the projector according to the preset scaling rate is displayed in the projection curtain of the initial image. And if the gesture data does not accord with the preset gesture data, the server controls the initial image to rotate so as to obtain a target image.
In one embodiment, the gesture data of the mobile device when the mobile device captures the initial image is angle data acquired by a gesture sensor of the mobile device, and the server may rotate the target image based on the angle data.
In one embodiment, the preset scaling rate may take a value of 50%, 60%, 70%, or the like, so as to ensure that the first projection screen is within the projection curtain. It is understood that the preset scaling rate may be other values, which are not limited thereto.
In one embodiment, the server invokes the Opencv computer vision software library (a cross-platform computer vision and machine learning software library released under the BSD (open source) license) for picture preprocessing. The initial image is first converted into a grayscale image, a Gaussian filter function in Opencv is then called to reduce noise of the initial image to obtain a preprocessed initial image, and the preprocessed initial image is rotated based on the gesture data to obtain a target image conforming to the preset shooting direction.
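As an illustration of this preprocessing step, a minimal Python/OpenCV sketch is given below; reading the photo with cv2.imread and the 5x5 kernel size are illustrative choices, not values fixed by the embodiment.

```python
import cv2

def preprocess(initial_image_path):
    """Convert the photographed initial image to grayscale and denoise it
    with a Gaussian filter before edge detection."""
    image = cv2.imread(initial_image_path)          # BGR photo of the projection curtain
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    return cv2.GaussianBlur(gray, (5, 5), 0)        # Gaussian noise reduction (kernel size is illustrative)
```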
S204, determining a first vertex coordinate of a projection picture in the target image under an image coordinate system and a second vertex coordinate of a projection curtain under the image coordinate system; and carrying out coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and a preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system.
The image coordinate system is a coordinate system corresponding to the target image. The first vertex coordinates are coordinates of vertices of the projection screen in the image coordinate system. The second vertex coordinates are coordinates of the vertices of the projection curtain in the image coordinate system. The target vertex coordinates are coordinates of the vertices of the projection curtain in the target coordinate system.
Specifically, the server establishes an image coordinate system of the target image, and determines a first vertex coordinate of a projection picture in the target image under the image coordinate system and a second vertex coordinate of the projection curtain under the image coordinate system. And the server performs coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and a preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system.
In one embodiment, the first projection picture is a solid color projection picture. The server may perform contour edge detection on the solid color first projection picture and the projection curtain to obtain a contour image retaining only the picture edges and the curtain edges, and determine the first vertex coordinates and the second vertex coordinates based on the contour image.
In one embodiment, the first vertex coordinates are a plurality. The first vertex coordinates are coordinates of vertices at different orientations of the contour edge of the picture. The server may take the center point of the target image as the center point of the image coordinate system and determine, for each azimuth, the pixel values of the adjacent pixel points of the reference boundary point in the two directions in the image coordinate system. When the pixel values of the adjacent pixel points are all non-edge pixel values, the boundary point is referred to as a first vertex coordinate.
In one embodiment, the second vertex coordinates are a plurality. The plurality of second vertex coordinates are coordinates of vertices at different orientations of the curtain contour edge. The server may determine, for each direction, a pixel value of two sub-directional neighboring pixel points in the image coordinate system with respect to the boundary point by using the center point of the target image as the center point of the image coordinate system and using, for each direction, the vertex at the co-directional first vertex coordinate of the contour edge of the picture as the start point. When the pixel values of the adjacent pixel points are all non-edge pixel values, the boundary point is referred to as a second vertex coordinate.
In one embodiment, the server may determine the corresponding target vertex coordinates based on the type of projector. Wherein the projector type includes at least one of a tele projector and a short-focus projector. A tele projector may be used for projection at a greater distance; short-focus projectors may be used for closer-distance projection.
In one embodiment, after obtaining the target coordinate conversion relationship between the plane coordinate system and the projector coordinate system, the server may perform coordinate conversion calculation on the vertex coordinates of the projection curtain in the plane coordinate system according to the target coordinate conversion relationship, to obtain the target vertex coordinates of the projection curtain in the target coordinate system.
In one embodiment, the server may obtain the vertex coordinates of the scaled frame in the planar coordinate system according to the first coordinate conversion relationship after obtaining the first coordinate conversion relationship of the image coordinate system and the planar coordinate system. The server may obtain vertex coordinates of the original projected picture in the plane coordinate system based on a preset scaling rate. The server may obtain the target coordinate conversion relationship of the plane coordinate system and the projector coordinate system based on the vertex coordinates of the original projection screen and the vertex coordinates of the original projection screen in the projector coordinate system.
S206, sending the target vertex coordinates to the projector, so that the projector corrects the projected second projection picture to align it with the projection curtain based on the target vertex coordinates.
Specifically, the server sends the target vertex coordinates to the projector. After receiving the target vertex coordinates, the projector corrects the projected second projection picture to align with the projection curtain based on the target vertex coordinates.
In one embodiment, the preset resolution includes a preset vertical resolution and a preset horizontal resolution, and the length and width of the rectangle formed by the four vertex coordinates of the original projection picture in the projector coordinate system are respectively the preset horizontal resolution and the preset vertical resolution. Similarly, the length and width of the rectangle formed by the vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain are respectively the preset horizontal resolution and the preset vertical resolution. For example, with a preset horizontal resolution of 1920 and a preset vertical resolution of 1080, the four vertex coordinates of the original projection picture in the projector coordinate system are (0, 0), (1920, 0), (0, 1080), (1920, 1080), respectively, and the four vertex coordinates of the projection curtain in the plane coordinate system are (0, 0), (1920, 0), (0, 1080), (1920, 1080), respectively.
In one embodiment, the step of performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and the preset scaling rate to obtain target vertex coordinates of the projection curtain in the target coordinate system is performed based on a preset resolution; the server sends the target vertex coordinates to the projector, so that the projector converts the target vertex coordinates according to the conversion relation between its own resolution and the preset resolution, and corrects the projected second projection picture to align it with the projection curtain based on the converted target vertex coordinates.
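A minimal sketch of this resolution conversion on the projector side, assuming a simple axis-by-axis linear rescaling (the embodiment only states that the conversion follows the relation between the projector's own resolution and the preset resolution):

```python
def convert_to_native_resolution(target_vertices, preset_res, native_res):
    """Rescale vertex coordinates computed at the preset resolution
    (e.g. 1920x1080) to the projector's own resolution.
    target_vertices: list of (x, y); preset_res, native_res: (width, height)."""
    sx = native_res[0] / preset_res[0]
    sy = native_res[1] / preset_res[1]
    return [(x * sx, y * sy) for x, y in target_vertices]

# Example: a projector with a native 3840x2160 panel doubles every coordinate.
print(convert_to_native_resolution([(1920, 1080)], (1920, 1080), (3840, 2160)))
```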
According to the projection picture correction method, initial images shot by the mobile equipment aiming at the projection curtain and gesture data when the mobile equipment shoots the initial images are obtained; the first projection picture which is scaled and projected by the projector is displayed in the projection curtain of the initial image. And if the gesture data does not accord with the preset gesture data, controlling the rotation of the initial image to obtain a target image. Therefore, the shooting image shot in the preset shooting posture can be obtained without installing a camera on the projector, the hardware cost is reduced, shooting can be performed in any posture, and therefore complexity is reduced, and flexibility is improved. Determining a first vertex coordinate of a projection picture in a target image under an image coordinate system and a second vertex coordinate of a projection curtain under the image coordinate system; and carrying out coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and a preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system. Therefore, the processing procedure of obtaining the vertex coordinates is consistent because the target image accords with the preset shooting gesture, the complexity of coordinate correlation calculation is reduced, and the calculation efficiency is improved. The target vertex coordinates are sent to the projector such that the projector corrects to align the projected second projection screen with the projection curtain based on the target vertex coordinates. Thus, under the condition that the projector has no hardware requirement and no requirement on the shooting angle of the initial image, the automatic picture alignment is completed, thereby improving the flexibility.
In one embodiment, the gesture data is collected by a gesture sensor in the mobile device; if the gesture data does not match the preset gesture data, controlling the rotation of the initial image to obtain the target image includes: and if the gesture data does not accord with the preset gesture data, controlling the initial image to rotate by a multiple of 90 degrees so as to obtain a target image.
Specifically, if the gesture data does not match the preset gesture data, the server may control the initial image to rotate by a multiple of 90 degrees to obtain the target image. For example, as shown in fig. 3, when the projection curtain is photographed by using the mobile device, a photographing posture of a horizontal screen with a camera to the right, a photographing posture of a horizontal screen with a camera to the left, or a vertical screen posture may be adopted for photographing, and initial images obtained by photographing are shown in fig. 3 (a), 3 (b), and 3 (c), respectively. Assuming that the preset photographing posture is the portrait posture, the initial images in fig. 3 (a) and 3 (b) are not photographed in conformity with the preset photographing posture. Therefore, it is necessary to rotate the initial image so as to obtain a target image conforming to a preset shooting attitude, i.e., an image shot in a vertical screen attitude as shown in fig. 3 (c).
It will be understood that, under different shooting postures, the positions of the vertices of the projection curtain and the projection picture in the initial image are inconsistent. For example, the posture of the mobile device when shooting the initial image in fig. 3 (c) is taken as the preset posture, and the projection curtain is then at the normal viewing angle of a viewer, so the top-left vertex of the projection curtain is located at the top-left corner of the initial image. When the mobile device shoots in other shooting postures, the initial images in fig. 3 (a) and fig. 3 (b) are obtained, and the top-left vertex of the projection curtain is no longer located at the top-left corner of the initial image. In order to obtain the correction indication information, the positions of the vertices of the projection curtain and the projection picture need to be identified, so the calculation related to the vertex coordinates has a certain complexity, which in turn increases the complexity of calculating the correction indication information.
In this embodiment, if the actual shooting attitude does not match the preset shooting attitude, the initial image is controlled to rotate to obtain the target image, and specifically, the initial image may be controlled to rotate by a multiple of 90 degrees to obtain the target image. For example, the initial image in fig. 3 (a) is rotated clockwise by 90 degrees to obtain a target image; the initial image in fig. 3 (b) is rotated counterclockwise by 90 degrees to obtain a target image. Through rotating the initial image into the target image, a foundation is laid for unified processing of subsequent vertex coordinate related computation, so that the processing process of obtaining the target vertex coordinates by utilizing the target image conforming to the preset shooting posture can be unified, different processing flows are not required to be executed for different actual shooting postures, the complexity of coordinate related computation is reduced, and the computation efficiency is improved. In addition, the camera can shoot in any gesture, and the server can automatically adjust and rotate the initial image to obtain a target image conforming to preset gesture data, so that the complexity is reduced, and the flexibility is improved.
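A sketch of this rotation step in Python/OpenCV; the orientation labels derived from the gesture (gyroscope) data are illustrative, since the embodiment only requires that the rotation be a multiple of 90 degrees.

```python
import cv2

def rotate_to_preset_pose(initial_image, orientation):
    """Rotate the initial image by a multiple of 90 degrees so that it matches
    the preset portrait shooting pose. `orientation` is an illustrative label
    derived from the gesture sensor data."""
    if orientation == "landscape_camera_right":    # fig. 3 (a): rotate 90 degrees clockwise
        return cv2.rotate(initial_image, cv2.ROTATE_90_CLOCKWISE)
    if orientation == "landscape_camera_left":     # fig. 3 (b): rotate 90 degrees counterclockwise
        return cv2.rotate(initial_image, cv2.ROTATE_90_COUNTERCLOCKWISE)
    if orientation == "portrait_upside_down":
        return cv2.rotate(initial_image, cv2.ROTATE_180)
    return initial_image                            # already in the preset pose
```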
In one embodiment, the first projection screen is a solid color projection screen; determining a first vertex coordinate of a first projection picture in the target image under an image coordinate system and a second vertex coordinate of a projection curtain under the image coordinate system comprises: performing contour edge detection on a pure-color projection picture and a projection curtain in a target image to obtain a contour image; the contour image comprises a picture contour edge of a projection picture and a curtain contour edge of a projection curtain; and determining the vertex of the contour edge of the picture in the contour image to obtain a first vertex coordinate of the projection picture under the image coordinate system, and determining the vertex of the contour edge of the curtain in the contour image to obtain a second vertex coordinate of the projection curtain under the image coordinate system.
Specifically, the first projection picture is a pure-color projection picture, and contour edge detection is performed on the pure-color projection picture and the projection curtain in the target image to obtain a contour image. It will be appreciated that since the projection screen is solid, the interior of the projection screen does not have any contours, and that after edge detection is performed, the contour image only retains the edges of the projection screen and the edges of the projection screen. As shown in fig. 4, the contour image includes a frame contour edge of the projection frame and a curtain contour edge of the projection curtain. Since the pixel values of the black part are 0 and the pixel values of the white part are 255, the server can determine the vertex of the contour edge of the picture in the contour image according to the pixel values of the pixel points to obtain the first vertex coordinate of the projection picture under the image coordinate system, and determine the vertex of the contour edge of the curtain in the contour image to obtain the second vertex coordinate of the projection curtain under the image coordinate system.
In this embodiment, since the first projection screen is a solid-color projection screen, after the contour edge detection is performed on the solid-color projection screen and the projection curtain in the target image, the image retaining the edge of the projection screen and the edge of the projection curtain is obtained, so that the related vertex coordinates can be determined, and the processing procedure is simplified.
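A sketch of the contour edge detection with OpenCV's Canny function; the thresholds are illustrative. Note that cv2.Canny marks edge pixels as 255 on a 0 background, whereas the worked example later in this description treats edge pixels as 0 and in-picture pixels as 255 (an inverted figure), so the vertex-search sketches below take the edge value as a parameter.

```python
import cv2

def contour_image(preprocessed_gray):
    """Run Canny edge detection on the preprocessed (grayscale, denoised)
    target image. Because the first projection picture is a solid colour,
    only the picture contour edge and the curtain contour edge remain."""
    edges = cv2.Canny(preprocessed_gray, 50, 150)   # thresholds are illustrative
    # cv2.bitwise_not(edges) would give the inverted convention used in the
    # worked example (edge pixels 0, non-edge pixels 255).
    return edges
```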
In one embodiment, the first vertex coordinates are a plurality; the first vertex coordinates are coordinates of vertices at different orientations of the contour edge of the picture; determining the vertex of the contour edge of the picture in the contour image to obtain a first vertex coordinate of the projection picture in an image coordinate system comprises: when the first vertex coordinates at each azimuth are positioned, detecting pixel values along a first direction towards the azimuth by taking the central point of the contour graph as a starting point until a first pixel point with an edge pixel value is detected and is used as a first reference boundary point of the contour edge of the picture; the first direction has two corresponding sub-directions in an image coordinate system; for each sub-direction, if the pixel value of the adjacent pixel point of the first reference boundary point in the sub-direction is an edge pixel value, moving the position of the first reference boundary point towards the direction of the adjacent pixel point; and stopping movement when the pixel values of the adjacent pixel points of the first reference boundary point in the two sub directions are non-edge pixel values, and taking the coordinates of the first reference boundary point in the image coordinate system when the movement is stopped as the first vertex coordinates at the azimuth.
Wherein, the edge pixel value refers to the pixel value of the pixel point on the edge; the non-edge pixel value refers to a pixel value of a pixel point on a non-edge. The adjacent pixels in the division direction may be plural. A corresponding one of the first directions may be synthesized from the two sub-directions in the coordinate system. The first vertex coordinates are coordinates of vertices at different orientations of the contour edge of the picture, such as coordinates of vertices above left, above right, below left, and below right.
Specifically, there are a plurality of first vertex coordinates of the contour image. The first vertex coordinates are coordinates of vertices at different orientations of the contour edge of the picture. When the server locates the first vertex coordinates at each azimuth, the server detects pixel values along a first direction towards the azimuth by taking the central point of the contour image as a starting point until a first pixel point with an edge pixel value is detected and is used as a first reference boundary point of the contour edge of the picture. The first direction has two corresponding sub-directions on the image coordinate system. The server moves the position of the first reference boundary point towards the direction of the adjacent pixel point if the pixel value of the adjacent pixel point of the first reference boundary point in the sub direction is an edge pixel value aiming at each sub direction; and stopping movement when the pixel values of the adjacent pixel points of the first reference boundary point in the two sub directions are non-edge pixel values, and taking the coordinates of the first reference boundary point in the image coordinate system when the movement is stopped as the first vertex coordinates at the azimuth. For example, as shown in fig. 5, taking the upper left direction as an example, the server may perform pixel value detection along the direction toward the line corresponding to S501, starting from the center point, until the first pixel point with the edge pixel value is detected, and take the pixel point as the first reference boundary point of the frame contour edge. Wherein the first direction has two corresponding directions (left direction and upper direction) on the image coordinate system. The server moves the position of the first reference boundary point towards the direction of the adjacent pixel point if the pixel value of the adjacent pixel point of the first reference boundary point in the sub direction is an edge pixel value aiming at each sub direction; and stopping moving under the condition that the pixel values of the adjacent pixel points of the first reference boundary point in the two directions are non-edge pixel values, namely, the pixel values of the adjacent pixel points of the pixel point in the two directions are non-edge pixel values. And similarly operating the upper right, the lower right and the lower left, and finding out the first vertex coordinates of the upper right, the lower right and the lower left of the scaled projection picture under the image coordinate system.
In this embodiment, the server may take the center point of the target image as the center point of the image coordinate system, and determine, for each azimuth, the pixel values of the adjacent pixel points of the reference boundary point in the two directions in the image coordinate system. When the pixel values of the adjacent pixel points are all non-edge pixel values, the boundary point is referred to as a first vertex coordinate. In this way, accurate first vertex coordinates can be obtained.
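A minimal sketch of this first-vertex search; the edge_value parameter is an assumption (255 for raw Canny output, 0 for the inverted convention used in the worked example), and the azimuth labels are illustrative.

```python
def find_picture_vertex(contour, azimuth, edge_value=255):
    """Locate the first vertex of the scaled projection picture at one azimuth.
    contour: binary edge image (H x W numpy array);
    azimuth: 'top_left', 'top_right', 'bottom_left' or 'bottom_right'."""
    h, w = contour.shape
    dx, dy = {"top_left": (-1, -1), "top_right": (1, -1),
              "bottom_left": (-1, 1), "bottom_right": (1, 1)}[azimuth]
    # 1. Walk from the image centre towards the azimuth until the first edge
    #    pixel is met: the first reference boundary point of the picture edge.
    x, y = w // 2, h // 2
    while 0 <= x < w and 0 <= y < h and contour[y, x] != edge_value:
        x, y = x + dx, y + dy
    # 2. Slide along the picture contour edge in the two sub-directions while the
    #    neighbouring pixel in that sub-direction is still an edge pixel; stop when
    #    both neighbours are non-edge pixels.
    moved = True
    while moved:
        moved = False
        if 0 <= x + dx < w and contour[y, x + dx] == edge_value:
            x, moved = x + dx, True
        if 0 <= y + dy < h and contour[y + dy, x] == edge_value:
            y, moved = y + dy, True
    return (x, y)   # first vertex coordinate at this azimuth, in image coordinates
```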
In one embodiment, the second vertex coordinates are a plurality; the second vertex coordinates are coordinates of a plurality of vertices at different positions of the edge of the curtain contour; determining vertices of the curtain contour edge to obtain second vertex coordinates of the projection curtain in the image coordinate system comprises: when the second vertex coordinates at each azimuth are positioned, the vertex at the first vertex coordinates at the same azimuth of the contour edge of the picture is taken as a starting point, and the pixel value of the pixel point is detected along the second direction towards the azimuth until the first pixel point with the edge pixel value is detected and is taken as a second reference boundary point on the contour edge of the curtain; the second direction has two corresponding sub-directions on the image coordinate system; for each sub-direction, if the pixel value of the adjacent pixel point of the second reference boundary point in the sub-direction is an edge pixel value, moving the position of the second reference boundary point towards the direction of the adjacent pixel point; and stopping movement when the pixel values of the adjacent pixel points of the second reference boundary point in the two sub directions are non-edge pixel values, and taking the coordinates of the second reference boundary point in the image coordinate system when the movement is stopped as second vertex coordinates at the azimuth.
Wherein, a corresponding second direction can be synthesized by two sub-directions in the coordinate system. The plurality of second vertex coordinates are coordinates of a plurality of vertices at different orientations of the curtain contour edge, such as coordinates of vertices above left, above right, below left, and below right.
Specifically, the second vertex coordinates have a plurality of. The plurality of second vertex coordinates are coordinates of a plurality of vertices at different orientations of the curtain contour edge. And when the second vertex coordinates at each azimuth are positioned, the server detects the pixel value of the pixel point along the second direction towards the azimuth by taking the vertex at the first vertex coordinates at the same azimuth of the contour edge of the picture as a starting point until the first pixel point with the edge pixel value is detected and is used as a second reference boundary point on the contour edge of the curtain. The second direction has two corresponding sub-directions on the image coordinate system. The server moves the position of the second reference boundary point toward the direction of the adjacent pixel point if the pixel value of the adjacent pixel point of the second reference boundary point in the direction of the division is the edge pixel value for each division. And stopping movement when the pixel values of the adjacent pixel points of the second reference boundary point in the two sub directions are non-edge pixel values, and taking the coordinates of the second reference boundary point in the image coordinate system when the movement is stopped as second vertex coordinates at the azimuth. For example, as shown in fig. 5, the server specifically performs the steps of:
1. Starting from the center of the target image, obtaining a boundary point of an upper left picture;
2. since the non-edge pixel value (point pixel value within the picture) is 255 and the edge pixel value is 0, the server can obtain the first vertex coordinates of the top-left vertex of the picture.
3. And the server detects the pixel value of the pixel point along the second direction towards the azimuth by taking the vertex at the upper left side of the picture, namely the vertex at the co-azimuth first vertex coordinate of the picture contour edge, as a starting point until the first pixel point with the edge pixel value is detected, namely the upper left projection curtain boundary point.
4. The server takes the upper left projection curtain boundary point as a second reference boundary point on the edge of the curtain contour. The server moves the position of the second reference boundary point in the direction of the adjacent pixel point if the pixel value of the adjacent pixel point in the second reference boundary point in the direction of the second direction is the edge pixel value for each of the directions (left and upper directions). And stopping moving when the pixel values of the adjacent pixel points of the second reference boundary point in the two sub directions are non-edge pixel values, and taking the second reference boundary point when the movement is stopped as the top left projection curtain vertex.
5. And similarly operating on the upper right, the lower left and the lower right, and finding out the second vertex coordinates of the upper right, the lower left and the lower right of the projection curtain under the image coordinate system.
In this embodiment, the server may determine the pixel values of two adjacent pixel points in the image coordinate system in the directions of two directions with reference to the boundary point, with the vertex at the co-directional first vertex coordinate of the contour edge of the screen as the starting point. When the pixel values of the adjacent pixel points are all non-edge pixel values, the boundary point is referred to as a second vertex coordinate. In this way, accurate second vertex coordinates can be obtained.
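A matching sketch for the second-vertex search; it steps one pixel beyond the picture edge before scanning outwards, an implementation detail the embodiment does not spell out, and edge_value is the same assumption as above.

```python
def find_curtain_vertex(contour, first_vertex, azimuth, edge_value=255):
    """Locate the second vertex (curtain corner) at one azimuth, starting from
    the picture's first vertex at the same azimuth and searching outwards."""
    h, w = contour.shape
    dx, dy = {"top_left": (-1, -1), "top_right": (1, -1),
              "bottom_left": (-1, 1), "bottom_right": (1, 1)}[azimuth]
    # 1. Leave the picture contour, then walk outwards until the first edge pixel
    #    of the curtain contour is met: the second reference boundary point.
    x, y = first_vertex[0] + dx, first_vertex[1] + dy
    while 0 <= x < w and 0 <= y < h and contour[y, x] != edge_value:
        x, y = x + dx, y + dy
    # 2. Slide along the curtain contour edge in the two sub-directions until both
    #    neighbouring pixels are non-edge pixels.
    moved = True
    while moved:
        moved = False
        if 0 <= x + dx < w and contour[y, x + dx] == edge_value:
            x, moved = x + dx, True
        if 0 <= y + dy < h and contour[y + dy, x] == edge_value:
            y, moved = y + dy, True
    return (x, y)   # second vertex coordinate at this azimuth, in image coordinates
```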
In one embodiment, the target coordinate system is a projector coordinate system, and performing coordinate conversion calculation based on the first vertex coordinate, the second vertex coordinate and a preset scaling factor, to obtain the target vertex coordinate of the projection curtain under the target coordinate system includes: carrying out coordinate relation calculation on the second vertex coordinates and the vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain to obtain a coordinate conversion relation of the image coordinate system and the plane coordinate system; according to the coordinate conversion relation of the image coordinate system and the plane coordinate system, carrying out coordinate conversion calculation on the first vertex coordinate to obtain the vertex coordinate of the scaled picture under the plane coordinate system; carrying out non-scaling calculation on the vertex coordinates of the scaled picture based on a preset scaling rate to obtain the vertex coordinates of the original projection picture in a plane coordinate system; performing coordinate conversion relation calculation based on vertex coordinates of the original projection picture and vertex coordinates of the original projection picture under a projector coordinate system to obtain a target coordinate conversion relation of a plane coordinate system and the projector coordinate system; and carrying out coordinate conversion calculation on the vertex coordinates of the projection curtain under the plane coordinate system according to the target coordinate conversion relation to obtain the target vertex coordinates of the projection curtain under the projector coordinate system.
Specifically, the projector may be an ultra-short focal projector, and the target coordinate system is a projector coordinate system. The server can calculate the coordinate relation between the second vertex coordinates and the vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain, so as to obtain the coordinate conversion relation between the image coordinate system and the plane coordinate system. The server can perform coordinate conversion calculation on the first vertex coordinates according to the coordinate conversion relation of the image coordinate system and the plane coordinate system to obtain vertex coordinates of the scaled picture under the plane coordinate system. And the server performs non-scaling calculation on the vertex coordinates of the scaled picture based on a preset scaling rate to obtain the vertex coordinates of the original projection picture in the plane coordinate system. And carrying out coordinate conversion relation calculation based on the vertex coordinates of the original projection picture and the vertex coordinates of the original projection picture under the projector coordinate system to obtain the target coordinate conversion relation of the plane coordinate system and the projector coordinate system. According to the target coordinate conversion relation, carrying out coordinate conversion calculation on the vertex coordinates of the projection curtain under the plane coordinate system to obtain the target vertex coordinates of the projection curtain under the projector coordinate system
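A sketch of this chain of conversions for the ultra-short-focus case, in Python/OpenCV. The corner ordering and the 1920x1080 preset resolution follow the worked example below; the inverse-scaling step assumes the scaled picture shares its centre with the full picture, which is an illustrative reading, since the embodiment only says the calculation is based on the preset scaling rate.

```python
import cv2
import numpy as np

PRESET_W, PRESET_H = 1920, 1080
PLANE_CORNERS = np.float32([[0, 0], [PRESET_W, 0], [0, PRESET_H], [PRESET_W, PRESET_H]])
PROJECTOR_CORNERS = PLANE_CORNERS.copy()   # 100% picture corners in projector coordinates

def curtain_vertices_in_projector_coords(first_vtx_img, second_vtx_img, scale_rate=0.6):
    """Map the curtain corners into the projector coordinate system.
    first_vtx_img / second_vtx_img: the 4 picture / curtain corners in the image
    coordinate system, ordered like PLANE_CORNERS (TL, TR, BL, BR)."""
    first_vtx_img = np.float32(first_vtx_img)
    second_vtx_img = np.float32(second_vtx_img)
    # M1: image coordinate system -> curtain plane coordinate system.
    M1 = cv2.getPerspectiveTransform(second_vtx_img, PLANE_CORNERS)
    scaled_plane = cv2.perspectiveTransform(first_vtx_img.reshape(-1, 1, 2), M1).reshape(-1, 2)
    # Inverse scaling: expand the scaled picture about its centroid to approximate
    # the original (100%) picture in plane coordinates (illustrative assumption).
    centre = scaled_plane.mean(axis=0)
    full_plane = centre + (scaled_plane - centre) / scale_rate
    # M2: curtain plane coordinate system -> projector coordinate system.
    M2 = cv2.getPerspectiveTransform(np.float32(full_plane), PROJECTOR_CORNERS)
    # Target vertex coordinates: the curtain corners mapped into projector coordinates.
    return cv2.perspectiveTransform(PLANE_CORNERS.reshape(-1, 1, 2), M2).reshape(-1, 2)
```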
In one embodiment, the specific steps of calculating the coordinate relationship between the second vertex coordinates and the vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain to obtain the coordinate conversion relationship between the image coordinate system and the plane coordinate system are as follows:
The getPerspectiveTransform function of Opencv is called to calculate the transformation matrix from the image coordinate system to the plane coordinate system, that is, the coordinate conversion relation between the image coordinate system and the plane coordinate system. Description of the perspective transformation calculation process: the perspective transformation turns the projection onto a new view plane, and is also called projection mapping; as shown in fig. 6, ABC is transformed by the perspective transformation to obtain A'B'C'.
The formula of the perspective transformation is:

\[
\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix}
=
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
\]

The transformed coordinates x, y are respectively: x = x'/w', y = y'/w'.

Here the 3x3 matrix of \(a_{ij}\) is the perspective transformation matrix, in which the sub-block \(\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\) represents the linear transformation, \(\begin{bmatrix} a_{13} & a_{23} \end{bmatrix}^{T}\) the translation, and \(\begin{bmatrix} a_{31} & a_{32} \end{bmatrix}\) the perspective terms. That is, knowing the coordinates of several corresponding points before and after the projection change under the 2 different coordinate systems, the transformation matrix between the two coordinate systems can be calculated through the perspective transformation.
In one embodiment, the target coordinate conversion relation between the plane coordinate system and the projector coordinate system and the coordinate conversion relation between the image coordinate system and the plane coordinate system can both be calculated by using the getPerspectiveTransform function of Opencv.
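A minimal example of estimating and applying the perspective matrix with getPerspectiveTransform; the corner values are illustrative.

```python
import cv2
import numpy as np

# Four corner correspondences between two coordinate systems (illustrative values),
# e.g. the curtain corners in the image coordinate system versus the same corners
# in the 1920x1080 curtain plane coordinate system (order: TL, TR, BL, BR).
src = np.float32([[412, 233], [1507, 218], [398, 861], [1521, 874]])
dst = np.float32([[0, 0], [1920, 0], [0, 1080], [1920, 1080]])

M = cv2.getPerspectiveTransform(src, dst)   # the 3x3 perspective transformation matrix

# Applying the matrix performs x = x'/w', y = y'/w' internally:
pt = np.float32([[[412, 233]]])             # shape (1, 1, 2), as OpenCV expects
print(cv2.perspectiveTransform(pt, M))      # -> approximately [[[0, 0]]]

# The same division carried out by hand:
x_p, y_p, w_p = M @ np.array([412.0, 233.0, 1.0])
print(x_p / w_p, y_p / w_p)                 # -> approximately 0.0 0.0
```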
In one embodiment, the target coordinate system is a projection curtain plane coordinate system; performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and a preset scaling rate, and obtaining target vertex coordinates of the projection curtain under a target coordinate system comprises: obtaining a coordinate conversion relation between the image coordinate system and the projection curtain plane coordinate system according to the first vertex coordinate in the image coordinate system, the vertex coordinate of the original projection picture in the projection curtain plane coordinate system and the preset scaling rate; and obtaining the target vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain according to the coordinate conversion relation among the second vertex coordinates under the image coordinate system, the image coordinate system and the plane coordinate system of the projection curtain.
Specifically, if the projector is a tele projector, the target coordinate system may be a projection curtain plane coordinate system. The server can obtain a coordinate conversion relation between the image coordinate system and the projection curtain plane coordinate system according to the first vertex coordinate in the image coordinate system, the vertex coordinate of the original projection picture in the projection curtain plane coordinate system and the preset scaling rate. And the server obtains the target vertex coordinates of the projection curtain under the projection curtain plane coordinate system according to the coordinate conversion relation among the second vertex coordinates under the image coordinate system, the image coordinate system and the projection curtain plane coordinate system.
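A sketch of the long-focus case in Python/OpenCV, assuming (illustratively) that the original 100% picture spans the full preset rectangle in the curtain plane coordinate system and that the scaled picture is centred within it.

```python
import cv2
import numpy as np

PRESET_W, PRESET_H = 1920, 1080

def curtain_vertices_in_plane_coords(first_vtx_img, second_vtx_img, scale_rate=0.6):
    """Map the curtain corners into the projection curtain plane coordinate system.
    first_vtx_img / second_vtx_img: the 4 picture / curtain corners in the image
    coordinate system, ordered TL, TR, BL, BR."""
    # Corners of the scaled (e.g. 60%) picture in the plane coordinate system,
    # derived from the 100% picture corners and the preset scaling rate.
    full = np.float32([[0, 0], [PRESET_W, 0], [0, PRESET_H], [PRESET_W, PRESET_H]])
    centre = np.float32([[PRESET_W / 2, PRESET_H / 2]])
    scaled = centre + (full - centre) * scale_rate
    # Coordinate conversion relation: image coordinate system -> plane coordinate system.
    M = cv2.getPerspectiveTransform(np.float32(first_vtx_img), scaled)
    pts = np.float32(second_vtx_img).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, M).reshape(-1, 2)   # target vertex coordinates
```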
In one embodiment, the server may be a cloud server. The mobile device is a mobile phone, and Bluetooth is used for connection between the mobile phone and the projector. The projector can be an ultra-short focal projector and is arranged on one side of the edge of the projection curtain through a bracket; or the projector is a desktop projector and is directly arranged on one side of the edge of the projection curtain. After the projector is started by a user, the projector projects a solid-color picture on the projection curtain, and it can be understood that the solid-color picture can be a white background picture. The user controls the mobile phone camera to shoot a projection curtain containing a white background picture, and the angle data of the gyroscope during shooting and the initial image after shooting are transmitted to the cloud server. The cloud server determines a first vertex coordinate of a projection picture in a target image under an image coordinate system, determines a projection curtain coordinate under the image coordinate system, namely a second vertex coordinate, calculates the projection curtain vertex coordinate under a plane coordinate system, obtains a target vertex coordinate of the projection curtain under the target coordinate system, and sends the target vertex coordinate to the projector, so that the projector corrects the projected second projection picture based on the target vertex coordinate to align the projection curtain. In this embodiment, with a preset scaling factor of 60% and a preset resolution of 1920×1080 as the calculation parameters, the corresponding detailed processing steps are as follows:
a) Turn on Bluetooth on the mobile phone and select the device to connect, establishing a communication connection between the mobile phone (equipped with a gyroscope) and the projector.
b) On the first page of the application program or applet, tap the shooting control to trigger the camera function of the mobile phone, so that the projection curtain and the projection picture can be photographed in landscape or portrait orientation and from any standing angle.
c) The projector projects a white background picture scaled down to 60%; it must be ensured that the projection picture lies within the projection curtain, after which the projection picture on the projection curtain is photographed.
d) The terminal device (mobile phone) equipped with a gyroscope transmits the gyroscope angle data to the cloud server.
e) The mobile phone transmits the captured initial image to the cloud server, and the cloud server calls the OpenCV computer vision library to preprocess the picture: the initial image is converted into a grayscale image, and a Gaussian filter function in OpenCV is then called to denoise it, yielding the preprocessed initial image.
f) The shooting orientation of the mobile phone is determined from the gyroscope angle data, and the initial image preprocessed in the previous step is then rotated accordingly (for example, a picture shot in landscape orientation is converted into portrait orientation) to obtain a target image that conforms to the preset shooting orientation.
g) Edge detection is performed on the target image from the previous step using the Canny edge detection function in OpenCV to obtain a contour image as shown in FIG. 4 (in FIG. 4, the pixel values of the black portions are 0 and those of the white portions are 255).
h) The 4 first vertex coordinates of the 60% projection picture and the 4 second vertex coordinates of the projection curtain are calculated in the image coordinate system.
i) The coordinate system conversion matrix M1 is calculated by calling the perspective transformation function getPerspectiveTransform of OpenCV, mapping the 4 second vertex coordinates of the projection curtain in the image coordinate system to the 4 vertex coordinates (0, 0), (1920, 0), (0, 1080), (1920, 1080) of the projection curtain in the plane coordinate system; that is, M1 is the conversion matrix from the image coordinate system to the plane coordinate system, namely the coordinate conversion relation between the image coordinate system and the plane coordinate system.
j) The 4 first vertex coordinates of the 60% picture in the image coordinate system are multiplied by the conversion matrix M1 to obtain the 4 vertex coordinates of the 60% picture in the plane coordinate system, which are then used, together with the preset scaling rate, to calculate the 4 vertex coordinates of the 100% picture in the plane coordinate system.
k) The coordinate system conversion matrix M2 is calculated by calling the perspective transformation function getPerspectiveTransform of OpenCV, mapping the 4 vertex coordinates of the 100% picture in the plane coordinate system to the 4 vertex coordinates (0, 0), (1920, 0), (0, 1080), (1920, 1080) of the 100% picture in the projector coordinate system; that is, M2 is the conversion matrix from the plane coordinate system to the projector coordinate system, namely the target coordinate conversion relation between the plane coordinate system and the projector coordinate system.
l) Finally, the projection curtain coordinates (0, 0), (1920, 0), (0, 1080), (1920, 1080) in the plane coordinate system are multiplied by M2 to obtain the projection curtain coordinates in the projector coordinate system, namely the target vertex coordinates.
m) The cloud server transmits the 4 target vertex coordinates of the projection curtain in the projector coordinate system to the mobile phone, and the mobile phone transmits them to the projector device; the projector device performs four-point keystone (trapezoidal) correction according to the target vertex coordinates so that the picture is projected entirely within the projection curtain, thereby completing the automatic curtain alignment function.
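The sketch below strings steps e) through l) together for the ultra-short focal (projector coordinate system) path, in Python with OpenCV. The Canny thresholds, the corner order (top-left, top-right, bottom-left, bottom-right), the Gaussian kernel size and the assumption that the 60% scaling is performed about the picture centre are illustrative choices, not details fixed by this embodiment; locating the four corners themselves is sketched later, alongside the apparatus description of the vertex search.

```python
import cv2
import numpy as np

SCALE = 0.6                    # preset scaling rate
W, H = 1920, 1080              # preset resolution
PLANE = np.float32([[0, 0], [W, 0], [0, H], [W, H]])  # TL, TR, BL, BR

def preprocess(initial_bgr, rotate_code=None):
    """Steps e)-f): grayscale conversion, Gaussian denoising, optional rotation."""
    gray = cv2.cvtColor(initial_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    if rotate_code is not None:               # e.g. cv2.ROTATE_90_CLOCKWISE,
        gray = cv2.rotate(gray, rotate_code)  # chosen from the gyroscope data
    return gray

def contour_image(target_gray):
    """Step g): Canny edge detection (thresholds are illustrative)."""
    return cv2.Canny(target_gray, 50, 150)

def target_vertices(first_vertices, second_vertices):
    """Steps i)-l): from the two sets of image-coordinate vertices to the
    curtain's target vertex coordinates in the projector coordinate system.
    Both inputs must use the same TL, TR, BL, BR order as PLANE."""
    first = np.float32(first_vertices)        # 60% picture corners, image coords
    second = np.float32(second_vertices)      # curtain corners, image coords

    # i) M1: image coordinate system -> plane coordinate system.
    M1 = cv2.getPerspectiveTransform(second, PLANE)

    # j) 60% picture corners in the plane system, then un-scale to 100%
    #    (assumes the scaling was applied about the picture centre).
    pic60 = cv2.perspectiveTransform(first.reshape(-1, 1, 2), M1).reshape(-1, 2)
    centre = pic60.mean(axis=0)
    pic100 = centre + (pic60 - centre) / SCALE

    # k) M2: plane coordinate system -> projector coordinate system.
    M2 = cv2.getPerspectiveTransform(np.float32(pic100), PLANE)

    # l) Curtain corners (plane system) -> projector coordinate system;
    #    these are the target vertex coordinates sent to the projector in step m).
    return cv2.perspectiveTransform(PLANE.reshape(-1, 1, 2), M2).reshape(-1, 2)
```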
In one embodiment, the projector and the projection curtain are both disposed on the same placement surface. Specifically, the placement surface may be horizontal, such as a table top, a countertop or a floor; it may also be vertical, such as a wall surface.
In one embodiment, the projector and the projection curtain may be positioned adjacent to one another on the placement surface. In this way, the length of the projection light path of the projector can be greatly reduced.
It should be understood that, although the steps in the flowcharts of some embodiments of the present application are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of execution of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiments of the present application also provide a projection screen correction apparatus for implementing the above-mentioned projection screen correction method. The implementation of the solution provided by the apparatus is similar to that described in the method above, so for the specific limitations in the embodiments of the projection screen correction apparatus provided below, reference may be made to the limitations of the projection screen correction method above, which are not repeated here.
In one embodiment, as shown in fig. 7, there is provided a projection screen correction apparatus 700 including: an image processing module 702, a coordinate calculation module 704, and a correction module 706, wherein:
an image processing module 702, configured to acquire an initial image captured by a mobile device for a projection curtain and pose data when the mobile device captures the initial image; a first projection picture preset by a projector is displayed in a projection curtain of the initial image; and if the gesture data does not accord with the preset gesture data, controlling the rotation of the initial image to obtain a target image.
A coordinate calculation module 704, configured to determine a first vertex coordinate of the projection screen in the target image under an image coordinate system and a second vertex coordinate of the projection curtain in the image coordinate system; performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and a preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system;
A correction module 706, configured to send the target vertex coordinates to the projector, so that the projector corrects the projected second projection screen to align with the projection curtain based on the target vertex coordinates.
In one embodiment, the gesture data is acquired by a gesture sensor in the mobile device; the image processing module 702 is further configured to control the initial image to rotate by a multiple of 90 degrees to obtain a target image if the gesture data does not match the preset gesture data.
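A brief sketch of this rotation, assuming the gesture data has already been reduced to a coarse device orientation of 0, 90, 180 or 270 degrees relative to the preset gesture; that reduction, and the direction of the compensating rotation, are assumptions for illustration:

```python
import cv2

# Orientation (degrees clockwise from the preset shooting gesture) -> rotation
# needed to bring the initial image back to the preset gesture.
ROTATIONS = {
    0: None,
    90: cv2.ROTATE_90_COUNTERCLOCKWISE,
    180: cv2.ROTATE_180,
    270: cv2.ROTATE_90_CLOCKWISE,
}

def to_target_image(initial_image, orientation_deg):
    """Rotate the initial image by a multiple of 90 degrees so that it conforms
    to the preset shooting gesture; no rotation is needed at 0 degrees."""
    code = ROTATIONS[orientation_deg % 360]
    return initial_image if code is None else cv2.rotate(initial_image, code)
```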
In one embodiment, the first projection screen is a solid color projection screen; the coordinate calculation module 704 is further configured to perform contour edge detection on the solid-color projection screen and the projection curtain in the target image, so as to obtain a contour image; the contour image comprises a picture contour edge of the projection picture and a curtain contour edge of the projection curtain; and determining the vertex of the contour edge of the screen in the contour image to obtain a first vertex coordinate of the projection screen under an image coordinate system, and determining the vertex of the contour edge of the curtain in the contour image to obtain a second vertex coordinate of the projection curtain under the image coordinate system.
In one embodiment, the first vertex coordinates are a plurality of; the plurality of first vertex coordinates are coordinates of vertices at different orientations of the picture contour edge. The coordinate calculation module 704 is further configured to perform, when locating the first vertex coordinate at each azimuth, detection of pixel values along a first direction towards the azimuth with a center point of the contour image as a starting point until a first pixel point having an edge pixel value is detected, as a first reference boundary point of the contour edge of the picture; the first direction has two corresponding sub-directions on an image coordinate system; for each sub-direction, if the pixel value of the adjacent pixel point of the first reference boundary point in the sub-direction is an edge pixel value, moving the position of the first reference boundary point towards the direction of the adjacent pixel point; and stopping movement when the pixel values of the adjacent pixel points of the first reference boundary point in the two sub directions are non-edge pixel values, and taking the coordinates of the first reference boundary point in the image coordinate system when the movement is stopped as the first vertex coordinates at the azimuth.
In one embodiment, the second vertex coordinates are a plurality; the second vertex coordinates are coordinates of a plurality of vertices at different positions of the edge of the curtain contour; the coordinate calculation module 704 is further configured to detect a pixel value of a pixel point in a second direction towards the azimuth with a vertex at a first vertex coordinate of the same azimuth of the contour edge of the screen as a starting point when locating a second vertex coordinate of each azimuth until a first pixel point with an edge pixel value is detected, and the first pixel point is used as a second reference boundary point on the contour edge of the curtain; the second direction has two corresponding sub-directions on the image coordinate system; for each sub-direction, if the pixel value of the adjacent pixel point of the second reference boundary point in the sub-direction is an edge pixel value, moving the position of the second reference boundary point towards the direction of the adjacent pixel point; and stopping movement when the pixel values of the adjacent pixel points of the second reference boundary point in the two sub directions are non-edge pixel values, and taking the coordinates of the second reference boundary point in the image coordinate system when the movement is stopped as second vertex coordinates at the azimuth.
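Read literally, the two paragraphs above describe one search routine applied twice: march from a starting point towards an azimuth until the first edge pixel is met (the reference boundary point), then slide that point along the two axis-aligned sub-directions while the neighbouring pixel is still an edge pixel. The sketch below is one possible reading of that routine; the pixel-value convention (255 = edge, 0 = background, as in FIG. 4), the diagonal step and the starting offset used for the curtain corner are the editor's assumptions:

```python
import numpy as np

EDGE = 255  # edge pixel value in the contour image (FIG. 4 convention)

def locate_vertex(contour, start, direction):
    """Locate one vertex for the azimuth given by `direction`.

    contour   -- 2-D uint8 contour image (0 background, 255 edges)
    start     -- (x, y) starting point: the image centre for first vertices, or a
                 point just past the picture corner for second vertices
    direction -- (dx, dy) unit step towards the azimuth, e.g. (-1, -1) for top-left
    """
    h, w = contour.shape
    x, y = start
    # Phase 1: march towards the azimuth until the first edge pixel is found.
    while 0 <= x < w and 0 <= y < h and contour[y, x] != EDGE:
        x += direction[0]
        y += direction[1]
    if not (0 <= x < w and 0 <= y < h):
        return None                      # no boundary found along this ray

    # Phase 2: the direction has two sub-directions on the image axes; follow
    # the contour edge while either neighbouring pixel is still an edge pixel.
    sub_dirs = [(direction[0], 0), (0, direction[1])]
    moved = True
    while moved:
        moved = False
        for dx, dy in sub_dirs:
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and contour[ny, nx] == EDGE:
                x, y = nx, ny
                moved = True
    return (x, y)                        # vertex coordinate in the image system

# Hypothetical use for the top-left azimuth: the picture corner from the image
# centre, then the curtain corner by restarting just outside the picture contour.
# pic_tl = locate_vertex(contour_img, (cx, cy), (-1, -1))
# cur_tl = locate_vertex(contour_img, (pic_tl[0] - 2, pic_tl[1] - 2), (-1, -1))
```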
In one embodiment, the target coordinate system is a projector coordinate system; the coordinate calculation module 704 is further configured to: perform coordinate relation calculation on the second vertex coordinates and the vertex coordinates of the projection curtain in the projection curtain plane coordinate system, to obtain the coordinate conversion relation between the image coordinate system and the plane coordinate system; perform coordinate conversion calculation on the first vertex coordinates according to the coordinate conversion relation between the image coordinate system and the plane coordinate system, to obtain the vertex coordinates of the scaled picture in the plane coordinate system; perform inverse-scaling calculation on the vertex coordinates of the scaled picture based on the preset scaling rate, to obtain the vertex coordinates of the original projection picture in the plane coordinate system; perform coordinate conversion relation calculation based on the vertex coordinates of the original projection picture and the vertex coordinates of the original projection picture in the projector coordinate system, to obtain the target coordinate conversion relation between the plane coordinate system and the projector coordinate system; and perform coordinate conversion calculation on the vertex coordinates of the projection curtain in the plane coordinate system according to the target coordinate conversion relation, to obtain the target vertex coordinates of the projection curtain in the projector coordinate system.
In one embodiment, the target coordinate system is a projection curtain plane coordinate system; the coordinate calculation module 704 is further configured to obtain the coordinate conversion relation between the image coordinate system and the projection curtain plane coordinate system according to the first vertex coordinates in the image coordinate system, the vertex coordinates of the original projection picture in the projection curtain plane coordinate system and the preset scaling rate; and to obtain the target vertex coordinates of the projection curtain in the projection curtain plane coordinate system according to the second vertex coordinates in the image coordinate system and the coordinate conversion relation between the image coordinate system and the projection curtain plane coordinate system.
The projection screen correction apparatus acquires an initial image shot by the mobile device of the projection curtain and the gesture data of the mobile device at the moment of shooting; the first projection picture, scaled and projected by the projector, is displayed within the projection curtain in the initial image. If the gesture data does not conform to the preset gesture data, the initial image is rotated to obtain a target image. A photographed image conforming to the preset shooting gesture can therefore be obtained without mounting a camera on the projector, which reduces hardware cost; shooting can be performed in any gesture, which reduces complexity and improves flexibility. The apparatus then determines the first vertex coordinates of the projection picture in the target image in the image coordinate system and the second vertex coordinates of the projection curtain in the image coordinate system, and performs coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and the preset scaling rate to obtain the target vertex coordinates of the projection curtain in the target coordinate system. Because the target image conforms to the preset shooting gesture, the procedure for obtaining the vertex coordinates is consistent, which reduces the complexity of the coordinate calculations and improves computational efficiency. The target vertex coordinates are sent to the projector, so that the projector corrects the projected second projection picture to align it with the projection curtain based on the target vertex coordinates. In this way, automatic picture alignment is completed with no additional hardware requirement on the projector and no requirement on the shooting angle of the initial image, thereby improving flexibility.
For the specific limitations of the projection screen correction apparatus, reference may be made to the limitations of the projection screen correction method above, which are not repeated here. Each module in the projection screen correction apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in hardware form in, or be independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to each of the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a projection screen correction method.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. The volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing examples represent only a few embodiments of the present application, and their description is relatively specific and detailed, but they are not therefore to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (10)

1. A projection screen correction method, the method comprising:
acquiring an initial image shot by mobile equipment aiming at a projection curtain and gesture data when the mobile equipment shoots the initial image; a first projection picture of the original projection image scaled and projected by a projector at a preset scaling rate is displayed on a projection curtain in the initial image;
If the gesture data do not accord with the preset gesture data, controlling the initial image to rotate so as to obtain a target image;
determining a first vertex coordinate of the first projection picture in the target image under an image coordinate system and a second vertex coordinate of a projection curtain under the image coordinate system;
performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and the preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system; if the projector is an ultra-short focal projector, the target coordinate system is a projector coordinate system; if the projector is a tele projector, the target coordinate system is a projection curtain plane coordinate system;
the target vertex coordinates are sent to the projector, such that the projector corrects the projected second projection picture to align with the projection curtain based on the target vertex coordinates.
2. The method of claim 1, wherein the gesture data is generated by a gesture sensor in the mobile device; if the gesture data does not match the preset gesture data, the controlling the rotation of the initial image to obtain the target image includes:
If the gesture data do not accord with the preset gesture data, the initial image is controlled to rotate by a multiple of 90 degrees so as to obtain a target image; the target image accords with a preset shooting gesture.
3. The method of claim 1, wherein the first projected picture is a solid color projected picture; the determining the first vertex coordinates of the first projection picture in the target image under the image coordinate system and the second vertex coordinates of the projection curtain under the image coordinate system comprises:
performing contour edge detection on the first projection picture and the projection curtain in the target image to obtain a contour image; the contour image comprises a picture contour edge of the first projection picture and a curtain contour edge of the projection curtain;
and determining the vertex of the contour edge of the screen in the contour image to obtain a first vertex coordinate of the projection screen under an image coordinate system, and determining the vertex of the contour edge of the curtain in the contour image to obtain a second vertex coordinate of the projection curtain under the image coordinate system.
4. A method according to claim 3, wherein the first vertex coordinates are a plurality of; the first vertex coordinates are coordinates of vertices at different orientations of the contour edge of the picture; the determining the vertex of the contour edge of the picture in the contour image to obtain the first vertex coordinate of the projection picture under the image coordinate system comprises:
When the first vertex coordinates at each azimuth are positioned, detecting pixel values along a first direction towards the azimuth by taking the central point of the contour image as a starting point until a first pixel point with an edge pixel value is detected and is used as a first reference boundary point of the contour edge of the picture; the first direction has two corresponding sub-directions in an image coordinate system;
for each sub-direction, if the pixel value of the adjacent pixel point of the first reference boundary point in the sub-direction is an edge pixel value, moving the position of the first reference boundary point towards the direction of the adjacent pixel point;
and stopping movement when the pixel values of the adjacent pixel points of the first reference boundary point in the two sub-directions are both non-edge pixel values, and taking the coordinates of the first reference boundary point in the image coordinate system when the movement is stopped as the first vertex coordinates at the azimuth.
5. A method according to claim 3, wherein the second vertex coordinates are a plurality; the second vertex coordinates are coordinates of a plurality of vertices at different orientations of the curtain contour edge;
the determining the vertex of the contour edge of the curtain in the contour image to obtain the second vertex coordinate of the projection curtain under the image coordinate system comprises the following steps:
When the second vertex coordinates at each azimuth are positioned, the vertex at the first vertex coordinates at the same azimuth of the contour edge of the picture is taken as a starting point, and the pixel value of the pixel point is detected along the second direction towards the azimuth until the first pixel point with the edge pixel value is detected and is taken as a second reference boundary point on the contour edge of the curtain; the second direction has two corresponding sub-directions in an image coordinate system;
for each sub-direction, if the pixel value of the adjacent pixel point of the second reference boundary point in the sub-direction is an edge pixel value, moving the position of the second reference boundary point towards the direction of the adjacent pixel point;
and stopping movement when the pixel values of the adjacent pixel points of the second reference boundary point in the two sub-directions are both non-edge pixel values, and taking the coordinates of the second reference boundary point in the image coordinate system when the movement is stopped as the second vertex coordinates at the azimuth.
6. The method of claim 1, wherein when the target coordinate system is a projector coordinate system, the performing coordinate conversion calculation based on the first vertex coordinate, the second vertex coordinate, and the preset scaling factor, to obtain the target vertex coordinate of the projection curtain in the target coordinate system comprises:
Carrying out coordinate relation calculation on the second vertex coordinates and the vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain to obtain the coordinate conversion relation between the image coordinate system and the plane coordinate system;
according to the coordinate conversion relation between the image coordinate system and the plane coordinate system, carrying out coordinate conversion calculation on the first vertex coordinate to obtain the vertex coordinate of the first projection picture under the plane coordinate system;
carrying out inverse-scaling calculation on the vertex coordinates of the scaled picture based on the preset scaling rate to obtain the vertex coordinates of the original projection picture in the plane coordinate system;
performing coordinate conversion relation calculation based on the vertex coordinates of the original projection picture and the vertex coordinates of the original projection picture under a projector coordinate system to obtain a target coordinate conversion relation of the plane coordinate system and the projector coordinate system;
and carrying out coordinate conversion calculation on the vertex coordinates of the projection curtain under the plane coordinate system according to the target coordinate conversion relation to obtain target vertex coordinates of the projection curtain under the projector coordinate system.
7. The method of claim 1, wherein, when the target coordinate system is a projection curtain plane coordinate system, the performing coordinate conversion calculation based on the first vertex coordinate, the second vertex coordinate and the preset scaling factor to obtain the target vertex coordinate of the projection curtain in the target coordinate system includes:
Obtaining a coordinate conversion relation between the image coordinate system and the projection curtain plane coordinate system according to the first vertex coordinate in the image coordinate system, the vertex coordinate of the original projection picture in the projection curtain plane coordinate system and the preset scaling factor;
and obtaining the target vertex coordinates of the projection curtain under the plane coordinate system of the projection curtain according to the second vertex coordinates under the image coordinate system and the coordinate conversion relation between the image coordinate system and the plane coordinate system of the projection curtain.
8. The method of claim 1, wherein the projector and the projection screen are disposed on a same placement surface.
9. A projection screen correction apparatus, the apparatus comprising:
the image processing module is used for acquiring an initial image shot by the mobile device aiming at the projection curtain and gesture data when the mobile device shoots the initial image; a first projection picture of the original projection image scaled and projected by a projector at a preset scaling rate is displayed on a projection curtain in the initial image; if the gesture data do not accord with the preset gesture data, controlling the initial image to rotate so as to obtain a target image;
The coordinate calculation module is used for determining a first vertex coordinate of the first projection picture in the target image under an image coordinate system and a second vertex coordinate of the projection curtain under the image coordinate system; performing coordinate conversion calculation based on the first vertex coordinates, the second vertex coordinates and the preset scaling rate to obtain target vertex coordinates of the projection curtain under a target coordinate system; if the projector is an ultra-short focal projector, the target coordinate system is a projector coordinate system; if the projector is a tele projector, the target coordinate system is a projection curtain plane coordinate system;
and the correction module is used for sending the target vertex coordinates to the projector, so that the projector corrects the projected second projection picture to align with the projection curtain based on the target vertex coordinates.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 8.
CN202210840156.5A 2022-07-18 2022-07-18 Projection picture correction method, apparatus and storage medium Active CN115174878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210840156.5A CN115174878B (en) 2022-07-18 2022-07-18 Projection picture correction method, apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210840156.5A CN115174878B (en) 2022-07-18 2022-07-18 Projection picture correction method, apparatus and storage medium

Publications (2)

Publication Number Publication Date
CN115174878A CN115174878A (en) 2022-10-11
CN115174878B true CN115174878B (en) 2024-03-15

Family

ID=83495791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210840156.5A Active CN115174878B (en) 2022-07-18 2022-07-18 Projection picture correction method, apparatus and storage medium

Country Status (1)

Country Link
CN (1) CN115174878B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11317103A (en) * 1998-03-06 1999-11-16 Osram Sylvania Inc Ac high luminance dischargelamp with magnetic deflection
JP2005092592A (en) * 2003-09-18 2005-04-07 Nec Viewtechnology Ltd Projector and projection system
JP2007181146A (en) * 2005-12-28 2007-07-12 Casio Comput Co Ltd Image projector, method and program for compensating projected image of image projector
CN101530744A (en) * 2008-03-13 2009-09-16 住友化学株式会社 Process for decomposing volatile aromatic compound
JP2010183219A (en) * 2009-02-04 2010-08-19 Seiko Epson Corp Method of measuring zoom ratio for projection optical system, projected image correction method using the zoom ratio measurement method, and projector executing the correction method
JP2011182078A (en) * 2010-02-26 2011-09-15 Seiko Epson Corp Correction information calculation device, image correction device, image display system, and correction information calculation method
CA2781289A1 (en) * 2011-08-03 2013-02-03 The Boeing Company Projection aided feature measurement using uncalibrated camera
JP2013192098A (en) * 2012-03-14 2013-09-26 Ricoh Co Ltd Projection system, projection method, program, and recording medium
JP2014071850A (en) * 2012-10-02 2014-04-21 Osaka Prefecture Univ Image processing apparatus, terminal device, image processing method, and program
CN105676572A (en) * 2016-04-19 2016-06-15 深圳市神州云海智能科技有限公司 Projection correction method and device for projector equipped on mobile robot
CN109782962A (en) * 2018-12-11 2019-05-21 中国科学院深圳先进技术研究院 A kind of projection interactive method, device, system and terminal device
CN110099266A (en) * 2019-05-14 2019-08-06 峰米(北京)科技有限公司 Projector's frame correction method, device and projector
CN114640833A (en) * 2022-03-11 2022-06-17 峰米(重庆)创新科技有限公司 Projection picture adjusting method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115174878A (en) 2022-10-11

Similar Documents

Publication Publication Date Title
US8971666B2 (en) Fisheye correction with perspective distortion reduction method and related image processor
CN106934777B (en) Scanning image acquisition method and device
KR100796849B1 (en) Method for photographing panorama mosaics picture in mobile device
US7899270B2 (en) Method and apparatus for providing panoramic view with geometric correction
JP2022528659A (en) Projector keystone correction methods, devices, systems and readable storage media
CN112689135A (en) Projection correction method, projection correction device, storage medium and electronic equipment
US11282232B2 (en) Camera calibration using depth data
US9838614B1 (en) Multi-camera image data generation
WO2018040180A1 (en) Photographing method and apparatus
JPH11261868A (en) Fisheye lens camera device and image distortion correction method and image extraction method thereof
CN111083456A (en) Projection correction method, projection correction device, projector and readable storage medium
CN105516597A (en) Method and device for processing panoramic photography
CN114663618A (en) Three-dimensional reconstruction and correction method, device, equipment and storage medium
CN111292278B (en) Image fusion method and device, storage medium and terminal
CN109690568A (en) A kind of processing method and mobile device
CN109690611B (en) Image correction method and device
CN114640833A (en) Projection picture adjusting method and device, electronic equipment and storage medium
CN114268777A (en) Starting method of laser projection equipment and laser projection system
CN115174878B (en) Projection picture correction method, apparatus and storage medium
WO2015198478A1 (en) Image distortion correction apparatus, information processing apparatus and image distortion correction method
CN115174879B (en) Projection screen correction method, apparatus, computer device and storage medium
CN115278184B (en) Projection picture correction method and device
CN115086625A (en) Correction method, device and system of projection picture, correction equipment and projection equipment
CN113747011A (en) Auxiliary shooting method and device, electronic equipment and medium
JP2018032991A (en) Image display unit, image display method and computer program for image display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant