CN117413511A - Electronic device and control method thereof - Google Patents


Info

Publication number
CN117413511A
Authority
CN
China
Prior art keywords
information, image, external device, acquired, corrected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280039016.6A
Other languages
Chinese (zh)
Inventor
金志晚
朴在成
郑圣运
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020210136104A external-priority patent/KR20220162595A/en
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority claimed from PCT/KR2022/002569 external-priority patent/WO2022255594A1/en
Publication of CN117413511A publication Critical patent/CN117413511A/en
Pending legal-status Critical Current

Abstract

An electronic device is disclosed. The electronic device includes an image projection unit and a processor. The processor: controls the image projection unit to project a test image including a plurality of marks onto a projection surface; obtains third information indicating positions of vertices of the test image in a captured image, based on first information indicating positions of the plurality of marks in the test image and second information indicating positions of the plurality of marks in an image of the projection surface captured by an external device; corrects the third information based on posture information of the external device; and performs keystone correction based on the corrected third information.

Description

Electronic device and control method thereof
Technical Field
The present disclosure relates to an electronic device and a control method thereof. For example, the present disclosure relates to an electronic device for projecting an image and a control method thereof.
Background
With the development of electronic and optical technologies, various projectors have been used. A projector refers to an electronic device that projects light onto a projection plane (or projection screen) to form an image on the projection plane.
When an image is projected using a projector, a rectangular image is displayed on the projection plane only when the projector is placed on a flat surface and faces the projection plane squarely. Otherwise, up-down and/or left-right distortion may occur, or a rotated image may appear on the projection plane. Such distortion is referred to as the keystone effect.
Conventionally, a projector photographs a projection plane using an image sensor, and calculates and uses an inclination angle between the projection plane and the projector by processing the photographed image. In this case, correction processing for aligning the projection plane of the projector with the measurement axis of the image sensor should be performed, but if an error occurs during the correction processing, calculation of the angle may be greatly affected. In addition, the process of processing the image captured by the image sensor also requires a large amount of computation. In addition, when the keystone correction image is projected onto the projection plane, the size of the image may vary according to the distance between the projection plane and the projector. Conventionally, there is also a problem that it is difficult to adjust a keystone correction image to the size of a projection plane.
Disclosure of Invention
Embodiments of the present disclosure provide an electronic apparatus configured to perform accurate keystone correction by reflecting pose (posture) information of an external device when keystone correction is performed based on an image captured by the external device.
Technical solution
According to an example embodiment of the present disclosure, an electronic device includes: an image projector; and a processor configured to: control the image projector to project a test image including a plurality of marks onto a projection plane; acquire third information indicating positions of vertices of the test image in a captured image, based on first information indicating positions of the plurality of marks in the test image and second information indicating positions of the plurality of marks in an image of the projection plane captured by an external device; correct the third information based on pose information of the external device; and perform keystone correction based on the corrected third information.
The processor may be configured to: rotation-correct the third information based on the pose information of the external device; projection-correct the rotation-corrected third information based on distance information between the external device and the projection plane; and acquire the corrected third coordinate information.
The pose information of the external device may include at least one of roll information, pitch information, or yaw information, wherein the roll information and the pitch information are acquired by an acceleration sensor provided in the external device, and wherein the yaw information is acquired based on angle-of-view information of a camera that photographs the projection plane in the external device.
The processor may be configured to: based on the projection plane being recognized as a predetermined (e.g., designated) area according to the pose information of the external device, acquire at least one of the roll information, the pitch information, or the yaw information by changing a reference value of the gravitational direction among output values of the acceleration sensor; and correct the third coordinate information based on the acquired information.
Each of the plurality of marks may be in the form of a pattern including a black area and a white area having a predetermined (e.g., specified) ratio in each of the plurality of directions.
Each of the plurality of markers may be positioned in an interior region at a predetermined (e.g., specified) ratio from the four vertices of the test image, wherein the processor is configured to: acquire fourth information indicating the vertex positions of the test image based on the first information and the predetermined (e.g., specified) ratio, and acquire the third information from the photographed image based on the fourth information and a first transformation matrix, wherein the first transformation matrix is acquired based on a mapping relationship between the first information and the second information.
The processor may be configured to: acquire a second transformation matrix based on the corrected third information and vertex coordinate information of the test image, and perform keystone correction by identifying, within a region identified based on the corrected third information, a rectangular region having a maximum size corresponding to an aspect ratio of an input image.
The processor may be configured to: fifth information corresponding to the identified rectangular region is acquired, and keystone correction is performed by applying an inverse matrix of the second transformation matrix to the acquired fifth coordinate information.
The processor may be configured to: expand a rectangle from the center point at which diagonals connecting the vertices of a first region, acquired based on the corrected third information, meet; identify whether a vertex of the expanded rectangle meets an edge of the first region; based on a vertex of the rectangle meeting an edge of the first region, expand the smaller side of the rectangle in predetermined (e.g., specified) pixel units and expand the larger side of the rectangle to correspond to the aspect ratio; and identify the rectangular region having the maximum size at a position where vertices corresponding to a diagonal of the rectangle meet edges of the first region.
The apparatus may further include a communication interface including communication circuitry, wherein the processor is configured to: receive the second information indicating the positions of the plurality of marks from the external device through the communication interface, or acquire the second information indicating the positions of the plurality of marks based on a photographed image received from the external device through the communication interface.
The plurality of markers may be located in a plurality of different regions in the test image for correcting distortion of the projection plane, and may have different shapes for identifying a position corresponding to each of the plurality of markers.
According to an example embodiment of the present disclosure, a method for controlling an electronic device includes: projecting a test image including a plurality of marks onto a projection plane; acquiring third information indicating positions of vertices of the test image in a captured image, based on first information indicating positions of the plurality of marks in the test image and second information indicating positions of the plurality of marks in an image of the projection plane captured by an external device; correcting the third information based on pose information of the external device; and performing keystone correction based on the corrected third information.
Correcting the third information may include: rotation-correcting the third information based on the pose information of the external device; projection-correcting the rotation-corrected third information based on distance information between the external device and the projection plane; and acquiring the corrected third coordinate information.
The pose information of the external device includes at least one of roll information, pitch information, or yaw information, wherein the roll information and the pitch information are acquired by an acceleration sensor provided in the external device, and wherein the yaw information is acquired based on angle-of-view information of a camera for photographing a projection plane in the external device.
Performing keystone correction may include: based on the projection plane being recognized as a predetermined (e.g., specified) area according to the pose information of the external device, acquiring at least one of the roll information, the pitch information, or the yaw information by changing a reference value of the gravitational direction among output values of the acceleration sensor; and correcting the third coordinate information based on the acquired information.
Each of the plurality of marks may be in the form of a pattern including a black area and a white area having a predetermined (e.g., specified) ratio in each of the plurality of directions.
Each of the plurality of markers may be positioned in an interior region at a predetermined (e.g., specified) ratio from the four vertices of the test image, wherein the method further comprises: acquiring fourth information indicating the vertex positions of the test image based on the first information and the predetermined ratio, and acquiring the third information from the photographed image based on the fourth information and a first transformation matrix, wherein the first transformation matrix is acquired based on a mapping relationship between the first information and the second information.
Performing keystone correction may include: acquiring a second transformation matrix based on the corrected third information and vertex coordinate information of the test image, and performing keystone correction by identifying, within a region identified based on the corrected third information, a rectangular region having a maximum size corresponding to the aspect ratio of the input image.
Performing keystone correction may include: fifth information corresponding to the identified rectangular region is acquired, and keystone correction is performed by applying an inverse matrix of the second transformation matrix to the acquired fifth coordinate information.
Performing keystone correction may include: expanding a rectangle from the center point at which diagonals connecting the vertices of a first region, acquired based on the corrected third information, meet; identifying whether a vertex of the expanded rectangle meets an edge of the first region; based on a vertex of the rectangle meeting an edge of the first region, expanding the smaller side of the rectangle in predetermined (e.g., specified) pixel units and expanding the larger side of the rectangle to correspond to the aspect ratio; and identifying the rectangular region having the maximum size at a position where vertices corresponding to a diagonal of the rectangle meet edges of the first region.
Acquiring the second information may include: receiving the second information indicating the positions of the plurality of marks from the external device through a communication interface, or acquiring the second information indicating the positions of the plurality of marks based on a photographed image received from the external device through the communication interface.
The plurality of markers may be located in a plurality of different regions in the test image for correcting distortion of the projection plane, and may have different shapes for identifying a position corresponding to each of the plurality of markers.
According to various exemplary embodiments, when keystone correction is performed based on an image captured by an external device, accurate keystone correction may be performed by reflecting pose information of the external device. Therefore, user convenience can be improved.
Drawings
The foregoing and other aspects, features, and advantages of certain embodiments of the present disclosure will become more apparent from the following detailed description, taken in conjunction with the accompanying drawings, in which:
fig. 1a and 1b are diagrams illustrating concepts of an example keystone correction method and coordinate system for better understanding;
fig. 2 is a block diagram showing an example configuration of a projector according to various embodiments;
fig. 3a, 3b, 3c, 3d, and 3e are diagrams illustrating examples of test images according to various embodiments;
Fig. 4a and 4b are diagrams illustrating example coordinate information according to various embodiments;
fig. 5 is a diagram illustrating exemplary third coordinate information according to various embodiments;
fig. 6a and 6b are diagrams illustrating an example acquisition method of roll information and pitch information according to various embodiments;
fig. 7 is a diagram illustrating an example method of acquiring yaw information, in accordance with various embodiments;
fig. 8a and 8b are diagrams illustrating an example distance acquisition method between a user terminal and a projection plane according to various embodiments;
FIG. 9 is a diagram illustrating an example method for identifying a rectangular region of maximum size, in accordance with various embodiments;
FIG. 10 is a diagram illustrating an example projected image according to keystone correction in accordance with various embodiments;
fig. 11 is a block diagram illustrating an example configuration of an electronic device according to various embodiments;
fig. 12 is a block diagram illustrating an example configuration of a user terminal in accordance with various embodiments; and
fig. 13 is a flowchart illustrating an example method of controlling an electronic device, according to various embodiments.
Detailed Description
Hereinafter, the present disclosure will be described in more detail with reference to the accompanying drawings.
Terms used to describe various example embodiments will be briefly explained, and example embodiments will be described in more detail with reference to the accompanying drawings.
The terms used in the present disclosure are selected as general terms that are currently widely used in view of the configuration and functions of the present disclosure, but may vary depending on the intention of those skilled in the art, precedents, the emergence of new technologies, and the like. Furthermore, in certain cases, the terms may be arbitrarily selected. In this case, the meaning of the terms will be described in the description of the corresponding embodiments. Accordingly, the terms used in the specification should not necessarily be construed as simple names of the terms, but should be defined based on the meanings of the terms and the overall contents of the present disclosure.
The terms "having," "can have," "including," and "can include," as used in example embodiments of the present disclosure, indicate the presence of corresponding features (e.g., elements such as numerical values, functions, operations, or portions), and do not exclude the presence of additional features.
The term "at least one of a or/and B" may refer to, for example, at least one a, including at least one B, or including both at least one a and at least one B.
Terms such as "first" and "second" as used in various example embodiments may be used to reference various elements regardless of order and/or importance of the respective elements and are not limiting of the respective elements.
When an element (e.g., a first element) is "operatively or communicatively coupled to" or "connected to" another element (e.g., a second element), the element may be directly coupled to the other element or may be coupled to it through yet another element (e.g., a third element).
In the description, the term "configured to" may be used interchangeably with, for example, "adapted to", "having the capabilities of … …", "designed to", "adapted to", "manufactured to" or "capable of" in some cases. The term "configured (set to)" does not necessarily mean "specially designed" at the hardware level.
The singular is intended to include the plural unless the context clearly indicates otherwise. The terms "comprises," "comprising," "including," "configured to" and the like in the specification are used to indicate the presence of features, numbers, steps, operations, elements, components, or combinations thereof, and they should not exclude the possibility of combining or adding one or more features, numerals, steps, operations, elements, components or combinations thereof.
In this disclosure, a "module" or "unit" may perform at least one function or operation, and may be implemented by hardware or software, or a combination of hardware and software. In addition, a plurality of "modules" or a plurality of "units" may be integrated into at least one module, and may be at least one processor other than "modules" or "units" that should be implemented in specific hardware.
Hereinafter, various example embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings.
Fig. 1a and 1b are diagrams illustrating concepts of an example keystone correction method and coordinate system for better understanding.
The electronic device 100 having a function of projecting an image, i.e., a projector function, produces a relatively accurate screen ratio when the projector is positioned squarely in front of the projection plane; but when this condition is not satisfied due to spatial constraints, the electronic device 100 projects a screen that deviates from the projection plane or a diamond-shaped (rhombus-shaped) screen distorted upward, downward, rightward, or leftward. In this case, keystone correction may be required. Keystone correction may refer to a function of adjusting the projection screen to be closer to its original rectangular shape, for example, by forcibly moving the sides of the screen to be displayed (i.e., to be projected).
According to an embodiment, keystone correction may be performed using a user terminal 200 as shown in fig. 1a. For example, the projection plane 10 onto which the image is projected may be photographed using a camera provided in the user terminal 200, and keystone correction may be performed based on the photographed image. Projective transforms may be used. Projective transformation may refer, for example, to transforming a projected image from 3D space to 2D space. In other words, it is a method of transforming between two images of the same scene viewed from two different viewpoints in 3D space. A matrix representing the relationship between two such images may be referred to as a homography matrix (hereinafter referred to as an H matrix). For example, the size of the H matrix may be 3×3. Four pairs of corresponding coordinates may be required to obtain the H matrix. According to an embodiment, the four corresponding pairs of coordinates may be coordinates on a world coordinate system.
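As an illustration of the projective transformation described above, the following is a minimal sketch (written in Python with NumPy and OpenCV, which are not part of this disclosure; all point values are illustrative placeholders) of estimating a 3×3 H matrix from four pairs of corresponding coordinates and applying it to a point:

```python
import numpy as np
import cv2

# Four marker coordinates in the original test image (projector coordinate system)
# and the same markers found in the captured image (camera coordinate system).
# The values are placeholders for illustration only.
proj_pts = np.float32([[100, 100], [1820, 100], [100, 980], [1820, 980]])
cam_pts  = np.float32([[412, 356], [1571, 389], [430, 1005], [1544, 1030]])

# A 3x3 homography (H matrix) needs at least four correspondences.
H = cv2.getPerspectiveTransform(proj_pts, cam_pts)

# Mapping an arbitrary projector-space point into the camera image:
p = np.float32([[[960, 540]]])           # shape (1, 1, 2) as required by OpenCV
p_cam = cv2.perspectiveTransform(p, H)   # homogeneous divide is handled internally
print(H)
print(p_cam)
```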
Fig. 1b is a diagram showing an example concept of a coordinate system for better understanding.
As shown in fig. 1b, there are four coordinate systems in image geometry: the world, camera, normal (normalized), and pixel coordinate systems. The world coordinate system and the camera coordinate system are three-dimensional coordinate systems, and the normal coordinate system and the pixel coordinate system are two-dimensional coordinate systems.
The world coordinate system is a coordinate system for representing the position of an object. The world coordinate system is a coordinate system that can be arbitrarily used, for example, an edge of a space may be set as an origin, a direction of one wall may be set as an X-axis, a direction of the other wall may be set as a Y-axis, and a direction facing the sky may be set as a Z-axis. Points on the world coordinate system may be denoted as P (X, Y, Z).
The camera coordinate system is a coordinate system relative to the camera. For example, the camera coordinate system may set the focal point (center of the lens) of the camera as the origin, the front optical axis direction of the camera as the Z axis, the downward direction of the camera as the Y axis, and the right direction as the X axis. Points on the camera coordinate system may be denoted as Pc = (Xc, Yc, Zc).
The pixel image coordinate system may be referred to as an image coordinate system. As shown in fig. 4b, the pixel coordinate system may be a coordinate system of an image viewed by an actual eye, and the upper left of the image may be set as an origin, the right direction is set as an x-axis increasing direction, and the downward direction is set as a y-axis increasing direction. The plane defined by the x-axis and the y-axis of the pixel coordinate system is referred to as the image plane.
Geometrically, a point P = (X, Y, Z) in 3D space may pass through the focal point of the camera (or the focal point of the lens) and be projected to a point p_img = (x, y) on the image plane. All 3D points on the ray connecting the point P and the point p_img are projected to the same p_img. Thus, p_img can be uniquely determined from the 3D point P, but conversely, P may not be obtainable from the image pixel p_img without additional information. The unit of the pixel coordinate system is the pixel, and a point may be expressed as p_img = (x, y).
The normalized image coordinate system (normal coordinate system) may be an image coordinate system that eliminates and/or reduces the effects of the internal parameters of the camera. In addition, the normal coordinate system is a coordinate system in which the units of the coordinate system are removed (normalized), and it is the coordinate system of a virtual image plane defined at a distance of 1 from the focal point of the camera. In other words, it may be an image plane obtained by translating the original image plane in parallel to a distance of 1 from the focal point of the camera. The origin of the normal coordinate system is the midpoint of the image plane (the intersection with the optical axis Zc). Points on the normal coordinate system may be represented as p' = (u, v). Even if the same scene is photographed at the same position and the same angle, different images can be acquired depending on the camera or camera settings used. The normalized image plane may be used because it is more efficient to remove the effects of these elements and analyze the common geometric characteristics on a normalized image plane.
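For illustration, a pixel coordinate can be mapped to the normalized image plane by removing the camera's internal parameters; the sketch below assumes a simple pinhole model with an intrinsic matrix K (the values are illustrative, not taken from this disclosure):

```python
import numpy as np

# Intrinsic matrix of a pinhole camera model (illustrative values):
# fx, fy are focal lengths in pixels, (cx, cy) is the principal point.
K = np.array([[1500.0,    0.0, 2000.0],
              [   0.0, 1500.0, 1500.0],
              [   0.0,    0.0,    1.0]])

def pixel_to_normalized(x, y, K):
    """Map a pixel coordinate to the normalized image plane (distance 1 from the focal point)."""
    p = np.array([x, y, 1.0])
    u, v, _ = np.linalg.inv(K) @ p
    return u, v

print(pixel_to_normalized(2500.0, 1800.0, K))
```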
When keystone correction of an image projected by the electronic apparatus 100 is performed based on an image photographed using a camera provided in the user terminal 200, it may be difficult to perform accurate keystone correction when the posture of the user terminal 200 is incorrect.
Accordingly, various embodiments for performing accurate keystone correction based on pose information of the user terminal 200 and a predetermined (e.g., specified) pattern image are described below.
Fig. 2 is a block diagram illustrating an example configuration of an electronic device according to various embodiments.
According to fig. 2, the electronic device 100 may include an image projector 110 and at least one processor (e.g., including processing circuitry) 120. The electronic apparatus 100 may be implemented, for example, as a projector for projecting an image onto a wall or projection plane or various types of devices having an image projection function.
The image projector 110 may perform a function of outputting an image to a projection plane by projecting light representing the image to the outside. Here, the projection plane may be a part of the real world space of the output image or a separate projection plane. Image projector 110 may include various detailed configurations such as, for example, but not limited to, at least one light source of lamps, LEDs and lasers, projection lenses, reflectors, and the like.
The image projector 110 may project an image in one of various projection methods, such as a Cathode Ray Tube (CRT) method, a Liquid Crystal Display (LCD) method, a Digital Light Processing (DLP) method, a laser method, etc. The image projector 110 may include at least one light source.
The image projector 110 may be set, for example and without limitation, to a 4:3 aspect ratio, a 5:4 aspect ratio, or a 16:9 wide aspect ratio according to the purpose of the electronic device 100 or the user's setting, and can output images at various resolutions, such as WVGA (854 x 480), SVGA (800 x 600), XGA (1024 x 768), WXGA (1280 x 720), WXGA (1280 x 800), SXGA (1280 x 1024), UXGA (1600 x 1200), Full HD (1920 x 1080), and the like.
In addition, the image projector 110 may perform various functions for adjusting the projected image under the control of the processor 120. For example, the image projector 110 may perform a zoom function, a lens shift function, and the like.
The at least one processor 120 (hereinafter referred to as the processor) may include various processing circuits, and is electrically connected to the image projector 110 to control the overall operation of the electronic device 100. The processor 120 may include one or more processors. For example, the processor 120 may perform the operations of the electronic device 100 according to various embodiments of the present disclosure by executing at least one instruction stored in a memory (not shown).
According to an embodiment, the processor 120 may be implemented as, for example, but not limited to, a Digital Signal Processor (DSP) that processes digital image signals, a microprocessor, a Graphics Processing Unit (GPU), an Artificial Intelligence (AI) processor, a Neural Processing Unit (NPU), a timing controller (T-CON), and the like. However, it is not limited thereto, and may include, for example, but is not limited to, one or more of a Central Processing Unit (CPU), a dedicated processor, a Micro Controller Unit (MCU), a Micro Processing Unit (MPU), a controller, an Application Processor (AP), a Communication Processor (CP), or an ARM processor, or may be defined by the corresponding term. In addition, the processor 120 may be implemented as a System on Chip (SoC) with a built-in processing algorithm or a Large Scale Integration (LSI), or may be implemented in the form of an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA).
The processor 120 may control the image projector 110 to project, onto the projection plane, a test image in which each of a plurality of markers (or tags) is included in a different region. For example, the test image may be an image including only the plurality of marks. However, the test image may also include other images (e.g., a background image) in addition to the plurality of marks, as long as they do not overlap the positions of the plurality of marks.
According to an embodiment, each of the plurality of marks may be in the form of a pattern including a black area and a white area having a predetermined ratio in each of the plurality of directions.
According to an embodiment, the plurality of markers may be located at predefined locations, e.g., areas within a threshold distance from the four vertices of the test image. For example, the plurality of markers may be positioned in an interior region at a predetermined ratio of the size of the entire image, measured from the four vertices of the test image.
Fig. 3a, 3b, 3c, 3d, and 3e are diagrams illustrating example test images according to various embodiments.
Fig. 3a is a diagram illustrating an example shape of a mark (or label) according to an embodiment. The marks may be referred to by various terms, such as labels, patterns, etc. The mark may have a predetermined (e.g., specified) ratio of black area to white area in a plurality of directions (e.g., left-right, up-down, and diagonal directions). For example, as shown, the ratio of white pixels to black pixels may be 1:1:3:1:1 in any of the directions A, B, and C. In this case, the mark can be identified in all 360° directions. However, these values are merely examples and may be varied in various ways, such as 1:2:3:2:1 or 1:1:1:4:1:1. In other words, the ratio may be changed to any form as long as the pattern is predetermined (as used in the present disclosure, the term "predetermined" may be used interchangeably with the term "specified") between the apparatus that projects the test image including the marks (e.g., the electronic device 100) and the apparatus that identifies the positions of the marks in the photographed image (e.g., the user terminal 200).
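For illustration only, the following sketch (in Python with NumPy; not part of this disclosure) checks whether a run of binarized black/white pixels along a scanline approximates a 1:1:3:1:1 marker ratio; the tolerance value and the binarization step are assumptions:

```python
import numpy as np

def run_lengths(scanline):
    """Return the lengths of consecutive runs of equal values along a 1-D binary scanline."""
    change = np.flatnonzero(np.diff(scanline)) + 1
    bounds = np.concatenate(([0], change, [len(scanline)]))
    return np.diff(bounds)

def matches_ratio(runs, ratio=(1, 1, 3, 1, 1), tol=0.5):
    """Check whether five consecutive runs approximate the marker ratio (e.g., 1:1:3:1:1)."""
    if len(runs) != len(ratio):
        return False
    unit = sum(runs) / sum(ratio)          # estimated width of one ratio unit in pixels
    return all(abs(r - q * unit) <= tol * unit for r, q in zip(runs, ratio))

# Example: black(0)/white(1) runs of 5, 5, 15, 5, 5 pixels match 1:1:3:1:1.
line = np.array([0] * 5 + [1] * 5 + [0] * 15 + [1] * 5 + [0] * 5)
print(matches_ratio(run_lengths(line)))   # True
```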
Fig. 3b is a diagram illustrating example locations of markers according to various embodiments.
The location of the markers in the test image may be a location known in advance to at least one of the electronic device 100 that projects the test image, the electronic device 100 that analyzes the photographed image, or the user terminal 200. Alternatively, the electronic device 100 may transmit the corresponding location information to another device, or the other device may receive the corresponding location information through an external server. For example, as shown in fig. 3b, the plurality of markers may be located in the interior region at alpha% in the vertical and horizontal directions from the four vertices of the image.
Fig. 3c is a diagram illustrating example locations of markers according to various embodiments.
Depending on the implementation, when distortion is present on the projection plane, the distortion of the projection plane may not be identified or corrected by coordinates of four points. In this case, as shown in fig. 3c, a plurality of marks located in the respective areas may be used. However, in this case, each mark may need to have a different shape to identify to which position each mark corresponds. Thus, marks of various shapes may be used as position marks, as shown in fig. 3 c. As described above, when distortion of a projection plane is identified using a plurality of marks, image distortion caused by the distortion of the projection plane can be solved by adjusting the position of an image projected to a distortion region or scaling the size of the image.
Fig. 3d and 3e are diagrams illustrating example shapes of various marks according to various embodiments.
Thus, the mark can be designed similarly to a bar code and can be converted into an on/off digital code by the black and white values at specific points, so that which position the mark corresponds to can be easily identified. According to an embodiment, the spacing of the marks indicating the on/off positions, as shown in fig. 3d and 3e, may also be freely adjusted.
Referring back to fig. 2, the processor 120 may acquire first information indicating positions of a plurality of marks in the test image and second information indicating positions of a plurality of marks in an image of a projection plane (hereinafter referred to as a photographed image) photographed by an external device. The external device may be, for example, the user terminal 200 shown in fig. 1a, and the external device is hereinafter described as the user terminal 200.
According to an embodiment, the processor 120 may obtain first information indicating the positions of the plurality of markers in the original test image projected by the image projector 110.
In addition, the processor 120 may receive the second information from the user terminal 200 or acquire the second information based on the photographed image received from the user terminal 200. For example, the user terminal 200 may directly recognize the positions of the plurality of marks in the photographed image and transmit the second information indicating those positions to the electronic device 100, or the processor 120 may directly acquire the second information from the received photographed image.
For convenience of description, coordinates of an original test image are described in a projector coordinate system, and coordinates in a photographed image are described in a camera coordinate system. Thus, the first information may correspond to a projector coordinate system and the second information may correspond to a camera coordinate system. For convenience of description, the first information and the second information are named as first coordinate information and second coordinate information.
Fig. 4a and 4b are diagrams illustrating example coordinate information according to various embodiments.
Fig. 4a is a diagram showing first coordinate information of a plurality of marks 411, 412, 413, and 414 in an original test image (e.g., projector coordinate system), and the first coordinate information may be P1, P2, P3, and P4. For example, the first coordinate information may be calculated based on a specific point (e.g., a center point) of the plurality of marks 411, 412, 413, and 414.
Fig. 4b is a diagram showing second coordinate information (i.e., camera coordinate system) of the plurality of marks 411, 412, 413, and 414 in the photographed image, and the second coordinate information may be C1, C2, C3, and C4. For example, the second coordinate information may be calculated based on a specific point (the same point as the first coordinate information) of the plurality of marks 411, 412, 413, and 414 (e.g., a center point).
Referring back to fig. 2, the processor 120 may acquire third information indicating the vertex positions of the test image in the photographed image based on the first coordinate information and the second coordinate information. Here, the vertex regions may be the four points where the sides meet. The third information may be coordinate information of the camera coordinate system, and may be named third coordinate information for convenience of description.
According to an embodiment, the processor 120 may acquire fourth information indicating positions of vertices of the test image in the test image based on the first coordinate information and the position information of the markers, and acquire third coordinate information in the photographed image based on the first H matrix. The fourth information may be coordinate information of a projector coordinate system, and may be named fourth coordinate information for convenience of description.
In this case, the first H matrix may be acquired based on a mapping relationship between the first coordinate information and the second coordinate information. According to an embodiment, the processor 120 knows four coordinate pairs based on the first coordinate information P1, P2, P3, and P4 and the second coordinate information C1, C2, C3, and C4, so that the first H matrix can be acquired.
The processor 120 may transform the four vertex coordinates (e.g., the fourth coordinate information) in the projector coordinate system into the camera coordinate system (e.g., the third coordinate information) using the first H matrix. For example, the processor 120 may store in advance the four vertex coordinates of the test image, e.g., the fourth coordinate information of the projector coordinate system, or may calculate the four vertex coordinates, e.g., the fourth coordinate information of the projector coordinate system, based on the first coordinate information.
For example, each of the plurality of markers may be located in the internal region at a predetermined ratio based on four vertices of the test image. In this case, the processor 120 may acquire fourth coordinate information indicating the vertex position of the test image based on the first coordinate information indicating the position of the marker and a predetermined ratio. The fourth coordinate information may correspond to a projector coordinate system.
The processor 120 may acquire the third coordinate information by transforming the four vertex coordinates (fourth coordinate information) from the projector coordinate system into the camera coordinate system using the first H matrix. For example, as shown in fig. 5, the processor 120 may acquire third coordinate information C5, C6, C7, and C8 corresponding to the four vertices 511, 512, 513, and 514 of the projection image in the photographed image, that is, third coordinate information of the camera coordinate system.
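For illustration, the sketch below (Python with OpenCV; values are placeholders, not taken from this disclosure) shows how the four vertex coordinates in the projector coordinate system (fourth coordinate information) could be mapped into the captured image (third coordinate information) using the first H matrix:

```python
import numpy as np
import cv2

# Continuing the earlier sketch: H was estimated from the marker correspondences
# (projector -> camera). The four test-image vertices (fourth coordinate information,
# projector coordinate system) can then be mapped into the captured image to obtain
# the third coordinate information (camera coordinate system). Values are illustrative.
H = np.eye(3)  # placeholder for the first H matrix estimated from the markers
vertices_projector = np.float32([[[0, 0]], [[1920, 0]], [[0, 1080]], [[1920, 1080]]])
vertices_camera = cv2.perspectiveTransform(vertices_projector, H)  # C5..C8 in fig. 5
print(vertices_camera.reshape(-1, 2))
```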
Referring back to fig. 2, even if the processor 120 acquires the third coordinate information indicating the vertex positions of the test image in the photographed image, since it cannot be assumed that the user terminal 200 performed the photographing in a correct posture, correction of the third coordinate information may be required.
Accordingly, the processor 120 may correct the third coordinate information based on the gesture information of the user terminal 200. The gesture information may include at least one of roll information, pitch information, or yaw information. According to an embodiment, roll information and pitch information may be acquired by an acceleration sensor provided in the user terminal 200. The deflection information may be acquired based on view angle information of a camera for photographing a projection plane in the user terminal 200.
In performing the correction, the processor 120 may correct the third coordinate information based on the pose information and on distance information between the user terminal 200 and the projection plane. For example, the processor 120 may rotation-correct the third coordinate information based on the pose information and projection-correct it based on the distance information.
Example rotation correction and projection correction methods are described below.
Fig. 6a and 6b are diagrams illustrating an example acquisition method of roll information and pitch information according to various embodiments.
According to an embodiment, if the Xc, Yc, and Zc axes are defined based on the user terminal 200 as shown in fig. 6a, a roll angle φ rotated around the y axis and a pitch angle θ rotated around the x axis may be defined as follows.
"Eq.1"
"Eq.2"
In equations 1 and 2, AX, AY, and AZ are the X-axis, Y-axis, and Z-axis acceleration values of the acceleration sensor provided in the user terminal 200, respectively. For example, the pitch angle θ may be calculated based on the relationship shown in fig. 6b.
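Since equations 1 and 2 are not reproduced above, the sketch below (Python; not part of this disclosure) only illustrates one common convention for computing tilt angles from a 3-axis accelerometer in the static case; the exact axis convention of equations 1 and 2 may differ:

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """One common convention for tilt from a 3-axis accelerometer (static case).

    The exact formulation of equations 1 and 2 is not reproduced here; this sketch
    only illustrates the kind of computation involved.
    """
    roll = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))   # rotation about the y axis
    pitch = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))  # rotation about the x axis
    return roll, pitch

# Example: device lying flat, gravity entirely on the z axis -> roll = pitch = 0.
print(roll_pitch_from_accel(0.0, 0.0, 9.81))
```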
Fig. 7 is a diagram illustrating an example method of acquiring yaw information, in accordance with various embodiments.
As described above, the pose information related to the direction of gravity, e.g., the roll information and the pitch information, may be acquired using the output values of the acceleration sensor (or gravity sensor), but the yaw information, which is unrelated to the direction of gravity, would normally be acquired using a geomagnetic sensor, a gyro sensor, or the like, based on a direction arbitrarily specified by the user. However, when a gyro sensor or the like is not used, the yaw information may be acquired based on the angle-of-view information of the camera. For example, the processor 120 may acquire the coordinates of the center point of the projection image in the camera coordinate system based on the third coordinate information C5, C6, C7, and C8 corresponding to the four vertices 511, 512, 513, and 514 of the projection image in the photographed image. The processor 120 may acquire the pixel distance value between the center point coordinates of the projected image and the center point coordinates of the photographed image. Thereafter, the processor 120 may obtain the camera rotation angle based on the proportion full view angle : full pixel width = camera rotation angle : pixel distance value. For example, if the entire view angle is 80°, the full pixel width is 4000 px, and the pixel distance value is 500 px, a camera rotation angle of 10° may be obtained from 80° : 4000 px = camera rotation angle : 500 px.
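The proportional relationship above can be written as a one-line computation; the following sketch (Python; the numbers are taken from the worked example above) assumes the linear approximation described:

```python
def yaw_from_view_angle(full_view_angle_deg, full_pixels, pixel_distance):
    """Estimate the camera rotation (yaw) from the offset between the projected-image center
    and the captured-image center, using the proportion
    full view angle : full pixel width = rotation angle : pixel distance (a linear approximation)."""
    return full_view_angle_deg * pixel_distance / full_pixels

# The example from the text: 80 degrees over 4000 px, offset of 500 px -> 10 degrees.
print(yaw_from_view_angle(80.0, 4000, 500))   # 10.0
```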
According to an embodiment, if it is recognized that the projection plane is a predetermined area based on the pose information of the external device 200, the processor 120 may acquire at least one of the roll information, the pitch information, or the yaw information by changing the reference value of the gravitational direction among the output values of the acceleration sensor.
For example, an image may be projected, rotated by 90 degrees, even when the projection plane is a ceiling or a wall plane oriented in the same or a similar direction as the direction of gravity, rather than a general wall plane. In this case, the processor 120 may acquire at least one of the roll information, the pitch information, or the yaw information by changing the reference value of the gravitational direction among the output values of the acceleration sensor. For example, if the reference value of the gravitational direction among the output values of the acceleration sensor is the x-axis value when the projection plane is a general wall plane, then when the projection plane is a ceiling, the reference value of the gravitational direction may be changed to the y-axis value or the z-axis value, and at least one of the roll information, the pitch information, or the yaw information may be acquired accordingly. In this case, if the x-axis value of the gravity direction reference is greater than a threshold value, the processor 120 may determine that the projection plane is a ceiling rather than a general wall plane, or that the image is being projected rotated by 90 degrees even though the projection plane is a wall plane. Therefore, even when the projection plane is a ceiling rather than a general wall plane, or when an image is projected on a wall plane rotated by 90 degrees, calculation errors due to the pose information of the external device 200 can be prevented and/or reduced.
Referring back to fig. 2, according to an embodiment, the processor 120 may acquire distance information between the user terminal 200 and the projection plane.
According to an embodiment, the processor 120 may acquire distance information of a virtual plane in pixels on which the camera image is projected instead of an actual projection plane. Here, the virtual plane in units of pixels may be the pixel coordinate system described in fig. 1 b.
According to an embodiment, when the user terminal 200 is equipped with a distance sensor (e.g., a ToF sensor) and the distance (z-axis value) from the user terminal 200 to each vertex is known, the real-world distance along the z axis can be converted into px units, i.e., the z-axis value can be scaled in px units. Since the distance between points in x-axis and y-axis pixels can be identified from the photographed image, and the corresponding real-world distance can be identified from the ToF sensor, the ratio between a px unit and the real-world distance can be calculated and used to express the z-axis distance in px.
According to another example, when the user terminal 200 does not have a distance sensor and the angle of view information of the camera is known, the distance information may be acquired based on the angle of view information of the lens (sensor). For example, the view angle information of the lens may be acquired from an exchangeable image file format (EXIF).
For example, the real-world ratio of the focal length to the screen diagonal may be fixed according to the view angle, as shown in fig. 8a. The screen diagonal may be acquired based on the number of diagonal pixels, and the distance to the object corresponds to the focal length. In other words, if two points of the imaged object are 1000 px apart in the xy plane, the screen diagonal is 2000 px, and the ratio of the focal length to the screen diagonal is 2:1, then the z-axis distance x from the camera to the xy plane containing the two points satisfies 2:1 = x:2000, and the distance value may be 4000 px. In other words, the xy plane may be 4000 px away from the camera along the z axis.
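A sketch of the same computation follows (Python; the view angle and ratio values are illustrative assumptions, not taken from this disclosure):

```python
import math

def z_distance_px(diagonal_px, focal_to_diagonal_ratio):
    """Estimate the z-axis distance from the camera to the image plane, in pixel units,
    from the fixed ratio between focal length and screen diagonal implied by the view angle.
    Follows the worked example above (2:1 ratio, 2000 px diagonal -> 4000 px)."""
    return focal_to_diagonal_ratio * diagonal_px

# A diagonal view angle of about 28 degrees gives roughly a 2:1 ratio, since
# focal / diagonal = 1 / (2 * tan(fov / 2)); the values here are only illustrative.
fov_diag_deg = 28.0
ratio = 1.0 / (2.0 * math.tan(math.radians(fov_diag_deg) / 2.0))
print(z_distance_px(2000, 2.0))      # 4000.0, matching the example in the text
print(z_distance_px(2000, ratio))    # the same idea with the ratio derived from the view angle
```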
According to another example, when the view angle is not known at all because the camera has no ToF sensor and no information such as focal length is available, the calculation may be performed, allowing for some error, by assuming a value of about 75 degrees, which is the field of view of a lens typically used in user terminals. According to an embodiment, information about the camera may be received through an external server. For example, a camera manufacturer or a keystone correction service provider may store information about the camera in a cloud server or the like. In this case, the electronic apparatus 100 may receive the view angle information of the camera from the external server. For example, the view angle information of the camera may include information such as sensor size, focal length, and the like. As shown in fig. 8b, the focal length and the view angle of the camera may be inversely proportional. In other words, the shorter the focal length, the wider the field of view, and the longer the focal length, the narrower the field of view.
Referring back to fig. 2, when acquiring the pose information of the user terminal 200 and the distance information between the user terminal 200 and the projection plane, the processor 120 may correct the third coordinate information based on the acquired information.
For example, the processor 120 may rotation-correct the third coordinate information based on the pose information of the user terminal 200, projection-correct the rotation-corrected third coordinate information based on the distance information, and thereby acquire the finally corrected third coordinate information.
The photographed image gives the coordinates of the projection plane in the camera coordinate system, but how the projector's projection plane is actually positioned in three dimensions is not known, so 3D rotation correction is necessary. (If a ToF sensor were present, this could be measured, but here it is assumed that no ToF sensor is used.) In addition, a virtual-plane method may be used, assuming that the corrected projection plane is not tilted and is perpendicular to gravity from the user's viewpoint. For example, assume that four virtual points a1, a2, a3, and a4 are generated whose Z-axis values are all the same. In this case, the pose information (i.e., the inverse of the roll, pitch, and yaw values) may be applied as correction values to acquire points b1, b2, b3, and b4 on the plane that has the tilt relationship with the camera imaging plane. A transformation formula from the plane including points b1, b2, b3, and b4 to the plane including points a1, a2, a3, and a4 is then obtained. Specifically, a transformation formula for rotation transformation such as the following equation 3 may be obtained.
"Eq.3"
Based on equation 3, the processor 120 may rotation-correct the third coordinate information and acquire the rotation-corrected third coordinate information, that is, the coordinates of the points in three dimensions.
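As an illustration of the rotation correction, the sketch below (Python with NumPy; the rotation composition order and the sample values are assumptions, not taken from this disclosure) applies the inverse of the measured roll/pitch/yaw to 3D points in the camera coordinate system:

```python
import numpy as np

def rotation_matrix(roll_deg, pitch_deg, yaw_deg):
    """Compose a rotation from roll (about y), pitch (about x), and yaw (about z);
    the composition order is an assumption made for this illustration."""
    r, p, y = np.radians([roll_deg, pitch_deg, yaw_deg])
    Ry = np.array([[np.cos(r), 0, np.sin(r)], [0, 1, 0], [-np.sin(r), 0, np.cos(r)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

# Applying the inverse of the measured pose (e.g., roll 3 deg, pitch -5 deg, yaw 2 deg)
# to 3D points in the camera coordinate system yields the rotation-corrected points.
R_inv = rotation_matrix(3.0, -5.0, 2.0).T        # transpose = inverse for a rotation matrix
points_3d = np.array([[100.0, 50.0, 4000.0],     # illustrative camera-space points (px units)
                      [-80.0, 60.0, 4100.0]])
corrected = points_3d @ R_inv.T                  # rotate each point
print(corrected)
```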
The processor 120 may then calculate how the 3D coordinates acquired through the above-described rotation correction are projected onto the actual camera imaging plane. In other words, the processor 120 may projection-correct the 3D coordinates acquired through the rotation correction and acquire the finally corrected third coordinate information. Referring to fig. 1a, points in 3D on the camera coordinate system may be projected onto the imaging plane along a virtual vanishing-point line passing through the camera sensor. Thus, the processor 120 may calculate how points in the 3D camera coordinate system are projected onto the 2D imaging plane, e.g., a two-dimensional coordinate system.
For example, if the projection reference point is set as the origin, a transformation formula for projecting the point P in 3D to P' may be given as shown in equation 4.
"Eq.4"
According to equation 4, when the projection plane is Zc = d, the homogeneous point (Xc, Yc, Zc, 1) may be projected as (Xc, Yc, Zc/d), which corresponds to (d×Xc/Zc, d×Yc/Zc, 1).
As described above, the processor 120 may projection-correct the rotation-corrected third coordinate information and acquire the finally corrected third coordinate information.
However, in the above example, the projection correction is performed after the rotation correction, but the rotation correction may be performed after the projection correction.
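For illustration, the projection correction of equation 4 can be sketched as follows (Python with NumPy; the distance d and the sample point are illustrative values, not taken from this disclosure):

```python
import numpy as np

def project_to_plane(points_3d, d):
    """Project rotation-corrected 3D camera-space points onto the imaging plane Zc = d,
    following the relation in the text: (Xc, Yc, Zc) -> (d*Xc/Zc, d*Yc/Zc).
    points_3d is an (N, 3) array and d is in the same (pixel) units."""
    pts = np.asarray(points_3d, dtype=float)
    scale = d / pts[:, 2]
    return np.stack([pts[:, 0] * scale, pts[:, 1] * scale], axis=1)

# Illustrative values: a point at Zc = 4000 px projected onto the plane Zc = 2000 px.
print(project_to_plane([[100.0, 50.0, 4000.0]], 2000.0))   # [[50. 25.]]
```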
Referring back to fig. 2, the processor 120 may acquire a transformation matrix, i.e., a second H matrix, based on the finally corrected third coordinate information and the vertex coordinates of the test image. Here, the finally corrected third coordinate information and the vertex coordinates of the test image may each be normal coordinate system coordinates (or pixel coordinate system coordinates). For example, if the finally corrected third coordinate information, i.e., the four vertex coordinates, are d1, d2, d3, and d4, and the four vertex coordinates of the actual projection of the test image are e1, e2, e3, and e4, the second H matrix may be acquired based on the pairs (d1, e1), (d2, e2), (d3, e3), and (d4, e4). For example, in the case of an FHD-resolution projector, e1, e2, e3, and e4 may be (0, 0), (1920, 0), (0, 1080), and (1920, 1080).
The processor 120 may further identify a rectangular region of a maximum size corresponding to an aspect ratio of the input image within the identified region based on the corrected third coordinate information, and acquire fifth information corresponding to the identified rectangular region. For example, the fifth information may include coordinate information of each vertex of the identified rectangular region, and will be named as fifth coordinate information below for convenience of description.
In this case, the processor 120 may expand a rectangle vertically and horizontally by the same amount from the center point at which the diagonals connecting the vertices of the first region, acquired based on the corrected third coordinate information, meet, and identify whether a vertex of the rectangle intersects a side of the first region. In addition, when a vertex of the expanded rectangle intersects a side of the first region, the processor 120 may expand the smaller side of the rectangle in predetermined pixel units and expand the larger side of the rectangle correspondingly so as to maintain the aspect ratio. The processor 120 may identify the rectangular region of the maximum size at the position where the vertices corresponding to a diagonal of the expanded rectangle meet the sides of the first region.
Fig. 9 is a diagram illustrating an example method for identifying a maximum sized rectangular region in accordance with various embodiments.
As shown in fig. 9, when corrected third coordinate information d1, d2, d3, and d4 (i.e., vertices 911, 912, 913, and 914) is acquired, the rectangle may be expanded with a center point 920 that meets each vertex diagonal line as a starting point.
According to an embodiment, the rectangle is expanded upward, downward, leftward, and rightward from the start point 920 by the same amount, and it is identified whether any portion extends beyond the projector projection plane 910. When no portion extends beyond the projection plane 910, the rectangle may be expanded by a predetermined ratio (e.g., 5%) of the screen, and it is identified whether the vertices of the expanded rectangle intersect the edges of the projector projection plane.
When any one of the vertices 931, 932, 933, and 934 of the rectangle 930 meets a side of the projection plane 910, the smaller side of the rectangle 930 may be expanded in predetermined pixel units (e.g., 1 px), and the larger side of the rectangle 930 may be expanded according to the aspect ratio. For example, if one vertex of the rectangle 930 (upper-left, upper-right, lower-left, or lower-right) meets a side of the projection plane 910, the rectangle is moved by 1 px toward the opposite side, it is checked whether a vertex meets a side of the projection plane 910, and the size is expanded by 1 px to check whether a contact point exists. If the vertices on a diagonal of the expanded rectangle 930 (upper-left, upper-right, lower-left, or lower-right) do not meet the sides, the rectangle is moved by 1 px in the opposite vertical and horizontal directions, and whether a meeting point exists is identified while expanding the size by 1 px.
When the vertices 942 and 943 lying on a diagonal of the expanded rectangle 930 meet the boundary line of the projection plane 910, the expansion may end, and the coordinates g1, g2, g3, and g4 of the final vertices 941, 942, 943, and 944 may be acquired. Furthermore, when the rectangle returns to a size and position that has already occurred under the same conditions, the process may terminate, so as to prevent and/or reduce the rectangle from moving indefinitely.
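A simplified sketch of the expansion idea described with reference to fig. 9 is shown below (Python with NumPy; the shift-and-retry heuristic for the case where only one vertex touches an edge is not reproduced, and the quadrilateral values are illustrative):

```python
import numpy as np

def inside_convex(pt, poly):
    """True if pt lies inside the convex polygon poly (vertices given in a consistent order)."""
    signs = []
    for i in range(len(poly)):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % len(poly)]
        cross = (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax)
        signs.append(cross >= 0)
    return all(signs) or not any(signs)

def max_rect_in_quad(quad, aspect, step=1.0):
    """Grow an axis-aligned rectangle with the given aspect ratio (width/height) from the
    intersection of the quadrilateral's diagonals until a vertex leaves the region.
    A simplified sketch of the expansion idea; the full move-and-expand heuristic in the
    text is not reproduced here."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = quad
    A = np.array([[x3 - x1, -(x4 - x2)], [y3 - y1, -(y4 - y2)]], dtype=float)
    t, _ = np.linalg.solve(A, np.array([x2 - x1, y2 - y1], dtype=float))
    cx, cy = x1 + t * (x3 - x1), y1 + t * (y3 - y1)   # center point (920 in fig. 9)

    h = step
    while True:
        w = h * aspect
        corners = [(cx - w / 2, cy - h / 2), (cx + w / 2, cy - h / 2),
                   (cx + w / 2, cy + h / 2), (cx - w / 2, cy + h / 2)]
        if not all(inside_convex(c, quad) for c in corners):
            h -= step                                  # last size that was fully inside
            return cx, cy, h * aspect, h
        h += step

# Illustrative corrected vertex coordinates d1..d4 (camera coordinate system) and a 16:9 input.
quad = [(120, 80), (1800, 140), (1760, 1000), (100, 950)]
print(max_rect_in_quad(quad, 16 / 9))
```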
Referring back to fig. 2, the processor 120 may perform keystone correction by applying the inverse matrix of the second H matrix to the acquired fifth coordinate information. For example, if the coordinate information of the vertices corresponding to the maximum rectangle is g1, g2, g3, and g4, the inverse matrix of the second H matrix may be applied to acquire the coordinates of the area to be projected by the electronic device 100. In other words, when the electronic device 100 projects an image based on these coordinates, the user can view a rectangular area of the maximum size.
However, in the above example, the vertex coordinates of the projection image are described as being corrected based on the pose information of the camera, but the marker coordinates may also be corrected based on the pose information of the camera. In this case, after the correction of the marker coordinates, the vertex coordinates of the projection image may be acquired based on the corrected marker coordinates. In other words, when the marker coordinates are corrected based on the pose information of the camera, it may not be necessary to correct the vertex coordinates based on the pose information of the camera.
Fig. 10 is a diagram illustrating an example projected image according to keystone correction in accordance with various embodiments.
In fig. 10, the vertices 941, 942, 943, and 944 correspond to the fifth coordinate information, and the area identified by these vertices may refer, for example, to the rectangular area of the maximum size acquired in fig. 9. In this case, the processor 120 may apply the inverse of the second H matrix to the fifth coordinate information and determine the coordinates of the image to be projected. In other words, the processor 120 may apply the inverse of the second H matrix to the coordinates of the vertices 941, 942, 943, and 944, and determine 951, 952, 953, and 954 as the vertex coordinates of the keystone-corrected image. In this case, since the processor 120 projects an image based on the vertices 951, 952, 953, and 954, the distorted image 950 is projected, but the user views the rectangular image 960.
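For illustration, the final mapping described with reference to fig. 10 can be sketched as follows (Python with OpenCV; all coordinates are placeholders, not taken from this disclosure):

```python
import numpy as np
import cv2

# Sketch of the final step: the second H matrix maps corrected camera-space vertices (d1..d4)
# to the projector's vertex coordinates (e1..e4), so applying its inverse to the largest
# rectangle (fifth coordinate information, g1..g4) gives the coordinates the electronic
# device should actually render to. Point order must correspond between d and e.
d = np.float32([[120, 80], [1800, 140], [1760, 1000], [100, 950]])     # corrected third info
e = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])           # FHD projector vertices
H2 = cv2.getPerspectiveTransform(d, e)                                  # second H matrix

g = np.float32([[[300, 200]], [[1600, 200]], [[1600, 931]], [[300, 931]]])  # g1..g4 (approx. 16:9)
projector_coords = cv2.perspectiveTransform(g, np.linalg.inv(H2))       # keystone-corrected vertices
print(projector_coords.reshape(-1, 2))
```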
Fig. 11 is a block diagram illustrating an example configuration of an electronic device according to various embodiments.
According to fig. 11, an electronic device 100' includes an image projector 110, a processor (e.g., including processing circuitry) 120, a communication interface (e.g., including communication circuitry) 130, a user interface 140, a memory 150, and a sensor 160.
The image projector 110 can enlarge or reduce an image according to a distance from a projection plane (projection distance). In other words, the zoom function may be performed according to the distance from the projection plane. In this case, the zoom function may include a hardware method for adjusting the screen size by moving the lens, and a software method for adjusting the screen size by cropping the image. If the zoom function is performed, the focus of the image needs to be adjusted. For example, methods of adjusting the focus may include manual focusing methods, motorized methods, and the like.
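A minimal sketch of the crop-based software zoom mentioned above, assuming the frame is a numpy image array; center-cropping and scaling back to the panel resolution is one common way to realize such a zoom, not necessarily the method used by the image projector 110.

```python
import cv2

def software_zoom(frame, zoom):
    # Crop the central 1/zoom portion of the frame and scale it back to the
    # original size, enlarging the displayed content without moving any lens.
    h, w = frame.shape[:2]
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```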
In addition, the image projector 110 may provide zoom/keystone/focus functions by automatically analyzing the surrounding and projection environments without user input. For example, the image projector 110 may automatically provide the zoom/keystone/focus functions based on the distance between the electronic device 100 and the projection plane sensed by a sensor (a depth camera, a distance sensor, an infrared sensor, an illuminance sensor, etc.), information about the space in which the electronic device 100 is currently located, information about the amount of ambient light, and the like.
According to an example embodiment of the electronic device 100', at least one communication interface 130 (hereinafter referred to as the communication interface) may be implemented as various interfaces. For example, the communication interface 130 may include various communication circuits and communicate with an external device (e.g., the user terminal 200), an external storage medium (e.g., a USB memory), or an external server (e.g., a web hard drive) through a communication method such as AP-based Wi-Fi (wireless LAN network), Bluetooth, Zigbee, wired/wireless Local Area Network (LAN), Wide Area Network (WAN), Ethernet, IEEE 1394, High Definition Multimedia Interface (HDMI), Universal Serial Bus (USB), Mobile High-Definition Link (MHL), Audio Engineering Society/European Broadcasting Union (AES/EBU), optical, coaxial, and the like.
The user interface 140 may be implemented, for example and without limitation, as a button, a touchpad, a mouse, a keyboard, or the like, or may be implemented as a touch screen capable of performing both the display function and the manipulation input function described above, or as a remote control transceiver. The remote control transceiver may receive a remote control signal from an external remote control device or transmit a remote control signal using at least one communication method among infrared communication, Bluetooth communication, or Wi-Fi communication.
The memory 150 may store data required by various embodiments of the present disclosure. Depending on the purpose of data storage, the memory 150 may be implemented in the form of a memory embedded in the electronic device 100', or may be implemented in the form of a memory detachable from the electronic device 100'. For example, data for driving the electronic device 100' may be stored in a memory embedded in the electronic device 100', and data for expanding functions of the electronic device 100' may be stored in a memory detachable from the electronic device 100'. Meanwhile, the memory embedded in the electronic device 100' may be implemented as a volatile memory (e.g., at least one of a Dynamic RAM (DRAM), a Static RAM (SRAM), or a Synchronous Dynamic RAM (SDRAM)), a nonvolatile memory (e.g., a one-time programmable ROM (OTPROM), a Programmable ROM (PROM), an Erasable Programmable ROM (EPROM), an Electrically Erasable Programmable ROM (EEPROM), a mask ROM, a flash memory (e.g., a NAND flash memory, a NOR flash memory, or the like), a hard disk drive, a Solid State Drive (SSD), or the like), an external memory (e.g., a USB memory) that may be connected to a USB port, or the like.
The sensor 160 may include various types of sensors such as, but not limited to, an acceleration sensor, a distance sensor, and the like.
According to an example embodiment, the electronic device 100' may additionally include a speaker, a tuner, and a demodulation unit. The tuner (not shown) may receive a Radio Frequency (RF) broadcast signal by tuning a channel selected by a user, or all pre-stored channels, among RF broadcast signals received through an antenna. The demodulation unit (not shown) may receive and demodulate the digital IF signal (DIF) converted by the tuner, and perform channel decoding and the like. According to an embodiment, an input image received through the tuner may be processed by the demodulation unit (not shown) and provided to the processor 120 for tone mapping processing according to an embodiment of the present disclosure.
Fig. 12 is a block diagram illustrating an example configuration of a user terminal according to various embodiments.
According to fig. 12, a user terminal 200 may include a camera 210, a sensor 220, a display 230, a memory 240, a communication interface (e.g., including communication circuitry) 250, a user interface 260, and a processor (e.g., including processing circuitry) 270.
The camera 210 may be turned on and photographing may be performed according to a predetermined event. The camera 210 may convert a photographed image into an electrical signal and generate image data based on the converted signal. For example, a subject may be converted into an electrical image signal by, for example and without limitation, a charge-coupled device (CCD), and the converted image signal may be amplified, converted into a digital signal, and then signal-processed.
According to an embodiment, the camera 210 may acquire a photographed image by photographing a projection plane of the projected image.
The sensor 220 may include at least one acceleration sensor (or gravity sensor). For example, the acceleration sensor may be a three-axis acceleration sensor. The three axis acceleration sensor measures the gravitational acceleration of each axis and provides raw data to the processor 270.
In addition, according to an embodiment, the sensor 220 may further include at least one of a distance sensor, a geomagnetic sensor, or a gyro sensor. The distance sensor is an element for sensing the distance to the projection plane. The distance sensor may be implemented in various types, such as an ultrasonic sensor, an infrared sensor, a LIDAR sensor, a radar sensor, a photodiode sensor, and the like. A geomagnetic sensor or a gyro sensor may be used to acquire yaw information.
The display 230 may be implemented as a display including a self-light-emitting element or a display including a non-light-emitting element and a backlight. For example, the display 230 may be implemented as various types of displays, such as, but not limited to, a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, Light Emitting Diodes (LED), micro LED, mini LED, a Plasma Display Panel (PDP), a Quantum Dot (QD) display, Quantum Dot Light Emitting Diodes (QLED), and the like. The display 230 may also include a driving circuit, a backlight unit, and the like, which may be implemented in the form of an a-si TFT, a Low Temperature Polysilicon (LTPS) TFT, an organic TFT (OTFT), or the like. Meanwhile, the display 230 may be implemented as a touch screen combined with a touch sensor, a flexible display, a rollable display, a three-dimensional (3D) display, a display in which a plurality of display modules are physically connected, or the like. In addition, when the display 230 is equipped with a touch screen, it may be implemented to execute programs using a finger or a pen (e.g., a stylus).
The memory 240, the communication interface 250, and the user interface 260 are similar to the memory 150, the communication interface 130, and the user interface 140, respectively, and thus a detailed description thereof may not be repeated.
According to an embodiment, the processor 270 may include various processing circuits. When an image of the projection plane is acquired by photographing with the camera 210, the processor 270 may acquire second coordinate information indicating the position of each of the plurality of marks in the photographed image, and transmit the acquired second coordinate information to the electronic device 100.
According to an embodiment, the processor 270 may control the communication interface 250 to transmit the photographed image to the electronic device 100 without analyzing the photographed image. In this case, the electronic apparatus 100 may acquire second coordinate information indicating a position of each of the plurality of marks in the photographed image.
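A minimal sketch of extracting the second coordinate information on the user terminal, under the assumption that ArUco markers are used as a stand-in for the marker pattern of the disclosure; the exact ArUco API names vary slightly between OpenCV versions, and the example assumes that at least one marker is detected.

```python
import cv2

# Assumes opencv-contrib with the aruco module; newer OpenCV versions expose the
# same functionality through cv2.aruco.ArucoDetector instead of detectMarkers.
image = cv2.imread("captured_projection_plane.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

# One representative point (the marker center) per detected marker id.
second_info = {int(marker_id): c.reshape(4, 2).mean(axis=0)
               for marker_id, c in zip(ids.flatten(), corners)}
```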
However, according to an embodiment, the processor 270 may perform some of the above-described various processes performed in the electronic apparatus 100 and transmit the result to the electronic apparatus 100.
Fig. 13 is a flowchart illustrating an example method of controlling an electronic device according to various embodiments.
According to the control method described in the flowchart shown in fig. 13, a test image including a plurality of marks is projected onto a projection plane (S1310).
First information indicating the position of each of the plurality of marks in the test image and second information indicating the positions of the plurality of marks in an image of the projection plane photographed by an external device may be identified (S1320).
Third information indicating the positions of the vertices of the test image in the photographed image may be acquired based on the first information and the second information (S1330).
The third information may be corrected based on the posture information of the external device, and the keystone correction may be performed based on the corrected third information (S1340).
Further, in operation S1340, the third information may be rotation-corrected based on the posture information of the external device, the rotation-corrected third information may then be projection-corrected, and the corrected third information may thereby be acquired.
The pose information of the external device may include at least one of roll information, pitch information, or yaw information, and the roll information and the pitch information may be acquired by an acceleration sensor provided in the external device, and the yaw information may be acquired based on view angle information of a camera for photographing a projection plane in the external device.
In operation S1340, if the projection plane is identified as a predetermined area based on the posture information of the external device, at least one of roll information, pitch information, or yaw information may be acquired by changing a reference value of the gravitational direction in the output values of the acceleration sensor, and the third information may be corrected based on the acquired information.
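A minimal sketch, assuming raw three-axis accelerometer values measured in the gravity field and a pinhole-style focal length, of how roll and pitch could be derived and combined with yaw into a rotation applied to 2D coordinates such as the third information; the axis conventions, Euler order, focal length, and sample values are illustrative assumptions, not the exact computation of the disclosure.

```python
import numpy as np

def roll_pitch_from_accel(ax, ay, az):
    # Standard tilt estimation from gravity measured on three axes (radians);
    # the axis convention is an assumption.
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

def rotation_matrix(roll, pitch, yaw):
    # Z-Y-X Euler composition; the rotation order is an assumption.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rotate_points(points_2d, R, focal=1000.0):
    # Rotation correction of 2D coordinates (e.g., the third information):
    # each pixel is treated as a ray at an assumed focal length and re-projected.
    rays = np.hstack([points_2d, np.full((len(points_2d), 1), focal)])
    rotated = (R @ rays.T).T
    return focal * rotated[:, :2] / rotated[:, 2:3]

# Example: raw accelerometer output (assumed values), yaw taken as 0 for brevity.
roll, pitch = roll_pitch_from_accel(0.12, -0.35, 9.74)
corrected = rotate_points(np.array([[320.0, 240.0], [960.0, 540.0]]),
                          rotation_matrix(roll, pitch, 0.0))
```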
In this case, each of the plurality of marks may be in the form of a pattern including a black area and a white area having a predetermined ratio in each of the plurality of directions.
In addition, each of the plurality of markers may be located in the inner region at a predetermined ratio with respect to four vertices of the test image. In this case, fourth information indicating the vertex position of the test image may be acquired based on the first information and the predetermined ratio, and third information may be acquired from the photographed image based on the fourth information and the first transformation matrix in operation S1330. In this case, the first transformation matrix may be acquired based on a mapping relationship between the first information and the second information.
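A minimal sketch of operation S1330 under the assumptions that the first and second information are available as already-matched point arrays and that the first transformation matrix is a planar homography estimated with OpenCV; all coordinate values are illustrative.

```python
import cv2
import numpy as np

# first_info: marker positions in the test image; second_info: the same markers in
# the photographed image (assumed to be already matched in the same order).
first_info = np.array([[200, 150], [1720, 150], [1720, 930], [200, 930]], dtype=np.float32)
second_info = np.array([[310, 240], [1540, 265], [1500, 880], [330, 860]], dtype=np.float32)

# First transformation matrix: a planar homography estimated from the correspondences.
H1, _ = cv2.findHomography(first_info, second_info)

# fourth_info: test-image vertex positions recovered from the markers and the
# predetermined ratio (here simply the full-resolution corners, as an assumption).
fourth_info = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=np.float32)

# third_info: where those vertices land in the photographed image.
third_info = cv2.perspectiveTransform(fourth_info.reshape(-1, 1, 2), H1).reshape(-1, 2)
```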
In addition, in operation S1340, a second transformation matrix may be acquired based on the corrected third information and the vertex coordinate information of the test image, a rectangular area of a maximum size corresponding to the aspect ratio of the input image may be identified in the area identified based on the corrected third information, and keystone correction may be performed.
In operation S1340, keystone correction may be performed by acquiring fifth information corresponding to the identified rectangular region and applying an inverse matrix of the second transformation matrix to the acquired fifth information.
Operation S1340 may include: expanding a rectangle from the center point at which the diagonals connecting the vertices of the first region, acquired based on the corrected third information, meet; identifying whether a vertex of the expanded rectangle meets an edge of the first region; based on a vertex of the rectangle meeting an edge of the first region, expanding the smaller side of the rectangle in predetermined pixel units and expanding the larger side of the rectangle to correspond to the aspect ratio; and identifying a rectangular area having the maximum size at the position where the vertices corresponding to a diagonal of the rectangle meet the edges of the first region.
Operation S1320 may include receiving second information indicating a position of each of the plurality of marks from the external device or acquiring the second information indicating the position of each of the plurality of marks based on the photographed image received from the external device.
The plurality of markers may be located in a plurality of different regions in the test image for distortion correction of the projection plane and may have different shapes to identify a location corresponding to each of the plurality of markers.
According to the various embodiments described above, when keystone correction is performed based on an image captured by an external device, accurate keystone correction can be performed by reflecting pose information of the external device. Therefore, user convenience can be improved.
The method according to the above-described example embodiments may be implemented as software or an application that may be installed in an existing electronic device. The methods according to various embodiments of the present disclosure may be performed using deep learning-based artificial neural networks (or deep artificial neural networks) (i.e., learning network models).
Furthermore, the method according to the above-described example embodiments may be implemented by upgrading software or hardware of an existing electronic device.
In addition, the various example embodiments described above may be performed by an embedded server provided in the electronic apparatus or a server external to the electronic apparatus.
The various example embodiments described above may be implemented as an S/W program including instructions stored in a machine-readable (e.g., computer-readable) storage medium. The machine is a device capable of calling stored instructions from the storage medium and operating according to the called instructions, and may include the electronic device according to the above-described example embodiments. When the instructions are executed by a processor, the processor may perform functions corresponding to the instructions directly, or other components may perform the functions under the control of the processor. The instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. In this context, a "non-transitory" storage medium is tangible and may not include a signal, and the term does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporarily stored in the storage medium.
According to an example embodiment, the methods according to the various example embodiments described above may be provided as included in a computer program product. The computer program product may be traded between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)) or distributed online through an application store (e.g., Play Store™). In the case of online distribution, at least a portion of the computer program product may be at least temporarily stored or temporarily generated in a server of the manufacturer, a server of the application store, or a storage medium such as a memory.
Various components (e.g., modules or programs) according to various example embodiments may include a single entity or multiple entities, and some of the corresponding sub-components described above may be omitted, or another sub-component may be further added to various example embodiments. Alternatively or additionally, some components (e.g., modules or programs) may be combined to form a single entity that performs the same or similar functions as corresponding elements before being combined. According to various example embodiments, operations performed by modules, programs, or other components may be sequential, parallel, or both, performed iteratively or heuristically, or at least some operations may be performed in a different order, omitted, or other operations may be added.
While the present disclosure has been illustrated and described with reference to various exemplary embodiments, it is to be understood that the various exemplary embodiments are intended to be illustrative, and not limiting. It will be further understood by those skilled in the art that various changes in form and details may be made therein without departing from the true spirit and full scope of the disclosure, including the appended claims and their equivalents. It should also be understood that any of the embodiments described herein may be used in combination with any of the other embodiments described herein.

Claims (15)

1. An electronic device, comprising:
an image projector; and
a processor configured to: the image projector is controlled to project a test image comprising a plurality of marks onto a projection plane,
based on first information indicating positions of the plurality of marks in the test image and second information indicating positions of the plurality of marks in an image of a projection plane photographed by an external device, third information indicating positions of vertices of the test image in the photographed image is acquired,
the third information is corrected based on the posture information of the external device, and
keystone correction is performed based on the corrected third information.
2. The device according to claim 1,
wherein the processor is configured to: correcting the third information to rotate based on the posture information of the external device, and
further correcting the rotation-corrected third information to project based on the distance information between the external device and the projection plane, and acquiring the corrected third information.
3. The device according to claim 2,
wherein the posture information of the external device includes at least one of roll information, pitch information, or yaw information,
wherein the roll information and the pitch information are acquired by an acceleration sensor provided in an external device, and
wherein the yaw information is acquired based on angle-of-view information of a camera used to photograph the projection plane in the external device.
4. A device according to claim 3,
wherein the processor is configured to: based on the projection plane being identified as a specified area based on the posture information of the external device, acquiring at least one of the roll information, the pitch information, or the yaw information by changing a reference value of a gravitational direction in output values of the acceleration sensor, and
correcting the third information based on the acquired information.
5. The device according to claim 1,
Wherein each of the plurality of marks is in the form of a pattern including a black area and a white area having a specified ratio in each of a plurality of directions.
6. The device according to claim 1,
wherein each of the plurality of markers is located in the interior region at a specified ratio based on four vertices of the test image,
wherein the processor is configured to: acquiring fourth information indicating a vertex position of the test image based on the first information and the specified ratio, and
third information is acquired from the photographed image based on the fourth information and the first transformation matrix,
wherein the first transformation matrix is obtained based on a mapping relation between the first information and the second information.
7. The device according to claim 1,
wherein the processor is configured to: acquiring a second transformation matrix based on the corrected third information and vertex coordinate information of the test image, and
performing keystone correction by identifying a rectangular area having a maximum size corresponding to an aspect ratio of an input image in an area identified based on the corrected third information.
8. The device according to claim 7,
wherein the processor is configured to: acquiring fifth information corresponding to the identified rectangular region, and
performing keystone correction by applying an inverse matrix of the second transformation matrix to the acquired fifth information.
9. The device according to claim 7,
wherein the processor is configured to: expanding a rectangle from a center point at which diagonals of the first region acquired based on the corrected third information meet, and identifying whether a vertex of the expanded rectangle meets an edge of the first region,
expanding a smaller side of the rectangle in specified pixel units and expanding a larger side of the rectangle to correspond to the aspect ratio, based on a vertex of the rectangle meeting an edge of the first region, and
identifying a rectangular region having a maximum size at a position where vertices corresponding to a diagonal of the rectangle meet edges of the first region.
10. The apparatus of claim 1, further comprising:
a communication interface including a communication circuit;
wherein the processor is configured to: receiving second information indicating the positions of the plurality of marks from the external device through the communication interface, or
acquiring second information indicating the positions of the plurality of marks based on a photographed image received from the external device through the communication interface.
11. The device according to claim 1,
Wherein the plurality of markers are located in a plurality of different areas in the test image, an
The plurality of marks have different shapes.
12. A method for controlling an electronic device, comprising:
projecting a test image comprising a plurality of markers onto a projection plane;
acquiring third information indicating positions of vertices of the test image in the photographed image based on the first information indicating positions of the plurality of marks in the test image and the second information indicating positions of the plurality of marks in the image of the projection plane photographed by the external device;
correcting the third information based on the posture information of the external device; and
keystone correction is performed based on the corrected third information.
13. The method of claim 12,
wherein correcting the third information comprises: correcting the third information to rotate based on the posture information of the external device, and
further correcting the rotation-corrected third information to project based on distance information between the external device and the projection plane, and acquiring the corrected third information.
14. The method according to claim 13,
wherein the posture information of the external device includes at least one of roll information, pitch information, or yaw information,
Wherein the roll information and the pitch information are acquired by an acceleration sensor provided in an external device, and
wherein the yaw information is acquired based on angle-of-view information of a camera used to photograph the projection plane in the external device.
15. The method according to claim 14,
wherein performing keystone correction comprises: based on the projection plane being identified as a specified area based on the posture information of the external device, acquiring at least one of the roll information, the pitch information, or the yaw information by changing a reference value of a gravitational direction in output values of the acceleration sensor, and
correcting the third information based on the acquired information.
CN202280039016.6A 2021-06-01 2022-02-22 Electronic device and control method thereof Pending CN117413511A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2021-0070793 2021-06-01
KR10-2021-0136104 2021-10-13
KR1020210136104A KR20220162595A (en) 2021-06-01 2021-10-13 Electronic apparatus and control method thereof
PCT/KR2022/002569 WO2022255594A1 (en) 2021-06-01 2022-02-22 Electronic apparatus and control method thereof

Publications (1)

Publication Number Publication Date
CN117413511A true CN117413511A (en) 2024-01-16

Family

ID=89496631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280039016.6A Pending CN117413511A (en) 2021-06-01 2022-02-22 Electronic device and control method thereof

Country Status (1)

Country Link
CN (1) CN117413511A (en)

Similar Documents

Publication Publication Date Title
CN107660337B (en) System and method for generating a combined view from a fisheye camera
CN110809786B (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
JP4435145B2 (en) Method and apparatus for providing panoramic image by calibrating geometric information
CN111935465B (en) Projection system, projection device and correction method of display image thereof
CN106846410B (en) Driving environment imaging method and device based on three dimensions
US20140267593A1 (en) Method for processing image and electronic device thereof
US9398278B2 (en) Graphical display system with adaptive keystone mechanism and method of operation thereof
EP3547260B1 (en) System and method for automatic calibration of image devices
US10855916B2 (en) Image processing apparatus, image capturing system, image processing method, and recording medium
CN112399158B (en) Projection image calibration method and device and projection equipment
CN111694528B (en) Typesetting identification method of display wall and electronic device using same
CN101656858A (en) Projection display apparatus and display method
US9691357B2 (en) Information processing method and electronic device thereof, image calibration method and apparatus, and electronic device thereof
JP2014192808A (en) Projection apparatus and program
US20240015272A1 (en) Electronic apparatus and control method thereof
CN114449249B (en) Image projection method, image projection device, storage medium and projection apparatus
CN114286068B (en) Focusing method, focusing device, storage medium and projection equipment
EP4283986A1 (en) Electronic apparatus and control method thereof
CN114286066A (en) Projection correction method, projection correction device, storage medium and projection equipment
JP4199641B2 (en) Projector device
KR20220162595A (en) Electronic apparatus and control method thereof
CN117413511A (en) Electronic device and control method thereof
JP7223072B2 (en) Roadside sensing method, roadside sensing device, electronic device, storage medium, roadside equipment, and program
CN114339179A (en) Projection correction method, projection correction device, storage medium and projection equipment
KR101920323B1 (en) System and method for generating logical display apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination