CN117812234A - Display apparatus and projection screen correction method - Google Patents
- Publication number: CN117812234A
- Application number: CN202311451371.7A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3185—Geometric adjustment, e.g. keystone or convergence
Abstract
Embodiments of the present application provide a display device and a projection screen correction method, relating to the field of computer technology. The display device includes: a display configured to display a first image used to correct the projection screen; and a controller configured to: perform a first correction on an acquired second image to generate correction data, wherein the second image is obtained by a target user photographing the first image; convert the correction data into first coordinate data and control the display to present the projection screen in a first display area determined based on the first coordinate data; and acquire second coordinate data generated by performing a second correction on the first display area, and control the display to present the projection screen in a second display area determined based on the second coordinate data. The method and the device are used to solve the problems of low projection-screen adjustment efficiency and poor user operating experience.
Description
Technical Field
Embodiments of the present application relate to the field of computer technology, and more particularly, to a display apparatus and a projection screen correction method.
Background
With the development of laser televisions, whether the picture projected by a laser television's optical engine fits the laser screen has a major impact on the user experience. Currently, the projection picture is adjusted in two modes: automatic geometric correction and manual geometric correction. During correction, data is exchanged between the optical engine and the laser screen by a Universal Serial Bus (Universal Serial Bus, USB) value-transmission mode, which offers high transmission speed and large data volume, and therefore efficient data interaction with the optical engine.
However, in correction schemes implemented on the basis of USB value transmission, the data formats produced by the two adjustment modes differ, so manual geometric correction cannot reuse the data produced by automatic geometric correction. The user can only adjust the original projected picture in the manual geometric correction mode, which results in a poor operating experience and low adjustment efficiency.
Disclosure of Invention
Exemplary embodiments of the present application provide a display device and a projection screen correction method, which are used to solve the problems of low projection-screen adjustment efficiency and poor user operating experience.
The technical scheme provided by the embodiment of the application is as follows:
in a first aspect, an embodiment of the present application provides a display device, including:
a display configured to:
displaying a first image for correcting the projection screen;
a controller configured to:
performing first correction on an acquired second image to generate correction data, wherein the second image is obtained by shooting the first image by a target user;
converting the correction data into first coordinate data and controlling the display to display a projection picture in a first display area determined based on the first coordinate data;
and acquiring second coordinate data generated by performing second correction on the first display area, and controlling the display to display a projection picture in a second display area determined based on the second coordinate data.
In a second aspect, an embodiment of the present application provides a projection screen correction method, applied to a display device, where the method includes:
displaying a first image for correcting the projection screen;
performing first correction on an acquired second image to generate correction data, wherein the second image is obtained by shooting the first image by a target user;
converting the correction data into first coordinate data and controlling the display to display a projection picture in a first display area determined based on the first coordinate data;
and performing second correction on the first display area to generate second coordinate data, and controlling the display to display a projection picture in the second display area determined based on the second coordinate data.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor, the memory for storing a computer program; the processor is configured to cause the electronic device to implement the projection screen correction method according to the second aspect or any embodiment of the second aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium, where a computer program is stored, where the computer program, when executed by a computing device, causes the computing device to implement the method for correcting a projection screen according to the second aspect or any embodiment of the second aspect.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a computer causes the computer to implement the projection screen correction method according to the second aspect or any embodiment of the second aspect.
As can be seen from the above technical solutions, according to the display device and the projection screen correction method provided by the embodiments of the present application, the display device, in response to the start of automatic geometric correction, displays a first image for correcting the projection screen. It then obtains a second image, captured by the target user photographing the first image, and performs a first correction on the second image to generate correction data, where the first correction refers to automatic geometric correction. The correction data is converted into first coordinate data, a first display area is determined in the display based on the first coordinate data, and the automatically corrected projection screen is displayed in that area. The device then obtains second coordinate data generated when the target user performs a second correction, i.e., manual geometric correction, on the first display area, determines a second display area based on the second coordinate data, and displays the manually corrected projection screen in that area. Because the automatic correction result is converted into coordinate data that the manual correction can use, the user can fine-tune on top of the automatic result instead of starting from the original picture, which improves adjustment efficiency and the operating experience.
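The two-stage flow above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names (`first_correction`, `convert_to_coords`, `run_pipeline`), the dict-based data shapes, and the fixed placeholder offsets are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch of the two-stage (automatic then manual) correction
# pipeline described above. All names and data shapes are assumptions.

def first_correction(captured_image):
    """Automatic geometric correction: derive per-point offset data from the
    user's photo of the first image. Stubbed here with a fixed offset."""
    return {name: (5, -3) for name in captured_image["points"]}

def convert_to_coords(correction_data, initial_coords):
    """Convert correction offsets into coordinate data usable by the manual
    (second) correction stage."""
    return {name: (x + correction_data[name][0], y + correction_data[name][1])
            for name, (x, y) in initial_coords.items()}

def run_pipeline(captured_image, initial_coords, manual_adjust):
    correction_data = first_correction(captured_image)    # first correction
    first_coords = convert_to_coords(correction_data, initial_coords)
    # ... display the projection screen in the first display area ...
    second_coords = manual_adjust(first_coords)           # second correction
    # ... display the projection screen in the second display area ...
    return second_coords
```

With an identity `manual_adjust`, the output is simply the automatically corrected coordinates, illustrating that manual correction starts from the automatic result rather than from the original picture.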
Drawings
In order to more clearly illustrate the embodiments of the present application or the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 illustrates a scene architecture diagram of a projection screen correction method in some embodiments;
FIG. 2 illustrates a hardware configuration block diagram of a control device in some embodiments;
FIG. 3 illustrates a hardware configuration block diagram of a display device in some embodiments;
FIG. 4 illustrates a software configuration diagram in a display device in some embodiments;
FIG. 5 is a flow chart illustrating steps of a method of projection screen correction in some embodiments;
FIG. 6 illustrates a coordinate schematic for a second correction in some embodiments;
FIG. 7 illustrates a layout of a television end in some embodiments;
FIG. 8 is a flow chart illustrating steps of a method of projection screen correction in some embodiments;
FIG. 9 is a schematic information flow diagram of a projection screen correction method in other embodiments;
FIG. 10 is a flowchart showing steps of a projection screen correction method in other embodiments;
FIG. 11 is a schematic information flow diagram of a projection screen correction method in other embodiments;
FIG. 12 is a flowchart showing steps of a projection screen correction method in other embodiments;
fig. 13 shows a flowchart of the steps of a projection screen correction method in other embodiments.
Detailed Description
For purposes of clarity and implementation of the present application, the following gives a clear and complete description of exemplary implementations of the present application with reference to the accompanying drawings, in which exemplary implementations are illustrated. Apparently, the described implementations are only some, rather than all, of the examples of the present application.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
Fig. 1 is a schematic view of a scene architecture of a projection screen correction method according to an embodiment of the present application. As shown in fig. 1, a scenario architecture provided in an embodiment of the present application includes: a server 100 and a display device 200.
The display device 200 provided in the embodiments of the present application may take various forms: for example, a smart speaker, a television, a refrigerator, a washing machine, an air conditioner, a smart curtain, a router, a set-top box, a mobile phone, a personal computer (Personal Computer, PC), a smart television, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), a wearable device, an in-vehicle device, an electronic table, and the like.
In some embodiments, the display device 200 may perform data communication with the server 100 upon receiving a voice command from a user. The display device 200 may establish a communication connection with the server 100 through a local area network (LAN) or a wireless local area network (WLAN), and the voice command may be, for example, a command to turn projection screen correction on or off.
The server 100 may be a server providing various services, for example a server providing support for audio data collected by the display device 200. The server may analyze and otherwise process the received audio data and feed the processing result (e.g., endpoint information) back to the display device. The server 100 may be one server cluster or a plurality of server clusters, and may include one or more types of servers.
The display device 200 may be hardware or software. When the display device 200 is hardware, it may be any of various electronic devices having a sound collection function, including but not limited to a smart speaker, a smart phone, a television, a tablet computer, an e-book reader, a smart watch, a player, a computer, an AI device, a robot, a smart vehicle, and the like. When the display device 200 is software, it may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., for providing sound collection services) or as a single piece of software or software module. No particular limitation is imposed here.
It should be noted that, the projection screen correction method provided in the embodiment of the present application may be executed by the server 100, may be executed by the display device 200, or may be executed by both the server 100 and the display device 200, which is not limited in this application.
Fig. 2 shows a hardware configuration block diagram of a display device 200 in accordance with an exemplary embodiment. The display apparatus 200 shown in fig. 2 includes at least one of a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface 280. The controller includes a central processing unit, an audio processor, a RAM, a ROM, and first to nth interfaces for input/output.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a Wi-Fi module, a Bluetooth module, a wired Ethernet module, another network or near-field communication protocol chip, and an infrared receiver. The display device 200 may transmit and receive control signals and data signals with the server 100 through the communicator 220.
The user interface 280 may be used to receive external control signals.
The detector 230 is used to collect signals of the external environment or interaction with the outside. For example, detector 230 includes a light receiver, a sensor for capturing the intensity of ambient light; alternatively, the detector 230 includes an image collector such as a camera, which may be used to collect external environmental scenes, user attributes, or user interaction gestures, or alternatively, the detector 230 includes a sound collector such as a microphone, or the like, which is used to receive external sounds.
The sound collector may be a microphone, which receives a user's voice and converts the sound signal into an electrical signal. The display device 200 may be provided with at least one microphone. In other embodiments, the display device 200 may be provided with two microphones, which, in addition to collecting sound signals, can implement a noise-reduction function. In still other embodiments, the display device 200 may be provided with three, four, or more microphones to support sound collection, noise reduction, sound-source identification, directional recording, and the like.
Further, the microphone may be built into the display device 200, or connected to the display device 200 by wire or wirelessly; the position of the microphone on the display device 200 is not limited in the embodiments of the present application. Alternatively, the display device 200 may not include a microphone at all and may instead be coupled to an external microphone through an interface such as the USB interface 130. The external microphone may be secured to the display device 200 by an external fastener such as a camera mount with clips.
The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200.
In some embodiments, the controller includes at least one of a central processing unit (Central Processing Unit, CPU), a video processor, an audio processor, a RAM (Random Access Memory), a ROM (Read-Only Memory), first to n-th input/output interfaces, a communication bus (Bus), and the like.
In some examples, the operating system of the smart device is, for example, an Android system, and as shown in fig. 3, the display device 200 may be logically divided into an application layer (Applications) 21, a kernel layer 22 and a hardware layer 23.
Wherein, as shown in fig. 3, the hardware layers may include the controller 250, the communicator 220, the detector 230, etc. shown in fig. 2. The application layer 21 includes one or more applications. The application may be a system application or a third party application. For example, the application layer 21 includes a voice recognition application that can provide a voice interactive interface and services for connection of the display device 200 to the server 100.
The kernel layer 22 acts as software middleware between the hardware layer and the application layer 21 for managing and controlling hardware and software resources.
In some examples, the kernel layer 22 includes a detector driver for sending voice data collected by the detector 230 to a voice recognition application. Illustratively, the voice recognition application in the display device 200 is launched, and in the event that the display device 200 establishes a communication connection with the server 100, the detector driver is configured to send the voice data collected by the detector 230 and input by the user to the voice recognition application. The speech recognition application then sends the query information containing the speech data to the intent recognition module 202 in the server. The intention recognition module 202 is used to input voice data transmitted from the display device 200 to the intention recognition model.
In order to clearly illustrate the embodiments of the present application, a voice recognition network architecture provided in the embodiments of the present application is described below with reference to fig. 4.
For example, referring to fig. 4, the display device includes a first correction module 401, a conversion module 402, and a second correction module 403. The first correction module 401 performs automatic geometric correction; the conversion module 402 converts the correction data output by the first correction module 401 into coordinate data usable by the second correction module 403; and the second correction module 403 performs projection based on that coordinate data, so that the user can fine-tune on the basis of the first correction. In one embodiment, the architecture shown in fig. 4 may include multiple entity service devices deployed with different services, and one or more entity service devices may also aggregate one or more functional services.
In some embodiments, the following describes how information input to the display device is processed based on the architecture shown in fig. 4, taking a voice command input by speech as an example:
[ Speech recognition ]
After receiving a voice command input by speech, the display device may perform noise-reduction processing and feature extraction on the audio of the command, where the noise-reduction processing may include steps such as removing echo and environmental noise.
[ Semantic understanding ]
Natural language understanding is performed on the recognized candidate text and associated context information using acoustic and language models, and the text is parsed into structured, machine-readable information such as business field, intent, and word slots, so as to express its semantics. An actionable intent and an intent confidence score are derived, and the semantic understanding module selects one or more candidate actionable intents based on the determined intent confidence scores.
[ Business management ]
According to the semantic analysis result of the voice command's text, the semantic understanding module issues an execution instruction to the corresponding business management module to execute the operation requested by the user, and the execution result of the operation corresponding to the voice command is fed back.
In some embodiments, when the display apparatus 200 displays a first image for correcting a projection screen through the display 260, the display apparatus 200 generates correction data by performing a first correction on an acquired second image, which is obtained by photographing the first image by a target user, through the controller 250; converting the correction data into first coordinate data and controlling the display to display a projection picture in a first display area determined based on the first coordinate data; and acquiring second coordinate data generated by performing second correction on the first display area, and controlling the display to display a projection picture in a second display area determined based on the second coordinate data.
In some embodiments, the manner in which the controller 250 converts the correction data into the first coordinate data may be: acquiring initial coordinate data for the second correction, wherein the initial coordinate data includes first data of a plurality of coordinate points; determining, in the correction data, offset data corresponding to each coordinate point; and calculating, for each coordinate point, second data according to the first data and the offset data, wherein the first coordinate data includes the second data.
In some embodiments, the correction data includes a plurality of correction points, the plurality of coordinate points including a first type of coordinate point and a second type of coordinate point, the first type of coordinate point being movable along a first preset axis and a second preset axis, the second type of coordinate point being movable along the first preset axis or the second preset axis.
In some embodiments, the manner of determining the offset data corresponding to each coordinate point in the correction data by the controller 250 may be: determining the two correction points as offset data corresponding to the first type coordinate points; determining one correction point as offset data corresponding to the second class coordinate point; wherein, each correction point has a unique corresponding coordinate point.
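The mapping just described — two correction points feeding each first-type (corner) coordinate point and one feeding each second-type (edge-midpoint) coordinate point — can be sketched as follows. The function name, the `(name, kind)` representation, and the in-order consumption convention are illustrative assumptions, not from the patent.

```python
# Sketch: consume correction values in order, two per first-type coordinate
# point (movable on both axes) and one per second-type point (one axis).

def assign_offsets(correction_points, coordinate_points):
    """coordinate_points: list of (name, kind), kind being 'first' or 'second'.
    Returns {name: offsets}, where first-type points get (dx, dy) and
    second-type points get a single-axis offset (d,)."""
    it = iter(correction_points)
    offsets = {}
    for name, kind in coordinate_points:
        offsets[name] = (next(it), next(it)) if kind == "first" else (next(it),)
    return offsets
```

Each correction value is consumed exactly once, mirroring the statement that every correction point has a unique corresponding coordinate point.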
In some embodiments, the offset data corresponding to the first type of coordinate point includes a first offset and a second offset, the first type of coordinate point includes a first coordinate value on the first preset axis and a second coordinate value on a second preset axis, and the second type of coordinate point includes a third coordinate value on the first preset axis or the second preset axis.
In some embodiments, the manner in which the controller 250 calculates the second data according to the first data and the offset data may be: for each first class coordinate point, calculating to obtain a fourth coordinate value based on the first offset and the first coordinate value, and calculating to obtain a fifth coordinate value based on the second offset and the second coordinate value; for each second type coordinate point, calculating a sixth coordinate value based on offset data corresponding to the second type coordinate point and the third coordinate value; wherein the second data includes the fourth coordinate value and the fifth coordinate value and/or the sixth coordinate value.
In some embodiments, the implementation manner of calculating the fourth coordinate value by the controller 250 based on the first offset and the first coordinate value, and calculating the fifth coordinate value based on the second offset and the second coordinate value may be: if the first coordinate value is the boundary maximum value on the first preset axis, calculating the difference value between the first coordinate value and the first offset to obtain a fourth coordinate value; or if the first coordinate value is not the boundary maximum value, calculating the sum of the first coordinate value and the first offset to obtain the fourth coordinate value.
In some embodiments, the manner in which the controller 250 calculates the sixth coordinate value based on the offset data corresponding to the second class coordinate point and the third coordinate value may be: if the second coordinate value is the boundary maximum value on the second preset axis, calculating the difference value between the second coordinate value and the second offset to obtain a fifth coordinate value; or if the second coordinate value is not the boundary maximum value, calculating the sum of the second coordinate value and the second offset to obtain the fifth coordinate value.
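The branching rule in the paragraphs above — subtract the offset when a coordinate value sits at the boundary maximum of its axis, otherwise add it — can be captured in a few lines. The function name is an assumption for illustration.

```python
def apply_offset(coord_value, offset, boundary_max):
    """Apply an offset per the rule described above: at the boundary maximum
    the offset is subtracted (the point can only move inward); elsewhere the
    offset is added."""
    if coord_value == boundary_max:
        return coord_value - offset
    return coord_value + offset
```

For example, on a 3840-wide grid, a point at the right edge (x = 3839) with offset 20 moves inward to 3819, while a point at x = 0 with the same offset moves to 20.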
In some embodiments, controller 250 may also turn off the eye protection function in response to an initiation request for the first correction; and responding to the first corrected closing request, and starting the human eye protection function.
In some embodiments, before the display 260 displays the first image for correcting the projection screen, the controller 250 may further control the display to display the verification code input interface in response to an access request of the target terminal; acquiring input information submitted by the target terminal based on the verification code input interface, and verifying the input information; if the verification is successful, judging whether the access request is the first request received by the controller; if yes, the display is controlled to display a first image for correcting the projection picture.
In some embodiments, the controller 250 may further obtain a second image and parse the correction points on the second image to generate parsed data; correcting the projection picture based on the analysis data to generate a correction effect; and under the condition that the correction effect meets the preset effect, storing the analysis data as correction data into a preset storage space.
In some embodiments, the manner in which the controller 250 obtains the correction data may be: judging whether the preset storage space successfully stores the correction data or not, if so, acquiring the correction data from the preset storage space, and converting the correction data into first coordinate data; if not, acquiring preset coordinate data, and taking the preset coordinate data as the first coordinate data.
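A minimal sketch of that retrieval-with-fallback logic, assuming a dict-like storage object, a caller-supplied conversion function, and the function name — all illustrative assumptions:

```python
def load_first_coordinate_data(storage, convert, preset_coordinate_data):
    """If correction data was successfully stored, convert it into the first
    coordinate data; otherwise fall back to the preset coordinate data."""
    correction_data = storage.get("correction_data")
    if correction_data is not None:
        return convert(correction_data)
    return preset_coordinate_data
```

The fallback ensures manual correction still has usable coordinates even when automatic correction was never run or its result was not saved.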
Fig. 5 is a schematic flowchart of a projection screen correction method according to an embodiment of the present application, applied to the display device 200 shown in fig. 2. As shown in fig. 5, the method may be implemented by the following steps:
s501, a first image for correcting a projection screen is displayed.
The display device may receive the user's voice command directly through an internally configured voice-acquisition module, through a voice control apparatus arranged outside the display device, or through a control device, a smart device, or the like; the voice command may be a command to start automatic geometric correction.
It will be appreciated that, in response to the operation starting automatic geometric correction, the display device displays a first image, which may be an image carrying a plurality of correction points.
S502, performing first correction on the acquired second image to generate correction data.
The second image is obtained by shooting the first image by a target user.
In some embodiments, a second image is acquired, the correction points in the second image are parsed to generate parsed data, and the parsed data is sent to the optical engine to adjust the projection picture and produce a correction effect. If the target user confirms the correction effect, the parsed data is taken as the correction data. Here the first correction refers to automatic geometric correction, a correction mode in which the optical engine or the display device automatically adjusts the projection picture based on the image captured by the user.
S503, converting the correction data into first coordinate data, and controlling the display to display a projection picture in a first display area determined based on the first coordinate data.
In some embodiments, the specific implementation procedure of converting the correction data obtained through the automatic geometric correction into the coordinate data adapted to the manual geometric correction in the above step S503 is as follows:
acquiring initial coordinate data for second correction, wherein the initial coordinate data comprises first data of a plurality of coordinate points; determining offset data corresponding to each coordinate point in the correction data; and calculating second data according to the first data and the offset data for each coordinate point, wherein the first coordinate data comprises the second data.
It can be understood that initial coordinate data for the second correction is obtained. The second correction refers to manual geometric correction, in which the user adjusts each coordinate point through the direction keys on a remote control device, thereby adjusting the projection picture. The initial coordinate data can be understood as the first data of a plurality of coordinate points that the user can adjust by himself; specifically, the first data may comprise 8 coordinate points, with pixel points and coordinate points in one-to-one correspondence.
For example, as shown in fig. 6, the 8 coordinate points are denoted A-H, and the user can manually adjust each of them within its moving range. The rectangular area formed by the 8 points also has a center point Z, whose coordinate is fixed; the 8 points move within their ranges around the center point Z. In the rectangular frame shown in fig. 6, the up-and-down moving range of the upper boundary is 180, that of the lower boundary is 70, and the left-and-right moving range is 220, where the rectangular frame is a target coordinate system constructed with coordinate point A as the origin, the upper boundary as the X-axis, and the left boundary as the Y-axis. Specifically, in the target coordinate system:
- upper left vertex A: first data (0, 0), moving range x: (0-220), y: (0-180);
- upper boundary midpoint E: first data (1919, 0), moves only up and down on the Y-axis, x: 1919, y: (0-180);
- upper right vertex B: first data (3839, 0), moving range x: (3619-3839), y: (0-180);
- left boundary midpoint G: first data (0, 1079), moving range x: (-220), y: 1079;
- center point Z: first data (1919, 1079), coordinate not corrected;
- right boundary midpoint H: first data (3839, 1079), moves only on the X-axis, x: (3619-4059), y: 1079;
- lower left vertex C: first data (0, 2159), moving range x: (0-220), y: (2089-2159);
- lower boundary midpoint F: first data (1919, 2159), moves only on the Y-axis, x: 1919, y: (2089-2229);
- lower right vertex D: first data (3839, 2159), moving range x: (3619-3839), y: (2089-2159).
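For illustration only, the example's per-point first data and moving ranges can be sketched as a Python lookup table. This is a hypothetical structure, not the patent's claimed implementation; in particular, point G's x-range is assumed symmetric with H's (center plus or minus 220), since the text gives only "-220":

```python
# Illustrative sketch of the example's 8 adjustable coordinate points.
# Each entry: initial (x, y) first data and the allowed range on each axis.
# Axes that a point cannot move along are expressed as a degenerate (v, v) range.
POINTS = {
    "A": {"init": (0, 0),       "x": (0, 220),       "y": (0, 180)},
    "E": {"init": (1919, 0),    "x": (1919, 1919),   "y": (0, 180)},
    "B": {"init": (3839, 0),    "x": (3619, 3839),   "y": (0, 180)},
    "G": {"init": (0, 1079),    "x": (-220, 220),    "y": (1079, 1079)},  # x-range assumed
    "H": {"init": (3839, 1079), "x": (3619, 4059),   "y": (1079, 1079)},
    "C": {"init": (0, 2159),    "x": (0, 220),       "y": (2089, 2159)},
    "F": {"init": (1919, 2159), "x": (1919, 1919),   "y": (2089, 2229)},
    "D": {"init": (3839, 2159), "x": (3619, 3839),   "y": (2089, 2159)},
}

def in_range(point: str, x: int, y: int) -> bool:
    """Check whether a manually adjusted position stays within the point's moving range."""
    p = POINTS[point]
    return p["x"][0] <= x <= p["x"][1] and p["y"][0] <= y <= p["y"][1]
```

Such a table makes the per-point constraints of the manual correction UI easy to validate before sending coordinates to the optical machine.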
As shown in fig. 7, which illustrates a layout of the coordinate point positions at the television end, manual geometric correction is performed by adjusting each pixel point/coordinate point with the direction keys of the remote controller. The four vertices (A, B, C, D) can be adjusted in four directions, that is, the projection picture can be enlarged and reduced based on the four vertices; the two midpoints on the upper and lower boundaries (E, F) can be adjusted only in the positive and negative vertical directions, that is, moved up and down; and the two midpoints on the left and right boundaries (G, H) can be adjusted only in the positive and negative horizontal directions, that is, moved left and right.
The correction data comprises a plurality of correction points, the plurality of coordinate points comprise a first type of coordinate points and a second type of coordinate points, the first type of coordinate points can move along a first preset axis and a second preset axis, and the second type of coordinate points can move along the first preset axis or the second preset axis.
In some embodiments, the determining the offset data corresponding to each coordinate point in the correction data may be specifically implemented by the following steps:
determining the two correction points as offset data corresponding to the first type coordinate points; determining one correction point as offset data corresponding to the second class coordinate point; wherein, each correction point has a unique corresponding coordinate point.
It will be understood that after the first correction is completed, correction data is obtained, and the offset data corresponding to each coordinate point, that is, each of the 8 coordinate points A-H in the above example, is determined from the correction data. The four vertices (A, B, C, D) can move along both the X-axis and the Y-axis and are marked as first-class coordinate points, where the X-axis is the first preset axis and the Y-axis is the second preset axis; the boundary midpoints (E, F, G, H) can move along the X-axis or the Y-axis and are marked as second-class coordinate points. In a possible embodiment, the correction data includes 12 correction points, each with a correction value, ordered sequentially in the correction data with serial numbers, for example 0-11. Two adjacent correction points are determined as the offset data corresponding to one first-class coordinate point: for example, the values of the correction points with serial numbers 0 and 1 (data[0] and data[1], where data is the correction data and 0 and 1 are serial numbers) are determined as the offset data corresponding to the first-class coordinate point A; similarly, the values of the correction points with serial numbers 2 and 3 correspond to the first-class coordinate point B, serial numbers 4 and 5 to C, and serial numbers 6 and 7 to D. One correction point is determined as the offset data corresponding to one second-class coordinate point: the value of the correction point with serial number 8 corresponds to the second-class coordinate point E, serial number 9 to F, serial number 10 to G, and serial number 11 to H. Each correction point thus has a unique corresponding coordinate point.
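For illustration only, the serial-number-to-coordinate-point mapping described above can be sketched as follows. This is a hypothetical helper, not the patent's implementation; the assignment of serial numbers 10 and 11 to points G and H follows the pattern of the example:

```python
# Sketch: map 12 correction values (serial numbers 0-11) to the 8 coordinate
# points. Vertices (first-class points) consume two adjacent values each;
# boundary midpoints (second-class points) consume one value each.
VERTEX_ORDER = ["A", "B", "C", "D"]      # first-class points: 2 values each
MIDPOINT_ORDER = ["E", "F", "G", "H"]    # second-class points: 1 value each

def offsets_from_correction(data: list) -> dict:
    """Return {point name: offset data} for one set of 12 correction values."""
    if len(data) != 12:
        raise ValueError("expected 12 correction values")
    offsets = {}
    for i, name in enumerate(VERTEX_ORDER):
        # e.g. A <- (data[0], data[1]): (horizontal offset, vertical offset)
        offsets[name] = (data[2 * i], data[2 * i + 1])
    for i, name in enumerate(MIDPOINT_ORDER):
        # e.g. E <- data[8]: single-axis offset
        offsets[name] = data[8 + i]
    return offsets
```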
The offset data corresponding to the first type of coordinate points comprise a first offset and a second offset, the first type of coordinate points comprise a first coordinate value on a first preset axis and a second coordinate value on a second preset axis, and the second type of coordinate points comprise a third coordinate value on the first preset axis or the second preset axis.
In some embodiments, the calculating the second data according to the first data and the offset data may be specifically implemented by the following steps:
for each first class coordinate point, calculating to obtain a fourth coordinate value based on the first offset and the first coordinate value, and calculating to obtain a fifth coordinate value based on the second offset and the second coordinate value; for each second type coordinate point, calculating a sixth coordinate value based on offset data corresponding to the second type coordinate point and the third coordinate value; wherein the second data includes the fourth coordinate value and the fifth coordinate value and/or the sixth coordinate value.
In some embodiments, the calculating the fourth coordinate value based on the first offset and the first coordinate value may be specifically implemented by the following steps:
If the first coordinate value is the boundary maximum value on the first preset axis, calculating the difference value between the first coordinate value and the first offset to obtain a fourth coordinate value; or if the first coordinate value is not the boundary maximum value, calculating the sum of the first coordinate value and the first offset to obtain the fourth coordinate value; if the second coordinate value is the boundary maximum value on the second preset axis, calculating the difference value between the second coordinate value and the second offset to obtain a fifth coordinate value; or if the second coordinate value is not the boundary maximum value, calculating the sum of the second coordinate value and the second offset to obtain the fifth coordinate value.
It is understood that the offset data corresponding to a first-class coordinate point includes a first offset and a second offset. For example, the first offset corresponding to the first-class coordinate point A is denoted offsetAh, which can be understood as the distance the point moves on the X-axis, and the second offset is denoted offsetAv, the distance the point moves on the Y-axis. A first-class coordinate point comprises a first coordinate value and a second coordinate value; for such a point, the fourth coordinate value is calculated from the first offset and the first coordinate value, and the fifth coordinate value from the second offset and the second coordinate value. If neither the first nor the second coordinate value is a boundary maximum value, where the boundary maximum value refers to the maximum value on the X-axis or Y-axis, the sum of offset and coordinate value is taken. For example, the first data of the first-class coordinate point A is (0, 0), which contains no boundary maximum, so the fourth coordinate value is Data[0]' = Data[0] + offsetAh, where Data[0]' is the fourth coordinate value and Data[0] is the first coordinate value, and the fifth coordinate value is Data[1]' = Data[1] + offsetAv, where Data[1]' is the fifth coordinate value and Data[1] is the second coordinate value. Data[0]' and Data[1]' constitute the second data of the first-class coordinate point A.
If at least one boundary maximum value exists among the first and second coordinate values, the difference is taken on that axis. For example, the first data of the first-class coordinate point B is (3839, 0), with the maximum value on the X-axis, so the fourth coordinate value is obtained as the difference Data[4]' = Data[4] - offsetBh, where Data[4]' is the fourth coordinate value on the X-axis, Data[4] is the first coordinate value, and offsetBh is the first offset; the fifth coordinate value is obtained as the sum Data[5]' = Data[5] + offsetBv, where Data[5]' is the fifth coordinate value on the Y-axis, Data[5] is the second coordinate value, and offsetBv is the second offset. Data[4]' and Data[5]' form the second data of the first-class coordinate point B. The second data of the first-class coordinate points C and D can be calculated based on the same rule, which is not repeated here. The second data of the 8 coordinate points constitute the first coordinate data.
It is understood that, since a second-class coordinate point moves only on the X-axis or the Y-axis, its sixth coordinate value is calculated from the offset data corresponding to that point and the third coordinate value. For example, the second-class coordinate point E moves only on the Y-axis and its X-axis value is fixed, so Data[3]' = Data[3] + middleUp, where Data[3] is the third coordinate value, that is, the specific value on the Y-axis, Data[3]' is the sixth coordinate value, and middleUp is the offset data of the second-class coordinate point E. The second data of the other second-class coordinate points are calculated in the same manner as for E, which is not repeated here.
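For illustration only, the sign rule above (subtract the offset when the coordinate sits at a boundary maximum, add it otherwise) can be sketched as follows. These are hypothetical helpers; the maxima 3839 and 2159 are taken from the example resolution:

```python
X_MAX, Y_MAX = 3839, 2159  # boundary maxima from the example (3840x2160 grid)

def apply_offset(coord: int, offset: int, axis_max: int) -> int:
    """Subtract the offset at the boundary maximum, add it otherwise."""
    return coord - offset if coord == axis_max else coord + offset

def second_data_for_vertex(first: tuple, offsets: tuple) -> tuple:
    """Compute the (fourth, fifth) coordinate values for a first-class point.

    first:   (first coordinate value, second coordinate value)
    offsets: (first offset, second offset), e.g. (offsetBh, offsetBv)
    """
    x, y = first
    oh, ov = offsets
    return (apply_offset(x, oh, X_MAX), apply_offset(y, ov, Y_MAX))
```

For vertex B at (3839, 0), the x-offset is subtracted (boundary maximum) while the y-offset is added, matching the Data[4]'/Data[5]' example in the text.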
S504, performing second correction on the first display area to generate second coordinate data, and controlling the display to display a projection picture in a second display area determined based on the second coordinate data.
It can be understood that the first display area is the area formed by the 8 coordinate points and the center point in the first coordinate data. After the correction data is obtained, or the first coordinate data is obtained by conversion, the manual geometric correction (the second correction) is started. Before the rendering of the manual geometric correction is completed, the first coordinate data, or the first display area determined based on it, is sent to the optical machine, which displays the projection picture in the first display area, as may be shown in fig. 7. The user then adjusts the 8 coordinate points of the first display area to generate second coordinate data, and the second coordinate data, or the second display area determined based on it, is sent to the optical machine, which projects the picture in the second display area. That is, after automatic geometric correction is completed, the user can perform manual geometric correction on the basis of the correction data obtained by the automatic geometric correction; in other words, manual geometric correction can reuse the automatic correction data to improve correction efficiency.
According to the projection picture correction method described above, when the display device receives a voice command for automatic geometric correction input by the target user, the first image is displayed; the target user shoots the first image with the target terminal based on its position relative to the display, and submits the resulting second image to the controller. The controller obtains the initial coordinate data for manual geometric correction, determines the offset data corresponding to each coordinate point based on the correction values of the correction points in the correction data, and then calculates the second data for each coordinate point from the offset data and the first data, thereby converting the correction data into first coordinate data usable for manual geometric correction. Before the manual geometric correction page is rendered, the first coordinate data is sent to the optical machine, which projects the picture onto the screen based on it. In this way, manual geometric correction can start from the result of the automatic geometric correction: the optical machine is quickly adjusted so that the projection falls within the screen, and the user only needs to fine-tune on that basis to obtain the expected correction effect, improving both correction efficiency and the user experience.
On the basis of the above embodiment, as shown in fig. 8, before the first correction is started, the display device interacts with the eye protection function, and the specific interaction flow may be implemented by the following steps:
s801, responding to the starting request of the first correction, and closing the human eye protection function.
S802, responding to the first corrected closing request, and starting the human eye protection function.
At present, during automatic geometric correction, the eye protection function of the display device is enabled. The mechanism of the eye protection function is that when the distance between the user and the optical machine is smaller than a first distance, eye protection is triggered and the screen goes black; in that case the screen cannot display the first image, the user cannot shoot it, and the automatic geometric correction process cannot proceed.
It can be appreciated that, in response to a start request of the first correction, a first system attribute is issued to a first service implementing the eye protection function, where the first system attribute instructs the first service to disable the eye protection function. The first correction process is then performed while the eye protection function is disabled. After the first correction is finished, a second system attribute is issued to the first service in response to a closing request of the first correction, where the second system attribute instructs the first service to re-enable the eye protection function; specifically, the eye protection function can respond according to the state of the eye protection switch.
For example, referring to fig. 9, after the television end starts automatic geometric correction and is about to display the image in the foreground, a first system attribute, for example show=1, is sent to the eye protection function service to disable it; when automatic geometric correction exits, a second system attribute, for example show=0, is sent to the eye protection function service, and the eye protection function is restored.
According to the above technical solution, in the projection picture correction method provided by the embodiments of the present application, the eye protection function is disabled while automatic geometric correction is running. This avoids the situation where the user, being close to the optical machine while shooting the first image, triggers the eye protection function, blacking the screen and causing the automatic geometric correction to be suspended or to fail.
On the basis of the above embodiment, as shown in fig. 10, the display device may further perform security verification before displaying the first image, which may be specifically implemented by the following steps:
s1001, responding to an access request of a target terminal, and controlling the display to display a verification code input interface.
S1002, acquiring input information submitted by the target terminal based on the verification code input interface, and verifying the input information.
S1003, if verification is successful, judging whether the access request is the first request received by the controller.
And S1004, if yes, controlling the display to display a first image for correcting the projection picture.
When the target user accesses the automatic geometric correction service in the display device through a browser on the target terminal, the display device responds to the access request sent by the target terminal and controls the display to show the verification code input interface. The target user enters the television-end verification code on the target terminal based on the displayed interface and submits the input information to the display device, which verifies it; the specific verification method is not limited. If the verification succeeds, an image uploading interface is displayed, showing the first image for correcting the projection picture; the target user shoots the displayed first image to obtain the second image and uploads it to the controller. In addition, before displaying the image uploading interface and after confirming that the input information is correct, it is also determined whether the target terminal is the first terminal to send the access request, that is, whether the access request is the first request. In practice, multiple terminals may access the service and pass verification, so it is necessary to determine whether the target terminal is the first terminal to send the access request, or the first terminal to verify successfully; if so, the first image is displayed to start automatic geometric correction. By setting this security correction mechanism, it is ensured that only one target terminal succeeds in code-scanning correction during the automatic correction process: even if other terminals also verify successfully, because the target terminal verified successfully first, the other terminals cannot correct the projection picture, which guarantees the correction security of the target terminal.
For example, referring to fig. 11, a timing chart of the security check performed by the mobile phone end and the television end proceeds as follows: the mobile phone end and the television end connect to the same communication network (a WiFi network); the mobile phone end scans the two-dimensional code of the television end to access the television-end service; the television end displays the verification code input interface; the user enters the verification code displayed by the television end (which may be a four-digit verification code) on the mobile phone end and submits it; and the television end verifies the code. Then, if the verification succeeds, it is judged whether the target terminal is the first to verify successfully. If so, the first image is displayed; if not, the first image is not displayed, and the target terminal returns to the verification code submission interface.
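For illustration only, the "first terminal to verify successfully wins" gate can be sketched as a one-shot flag. This is a hypothetical sketch; the class and method names are not from the patent, and a real implementation would live in the television-end service:

```python
import threading

class CorrectionGate:
    """Admit only the first terminal that verifies successfully."""

    def __init__(self, expected_code: str):
        self._expected = expected_code
        self._winner = None              # terminal id of the first successful verifier
        self._lock = threading.Lock()    # guard against concurrent access attempts

    def try_admit(self, terminal_id: str, code: str) -> bool:
        """Return True only if the code matches and this terminal won (or already holds) the gate."""
        if code != self._expected:
            return False                 # verification failed
        with self._lock:
            if self._winner is None:
                self._winner = terminal_id   # first successful terminal wins
            return self._winner == terminal_id
```

A later terminal that submits the correct code is still rejected, matching the behavior where other terminals cannot correct the projection picture once the target terminal has verified first.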
According to the above technical solution, the projection picture correction method provided by the embodiments of the present application adds a security verification mechanism when automatic correction is started and, at the same time, judges whether the terminal that verified successfully is the first to do so, ensuring that only one terminal can perform code-scanning correction and further guaranteeing correction security.
On the basis of the above embodiments, the present embodiment provides another method for correcting a projection screen, and referring to fig. 12, a flow of automatic geometric correction performed by a display device is specifically as follows:
S1201, acquiring a second image, analyzing correction points on the second image, and generating analysis data.
S1202, correcting the projection picture based on the analysis data to generate a correction effect.
S1203, storing the analysis data as correction data in a preset storage space when the correction effect meets a preset effect.
It can be understood that the terminal shoots the first image projected onto the screen by the optical machine to obtain the second image. The second image is acquired and uploaded to the television algorithm library for analysis, generating analysis data comprising 12 correction points. The analysis data is sent to the optical machine to adjust the projection picture, and the adjusted projection picture is displayed. If the user confirms the correction effect, or the correction effect meets a preset effect, the analysis data comprising the 12 correction points is stored as correction data in a preset storage space, completing the automatic geometric correction; the preset storage space may be a local storage space.
In some embodiments, referring to fig. 13, after the display device completes the automatic geometric correction, the specific implementation steps for determining the first coordinate data are as follows:
judging whether the preset storage space successfully stores the correction data or not, if so, acquiring the correction data from the preset storage space, and converting the correction data into first coordinate data; if not, acquiring preset coordinate data, and taking the preset coordinate data as the first coordinate data.
It can be understood that it is judged whether correction data successfully stored by the automatic geometric correction exists; if so, the correction data is converted into the first coordinate data required by the second correction, and if not, the preset coordinate data is used as the first coordinate data.
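For illustration only, this fallback branch can be sketched as follows (hypothetical function; the names are illustrative and not from the patent):

```python
def first_coordinate_data(stored_correction, preset, convert):
    """Determine the first coordinate data after automatic geometric correction.

    stored_correction: correction data read from the preset storage space, or
                       None if no correction data was successfully stored
    preset:            preset coordinate data used as the fallback
    convert:           callable converting correction data to first coordinate data
    """
    if stored_correction is not None:
        return convert(stored_correction)   # reuse the auto-correction result
    return preset                           # fall back to the preset coordinates
```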
In some embodiments, embodiments of the present application further provide an electronic device, including: a memory and a processor, the memory for storing a computer program; the processor is configured to cause the electronic device to implement the projection screen correction method according to any one of the above embodiments when executing the computer program.
In some embodiments, embodiments of the present application provide a computer readable storage medium having a computer program stored thereon, which when executed by a computing device, causes the computing device to implement the projection screen correction method according to any of the above embodiments.
In some embodiments, embodiments of the present application provide a computer program product, which, when run on a computer, causes the computer to implement the projection screen correction method according to any one of the above embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.
Claims (10)
1. A display device, characterized by comprising:
a display configured to:
Displaying a first image for correcting the projection screen;
a controller configured to:
performing first correction on an acquired second image to generate correction data, wherein the second image is obtained by shooting the first image by a target user;
converting the correction data into first coordinate data and controlling the display to display a projection picture in a first display area determined based on the first coordinate data;
and acquiring second coordinate data generated by performing second correction on the first display area, and controlling the display to display a projection picture in a second display area determined based on the second coordinate data.
2. The display device of claim 1, wherein the controller is further configured to:
acquiring initial coordinate data for second correction, wherein the initial coordinate data comprises first data of a plurality of coordinate points;
determining offset data corresponding to each coordinate point in the correction data;
and calculating second data according to the first data and the offset data for each coordinate point, wherein the first coordinate data comprises the second data.
3. The display device according to claim 2, wherein the correction data includes a plurality of correction points, the plurality of coordinate points including a first type of coordinate point and a second type of coordinate point, the first type of coordinate point being movable along a first preset axis and a second preset axis, the second type of coordinate point being movable along the first preset axis or the second preset axis;
The controller is further configured to:
determining the two correction points as offset data corresponding to the first type coordinate points;
determining one correction point as offset data corresponding to the second class coordinate point;
wherein, each correction point has a unique corresponding coordinate point.
4. The display device according to claim 3, wherein the offset data corresponding to the first type of coordinate point includes a first offset amount and a second offset amount, the first type of coordinate point includes a first coordinate value on the first preset axis and a second coordinate value on the second preset axis, and the second type of coordinate point includes a third coordinate value on the first preset axis or the second preset axis;
the controller is further configured to:
for each first class coordinate point, calculating to obtain a fourth coordinate value based on the first offset and the first coordinate value, and calculating to obtain a fifth coordinate value based on the second offset and the second coordinate value;
for each second type coordinate point, calculating a sixth coordinate value based on offset data corresponding to the second type coordinate point and the third coordinate value;
wherein the second data includes the fourth coordinate value and the fifth coordinate value and/or the sixth coordinate value.
5. The display device of claim 4, wherein the controller is further configured to:
if the first coordinate value is the boundary maximum value on the first preset axis, calculating the difference value between the first coordinate value and the first offset to obtain a fourth coordinate value; or if the first coordinate value is not the boundary maximum value, calculating the sum of the first coordinate value and the first offset to obtain the fourth coordinate value;
if the second coordinate value is the boundary maximum value on the second preset axis, calculating the difference value between the second coordinate value and the second offset to obtain a fifth coordinate value; or if the second coordinate value is not the boundary maximum value, calculating the sum of the second coordinate value and the second offset to obtain the fifth coordinate value.
6. The display device of claim 1, wherein the controller is further configured to:
closing the eye protection function in response to the first corrected start request;
and responding to the first corrected closing request, and starting the human eye protection function.
7. The display device of claim 1, wherein the controller is further configured to:
in response to an access request from a target terminal, controlling the display to display a verification code input interface;
acquiring input information submitted by the target terminal via the verification code input interface, and verifying the input information;
if the verification succeeds, determining whether the access request is the first request received by the controller;
if so, controlling the display to display a first image for correcting the projection picture.
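The access flow of claim 7 can be sketched as a small state machine. The class name, the string return values, and the single expected code are all illustrative assumptions, not details from the patent.

```python
class AccessController:
    """Illustrative sketch of the claim-7 flow: verify a submitted code,
    and show the first correction image only for the controller's
    first received access request."""

    def __init__(self, expected_code):
        self.expected_code = expected_code
        self.request_count = 0  # counts every access request received

    def handle_access_request(self, submitted_code):
        self.request_count += 1
        if submitted_code != self.expected_code:
            return "verification_failed"
        if self.request_count == 1:
            # first request received by the controller:
            # display the first image for correcting the projection picture
            return "display_first_image"
        return "access_granted"
```

On this reading, the correction image is tied to the first request ever received, so a later successful verification simply grants access without re-triggering correction.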
8. The display device of claim 1, wherein the controller is further configured to:
acquiring a second image, analyzing the correction points on the second image, and generating analysis data;
correcting the projection picture based on the analysis data to produce a correction effect;
and, if the correction effect meets a preset effect, storing the analysis data as correction data in a preset storage space.
9. The display device of claim 8, wherein the controller is further configured to:
determining whether the correction data has been successfully stored in the preset storage space; if so, acquiring the correction data from the preset storage space and converting it into first coordinate data; if not, acquiring preset coordinate data and using the preset coordinate data as the first coordinate data.
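The storage fallback of claim 9 amounts to "use saved correction data if the save succeeded, otherwise fall back to a preset". The sketch below assumes a dict-like store and a flat-list data layout; the default corner coordinates and helper names are hypothetical.

```python
# Hypothetical preset coordinate data: four display corners
DEFAULT_COORDS = [(0, 0), (1920, 0), (0, 1080), (1920, 1080)]

def convert_to_coordinates(correction_data):
    """Illustrative conversion: flat list [x0, y0, x1, y1, ...] -> point pairs."""
    return list(zip(correction_data[::2], correction_data[1::2]))

def load_first_coordinate_data(store, key="correction_data"):
    """Sketch of claim 9: prefer stored correction data, converted to
    coordinates; fall back to the preset when nothing was stored."""
    data = store.get(key)  # None if the store never saved correction data
    if data is not None:
        return convert_to_coordinates(data)
    return DEFAULT_COORDS
```

The fallback guarantees the controller always has usable first coordinate data, even when a prior correction was never saved.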
10. A projection screen correction method, characterized by being applied to a display device, the method comprising:
displaying a first image for correcting the projection screen;
performing a first correction on an acquired second image to generate correction data, wherein the second image is obtained by the target user photographing the first image;
converting the correction data into first coordinate data, and controlling the display to display the projection picture in a first display area determined based on the first coordinate data;
and performing a second correction on the first display area to generate second coordinate data, and controlling the display to display the projection picture in a second display area determined based on the second coordinate data.
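The method of claim 10 is a two-stage pipeline: a first correction on the photographed image, a conversion to coordinates, then a second correction refining the display area. The sketch below treats each stage as an injected callable; every parameter name is an illustrative placeholder, not the patented algorithm.

```python
def two_stage_correction(second_image, first_correct, to_coords, second_correct):
    """Sketch of the claim-10 pipeline (all stage functions are hypothetical):
    first correction -> correction data -> first coordinate data ->
    second correction -> second coordinate data."""
    correction_data = first_correct(second_image)  # first correction
    first_coords = to_coords(correction_data)      # defines first display area
    second_coords = second_correct(first_coords)   # defines second display area
    return first_coords, second_coords
```

Splitting the stages this way mirrors the claim's structure: the coarse camera-based correction and the fine area refinement stay independent, so either can be replaced without touching the other.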
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311451371.7A CN117812234A (en) | 2023-11-02 | 2023-11-02 | Display apparatus and projection screen correction method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311451371.7A CN117812234A (en) | 2023-11-02 | 2023-11-02 | Display apparatus and projection screen correction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117812234A true CN117812234A (en) | 2024-04-02 |
Family
ID=90425651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311451371.7A Pending CN117812234A (en) | 2023-11-02 | 2023-11-02 | Display apparatus and projection screen correction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117812234A (en) |
- 2023-11-02: CN application CN202311451371.7A filed; published as CN117812234A, status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108513070B (en) | Image processing method, mobile terminal and computer readable storage medium | |
WO2021051995A1 (en) | Photographing method and terminal | |
CN109660723B (en) | Panoramic shooting method and device | |
CN109040524B (en) | Artifact eliminating method and device, storage medium and terminal | |
CN111354434B (en) | Electronic device and method for providing information thereof | |
CN108763998B (en) | Bar code identification method and terminal equipment | |
US9491401B2 (en) | Video call method and electronic device supporting the method | |
EP4096218A1 (en) | Video recording method using plurality of cameras, and device therefor | |
CN111556242A (en) | Screen providing method and electronic device supporting the same | |
CN113076007A (en) | Display screen visual angle adjusting method and device and storage medium | |
EP3979620B1 (en) | Photographing method and terminal | |
CN114513689A (en) | Remote control method, electronic equipment and system | |
KR102653252B1 (en) | Electronic device for providing visualized artificial intelligence service based on information of external object and method for the same | |
US20240214659A1 (en) | Electronic device, photographing method, and photographing apparatus | |
CN111522524A (en) | Presentation control method and device based on conference robot, storage medium and terminal | |
CN109040427B (en) | Split screen processing method and device, storage medium and electronic equipment | |
CN108156386B (en) | Panoramic photographing method and mobile terminal | |
KR20220085834A (en) | Electronic devices and focusing methods | |
CN117812234A (en) | Display apparatus and projection screen correction method | |
US11877057B2 (en) | Electronic device and focusing method | |
US11838637B2 (en) | Video recording method and terminal | |
CN111147745B (en) | Shooting method, shooting device, electronic equipment and storage medium | |
KR20190101802A (en) | Electronic device and method for providing augmented reality object thereof | |
CN110012225B (en) | Image processing method and device and mobile terminal | |
WO2023284072A1 (en) | Method and apparatus for controlling projection device, and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||