CN109698944B - Projection area correction method, projection apparatus, and computer-readable storage medium - Google Patents


Info

Publication number
CN109698944B
CN109698944B (application CN201710990380.1A)
Authority
CN
China
Prior art keywords
projection
image
feature
vector
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710990380.1A
Other languages
Chinese (zh)
Other versions
CN109698944A (en)
Inventor
王丛华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL High-Tech Development Co Ltd
Original Assignee
Shenzhen TCL High-Tech Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL High-Tech Development Co Ltd filed Critical Shenzhen TCL High-Tech Development Co Ltd
Priority to CN201710990380.1A priority Critical patent/CN109698944B/en
Publication of CN109698944A publication Critical patent/CN109698944A/en
Application granted granted Critical
Publication of CN109698944B publication Critical patent/CN109698944B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179: Video signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides a projection area correction method, a projection device, and a computer-readable storage medium. The method comprises: establishing a local feature index database of projection scene images; when the projection device is turned on, acquiring a projection scene image of the projection device, the projection scene image being composed of a projection area image and a projection curtain image; calculating a first feature vector of the projection area image and a second feature vector of the projection curtain image; comparing the first and second feature vectors with the classes in the index database to identify first corner point group information of the projection area image and second corner point group information of the projection curtain image; establishing a mapping relationship between the projection area image and the projection curtain image according to the first and second corner point group information; and adjusting the projection area of the projection device according to the mapping relationship, so that the projection area of the projection device coincides with the projection curtain. The invention can automatically calibrate the projection area while the user is viewing, without manual adjustment by the user.

Description

Projection area correction method, projection apparatus, and computer-readable storage medium
Technical Field
The invention belongs to the technical field of projection, and particularly relates to a projection area correction method, projection equipment and a computer readable storage medium.
Background
With rising living standards, demand for large-screen televisions has grown, driving the development of projection devices. Projection devices have gradually entered everyday life: they can achieve a larger screen size than a liquid-crystal television and deliver a more immersive multimedia entertainment experience.
Because a projection device projects an image from a light source onto the curtain through a convex lens, the lens and the curtain must be properly positioned for the projected image to exactly fill the curtain area. In actual use, however, an improper positional relationship between the projection device and the curtain causes the projected light to spill outside the curtain, degrading the visual experience of users watching video.
To address the above problem, in the prior art the user manually adjusts the projection device so that its projection area lands on the desired position of the projection curtain. This operation is cumbersome, and the projection effect after manual adjustment is often unsatisfactory.
Disclosure of Invention
In view of the above, the present invention provides a projection area correction method, a projection device, and a computer-readable storage medium, to solve the prior-art problems that the projection area of a projection device must be manually adjusted onto the desired position of the projection curtain, that the operation is cumbersome, and that the projection effect after manual adjustment is often unsatisfactory.
A first aspect of the present invention provides a projection region correction method, including:
establishing a local feature index database of the projection scene image, wherein the index database comprises indexes of various corner points at various scales;
when the projection equipment is started, acquiring a projection scene image of the projection equipment, wherein the projection scene image is composed of a projection area image and a projection curtain image;
calculating a first characteristic vector of the projection area image and a second characteristic vector of the projection curtain image;
comparing the first feature vector and the second feature vector with the indexes of the various corner points at various scales in the index database, and identifying first corner point group information of the projection region image and second corner point group information of the projection curtain image;
establishing a mapping relation between the projection region image and the projection curtain image according to the first corner point group information and the second corner point group information;
and adjusting the projection area of the projection device according to the mapping relation, so that the projection area of the projection device coincides with the projection curtain.
A second aspect of the invention provides a projection device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the steps of the method according to the first aspect are performed when the computer program is executed by the processor.
A third aspect of the invention provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to the first aspect.
The invention has the beneficial effects that:
Because a local feature index database of projection scene images is pre-established, during use of the projection device the feature vectors of the projection area image and the projection curtain image in the projection scene image can be extracted and compared with the feature vector data in the index database to obtain the corner point information of the projection area and the projection curtain in the projection scene image. The projection area of the projection device is then adjusted with reference to the obtained corner point information, so that the projection area coincides exactly with the projection curtain. The projection device can therefore calibrate the projection area automatically during viewing, without manual adjustment by the user, while ensuring a clear projection effect after the projection area is corrected and improving the user's viewing experience.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a topological diagram of a projection device according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating an implementation of the method for correcting a projection area according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of projecting a scene image in a particular application scene;
fig. 4 is a schematic flowchart illustrating a specific implementation process of step S201 in the projection area correction method according to another embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating comparison between a detected point and neighboring points in the neighborhood of the detected point in a projection region correction method according to another embodiment of the present invention;
FIG. 6 is a schematic block diagram of a projection device provided by an embodiment of the present invention;
FIG. 7 is a schematic block diagram of an index database unit in a projection device provided by an embodiment of the present invention;
fig. 8 is a schematic block diagram of a projection apparatus provided by another embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Fig. 1 is a topological diagram of a projection apparatus provided in an embodiment of the present invention when in use. Only the portions related to the present embodiment are shown for convenience of explanation.
Referring to fig. 1, the topology includes a projection device and a curtain, and the projection device can project and display a video image played by itself on the curtain. The projection equipment is provided with a camera device, and a processor for adjusting a projection area according to a projection scene image shot by the camera device is arranged in the projection equipment. In the projection process of the projection equipment, the camera device shoots a projection scene image, the processor processes the projection scene image, and the projection area of the projection equipment is adjusted according to the processing result, so that the projection area of the projection equipment is completely overlapped with the projection curtain. In an embodiment of the present invention, the projection device includes, but is not limited to, a projection television.
Based on the topology shown in fig. 1, the projection area correction method provided by the embodiment of the present invention is described in detail below with reference to specific embodiments:
fig. 2 shows a flow of implementing the projection region correction method provided by the embodiment of the present invention, and in the embodiment shown in fig. 2, the main execution body of the flow is the projection apparatus in fig. 1. The implementation process of the method is detailed as follows:
step S201, establishing a local feature index database of the projection scene image, wherein indexes of various corner points of the index database are under various scales.
In the present embodiment, the projection scene image is composed of a projection area image and a projection screen image. Fig. 3 is a schematic diagram illustrating a projection scene image in a specific application scene.
In this embodiment, the local feature index database of the projection scene image is obtained by pre-training. The database contains local feature indexes of the projection scene image at different scales, and the local feature index at each scale includes 8 classes, namely the indexes of the four corner points A', B', C', D' of the projection region image and the four corner points A, B, C, D of the projection curtain image.
Step S202, when the projection equipment is started, obtaining a projection scene image of the projection equipment, wherein the projection scene image is composed of a projection area image and a projection curtain image.
In this embodiment, when the projection apparatus is turned on, an image capturing device disposed on the projection apparatus is triggered to capture the projection scene image.
Step S203, calculating a first feature vector of the projection area image and a second feature vector of the projection curtain image.
In this embodiment, since the projection scene image captured by the camera device is a colour image, the method further includes, before step S203:
converting the projection scene image into a grayscale image of the same resolution by the formula Gray = 0.299R + 0.587G + 0.114B.
In this embodiment, local feature extraction is performed on the grayscale image by using a SIFT feature algorithm, so as to obtain a first feature vector of the projection region image and a second feature vector of the projection curtain image in the projection scene image. The first feature vector comprises a direction vector of each feature point in the projection area image, and the second feature vector comprises a direction vector of each feature point in the projection curtain image.
Step S204, comparing the first feature vector and the second feature vector with the indexes of the various corner points at various scales in the index database, and identifying first corner point group information of the projection region image and second corner point group information of the projection curtain image.
In this embodiment, the index database contains local feature indexes of the projection scene image at different scales, and the local feature indexes at each scale include 8 classes, namely the indexes of the four corner points A', B', C', D' of the projection region image and the four corner points A, B, C, D of the projection curtain image. Therefore, after the first feature vector of the projection region image and the second feature vector of the projection curtain image are obtained, the corner points can be identified by comparing the two feature vectors with the indexes of the various corner points at various scales in the index database using a nearest-neighbour method. The first corner point group information comprises the four corner points A', B', C', D'; the second corner point group information comprises the four corner points A, B, C, D.
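The nearest-neighbour comparison described above can be sketched as follows (an illustrative NumPy implementation; the database layout — a dict mapping corner-class labels to arrays of prototype vectors — is an assumption, not the patent's data structure):

```python
import numpy as np

def classify_corners(query_vecs, index_db):
    """Assign each query feature vector to the corner-point class
    (e.g. "A", "B'", ...) whose indexed prototype vector is nearest
    in Euclidean distance -- a plain nearest-neighbour rule.

    index_db: dict mapping class label -> (k, d) array of prototypes.
    Returns a list with one label per query vector."""
    labels, protos = [], []
    for lab, vecs in index_db.items():
        for v in np.atleast_2d(vecs):
            labels.append(lab)
            protos.append(v)
    protos = np.asarray(protos, dtype=np.float64)
    out = []
    for q in np.atleast_2d(query_vecs):
        d = np.linalg.norm(protos - q, axis=1)
        out.append(labels[int(np.argmin(d))])
    return out
```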
It should be understood that the schematic diagram of the projection scene shown in fig. 3 is only a preferred implementation example illustrated in the present invention, and in other implementation examples, midpoints on four sides of the projection area and midpoints on four sides of the projection curtain may also be taken as the first corner group information of the projection area and the second corner group information of the projection curtain, respectively.
Step S205, establishing a mapping relationship between the projection region image and the projection curtain image according to the first corner group information and the second corner group information.
In this embodiment, after the corner point information of the projection area and the projection curtain is obtained, the mapping from the corner points A', B', C', D' of the projection area image to the four corner points A, B, C, D of the projection curtain image is calculated from their positions, yielding a conversion matrix that maps the projection area onto the projection curtain.
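One common way to realise such a four-corner mapping is a 3x3 perspective transform solved from the four correspondences. The sketch below is illustrative (the patent does not specify the transform model); it fixes H[2,2] = 1 and solves the resulting 8x8 linear system:

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve for the 3x3 transform H mapping the four projection-area
    corners src = [A', B', C', D'] onto the four curtain corners
    dst = [A, B, C, D] (each a sequence of (x, y) points)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Map a single (x, y) point through H."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```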
Step S206, adjusting the projection area of the projection device according to the mapping relation, so that the projection area of the projection device coincides with the projection curtain.
In this embodiment, after the mapping relationship between the projection area image and the projection curtain image is obtained, each pixel of the projection area image may be projected according to the mapping relationship, so that the projected image exactly fills the area on the curtain bounded by the four corner points A, B, C, D, thereby satisfying the requirement that the projection area coincide with the projection curtain.
As can be seen from the above, in the projection area correction method provided by this embodiment, the local feature index database of the projection scene image is pre-established. During subsequent use of the projection device, the feature vectors of the projection area image and the projection curtain image in the projection scene image are extracted and compared with the feature vector data in the index database to obtain the corner point information of the projection area and the projection curtain in the projection scene image. The projection area of the projection device is then adjusted with reference to this corner point information, so that it coincides exactly with the projection curtain. The projection device can thus calibrate the projection area automatically while the user is viewing, without manual adjustment; a clear projection effect after correction is ensured, and the user's viewing experience is improved.
Fig. 4 is a schematic diagram illustrating a specific implementation flow of step S201 in a projection region correction method according to another embodiment of the present invention. Referring to fig. 4, in the present embodiment, step S201 may include the following steps:
step S401, acquiring an original projection scene image of the projection device, wherein the original projection scene image is composed of an original projection area image and an original projection curtain image.
Step S402, performing gray-scale processing on the original projection area image and the original projection curtain image. The gray-scale processing is the same as that described before step S203 in the previous embodiment, and is not repeated here.
Step S403, respectively extracting a third feature vector of the original projection area image and a fourth feature vector of the original projection curtain image.
In this embodiment, step S403 specifically includes:
constructing a scale space;
detecting characteristic points of the original projection area image and the original projection curtain image in each scale space;
and assigning a 128-dimensional direction parameter to each feature point to form a 128-dimensional feature vector.
Wherein the constructing a scale space comprises:
utilizing Gaussian difference kernels of different scales to be convolved with the original projection scene image to generate Gaussian blurred images of different scales;
and (4) subtracting the Gaussian blurred images of adjacent scales to obtain a Gaussian residual image.
In this embodiment, images at different scales, also called octaves, are created from the original projection scene image. This is done for scale invariance, i.e. so that corresponding feature points exist at every scale. The first octave has the size of the original image, and each subsequent octave is the result of down-sampling the previous one to 1/4 of its area (half the length and half the width).
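The scale-space construction above can be sketched as follows (an illustrative NumPy implementation of Gaussian blurring at several scales followed by subtraction of adjacent blurred images; the sigma values are assumptions):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (rows, then columns)."""
    k = gaussian_kernel(sigma)
    img = img.astype(np.float64)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def dog_stack(img, sigmas=(1.0, 1.4, 2.0, 2.8)):
    """Blur the image at increasing scales and subtract adjacent
    blurred images to obtain the Gaussian residual (DoG) stack."""
    blurred = [gaussian_blur(img, s) for s in sigmas]
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
```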
Wherein the detecting the feature points of the original projection area image and the original projection curtain image in each scale space comprises:
respectively sampling the Gaussian residual images corresponding to the scale spaces, and comparing each sampling point with its 8 neighbours at the same scale and the 9 points in each of the two adjacent scales (18 points); if the sampling point is the maximum or minimum among these 26 points spanning its own layer and the layers above and below, it is regarded as a feature point of the Gaussian residual image at that scale.
In this embodiment, in order to find the feature points in the projection scene image corresponding to each scale space, each sampling point on the image is compared with all of its neighbouring points to check whether it is larger or smaller than its neighbours in both the image domain and the scale domain. As shown in fig. 5, the middle detection point is compared with its 8 neighbouring points at the same scale and the 9 x 2 points at the adjacent scales above and below, 26 points in total, to ensure that extrema are detected in both scale space and two-dimensional image space. A point is regarded as a feature point of the image at that scale if it is the maximum or minimum among these 26 neighbouring points in its own scale-space layer and the layers above and below.
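The 26-neighbour extremum test can be sketched as follows (illustrative; a brute-force scan for clarity, where a production implementation would vectorise it):

```python
import numpy as np

def find_extrema(dog):
    """Given a stack of >= 3 Gaussian residual (DoG) images (list of
    equal-size 2-D arrays), return (layer, row, col) triples where the
    centre value is strictly greater or strictly smaller than all 26
    neighbours: 8 in its own layer plus 9 in each adjacent layer."""
    stack = np.stack(dog)  # shape (L, H, W)
    L, H, W = stack.shape
    extrema = []
    for l in range(1, L - 1):
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                cube = stack[l - 1:l + 2, i - 1:i + 2, j - 1:j + 2]
                centre = stack[l, i, j]
                others = np.delete(cube.ravel(), 13)  # drop the centre itself
                if centre > others.max() or centre < others.min():
                    extrema.append((l, i, j))
    return extrema
```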
Wherein, the assigning a 128-dimensional direction parameter to each feature point and the forming of the 128-dimensional feature vector comprises:
calculating the direction vector of each feature point by using the gradient direction distribution of its neighbourhood pixels, so that the operator has rotation invariance;
each feature point is described by 16 sub-regions, forming 16 seed points; each seed point describes the direction of a vector using 8 directions, and the magnitude of the vector in each direction is described by the Euclidean distance between that direction and the main direction of the vector, forming a 128-dimensional feature vector. Preferably, in this embodiment, the 8 directions are 0, pi/4, pi/2, 3 pi/4, pi, 5 pi/4, 3 pi/2, and 7 pi/4, respectively.
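A simplified SIFT-style version of this descriptor construction is sketched below (illustrative only: it accumulates gradient magnitude into 8 orientation bins per cell, a common substitute for the patent's Euclidean-distance magnitude rule; the 16x16 patch size is an assumption):

```python
import numpy as np

def descriptor_128(patch):
    """Simplified 128-D descriptor for a 16x16 grayscale patch centred
    on a feature point: split the patch into a 4x4 grid of 4x4 cells
    (16 "seed points") and, in each cell, accumulate gradient magnitude
    into 8 orientation bins (0, pi/4, ..., 7*pi/4), giving 16 * 8 = 128
    values, L2-normalised at the end."""
    patch = patch.astype(np.float64)
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = np.floor(ang / (np.pi / 4)).astype(int) % 8
    desc = np.zeros(128)
    for ci in range(4):
        for cj in range(4):
            cell = (slice(ci * 4, ci * 4 + 4), slice(cj * 4, cj * 4 + 4))
            hist = np.bincount(bins[cell].ravel(),
                               weights=mag[cell].ravel(), minlength=8)
            k = (ci * 4 + cj) * 8
            desc[k:k + 8] = hist
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```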
Preferably, in this embodiment, before assigning a 128-dimensional direction parameter to each feature point and forming a 128-dimensional feature vector, the method further includes:
and removing low-contrast characteristic points and unstable edge response points contained in the characteristic points of the original projection area image and the original projection curtain image in each scale space.
In the embodiment, an approximate Harris Corner detector is adopted to remove low-contrast characteristic points and unstable edge response points contained in the original projection scene image, so that matching stability can be enhanced, and the noise resistance can be improved.
Step S404, performing dimensionality reduction on the third feature vector and the fourth feature vector by PCA (Principal Component Analysis), and removing the component corresponding to the smallest eigenvalue in the third feature vector and the fourth feature vector.
In this embodiment, since the feature matrix formed by the 128-dimensional feature vectors carries a large amount of information, PCA is used to speed up subsequent comparison: the data are decomposed onto an orthogonal basis ordered by eigenvalue, the component corresponding to the smallest eigenvalue is removed from the vector matrix, and the components corresponding to the larger eigenvalues are retained.
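This PCA step can be sketched as follows (illustrative; as the step describes, only the single smallest-eigenvalue component is dropped):

```python
import numpy as np

def pca_drop_smallest(X):
    """Project the row vectors of X (num_vectors x d) onto the
    orthogonal basis of principal components of their covariance,
    dropping the one component with the smallest eigenvalue.
    Returns an array of shape (num_vectors, d - 1)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    keep = vecs[:, 1:]                # drop the smallest-eigenvalue axis
    return Xc @ keep
```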
Step S405, mapping the n-dimensional features in the third feature vector and the fourth feature vector to m-dimensional features through SVD (singular value decomposition), respectively, to obtain the feature descriptor vectors of the original projection scene image, where n > m and n, m are positive integers. In this embodiment, the mapping from n-dimensional to m-dimensional features is mainly implemented by SVD orthogonal decomposition.
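The SVD mapping from n to m dimensions can be sketched as follows (illustrative; it keeps the m directions with the largest singular values, which minimises the reconstruction error):

```python
import numpy as np

def svd_reduce(X, m):
    """Map the n-dimensional row vectors of X to m dimensions (n > m)
    by projecting onto the top-m right singular vectors of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:m].T
```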
Step S406, repeatedly executing the above processes, and adding the feature descriptor vectors of all the acquired original projection scene images to the local feature index database of the projection scene images.
Step S407, dividing the feature descriptors in the projection scene image local feature index database into K clusters through a K-means algorithm, wherein K is a positive integer.
Preferably, in this embodiment, step S407 specifically includes:
randomly selecting k seed points from all the feature descriptor vectors, computing the distance from every other vector to each of the k seed points, and assigning each vector to the class of its nearest seed point;
moving the centre of each class to the centroid of the points assigned to it, and repeating the two steps above until the centroids no longer move; k clusters are thereby obtained.
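The k-means procedure above can be sketched as follows (illustrative NumPy implementation; the seeding scheme and iteration limit are assumptions):

```python
import numpy as np

def kmeans(vectors, k, iters=100, seed=0):
    """Plain k-means: pick k seed points at random from the data,
    assign every vector to its nearest centre, move each centre to
    the centroid of its group, and repeat until the centres stop
    moving. Returns (labels, centres)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(vectors, dtype=np.float64)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centres[j] for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres
```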
It should be noted that the implementation manners of other steps in this embodiment are the same as those in the previous embodiment, and therefore, the detailed description thereof is omitted here.
It can thus be seen that the projection area correction method provided by this embodiment also enables the projection device to calibrate the projection area automatically while the user is viewing, without manual adjustment by the user, ensures a clear projection effect after the projection area is corrected, and improves the user's viewing experience.
Fig. 6 is a schematic diagram of a projection apparatus provided in an embodiment of the present invention. For convenience of explanation, only the portions related to the present embodiment are shown.
Referring to fig. 6, the present embodiment provides a projection apparatus 6 including:
the index database establishing unit 61 is used for establishing a local feature index database of the projection scene image;
a projection scene image obtaining unit 62, configured to obtain a projection scene image of the projection device when the projection device is turned on, where the projection scene image is composed of a projection area image and a projection curtain image;
a first feature extraction unit 63, configured to calculate a first feature vector of the projection area image and a second feature vector of the projection curtain image;
a feature matching unit 64, configured to compare the first feature vector and the second feature vector with the classes in the index database, and identify first corner group information of the projection region image and second corner group information of the projection curtain image;
a mapping relation calculating unit 65, configured to establish a mapping relation between the projection region image and the projection curtain image according to the first corner group information and the second corner group information;
and a projection area adjusting unit 66, configured to adjust a projection area of the projection apparatus according to the mapping relationship, so that the projection area of the projection apparatus coincides with the projection curtain.
Preferably, referring to fig. 7, the index database creating unit 61 includes:
an original scene image obtaining unit 611, configured to obtain an original projection scene image of the projection apparatus, where the original projection scene image is composed of an original projection area image and an original projection curtain image;
a gray processing unit 612, configured to perform gray processing on the original projection area image and the original projection curtain image;
a second feature extraction unit 613, configured to extract a third feature vector of the original projection area image and a fourth feature vector of the original projection curtain image, respectively;
a principal component analysis unit 614, configured to perform dimensionality reduction on the third feature vector and the fourth feature vector through PCA principal component analysis, and remove a component corresponding to a minimum feature value in the third feature vector and the fourth feature vector;
a singular value decomposition unit 615, configured to map n-dimensional features in the third feature vector and the fourth feature vector to m-dimensional features through SVD singular value decomposition, respectively, and obtain a feature descriptor vector of the original projection scene image, where n > m, and n and m are positive integers;
a database writing unit 616, configured to repeatedly execute the above processes, and add the feature descriptor vectors of all the acquired original projection scene images to the local feature index database of the projection scene images;
and a feature point clustering unit 617, configured to divide the feature descriptors in the projection scene image local feature index database into K clusters through a K-means algorithm.
Preferably, the second feature extraction unit 613 includes:
a scale space construction unit 6131, configured to construct a scale space;
a feature point detection unit 6132, configured to detect feature points of the original projection area image and the original projection curtain image in each scale space;
the feature point assigning unit 6133 is configured to assign a 128-dimensional direction parameter to each feature point, so as to form a 128-dimensional feature vector.
Preferably, the scale space construction unit 6131 is specifically configured to:
convolving the original projection scene image with Gaussian kernels of different scales to generate Gaussian-blurred images at different scales;
and subtracting the Gaussian-blurred images of adjacent scales to obtain the Gaussian residual (difference-of-Gaussian) images.
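These two operations can be sketched on a 1-D signal standing in for image rows; the scale ladder, helper names, and constants below are illustrative assumptions, not values from the patent.

```python
import math

def gaussian_kernel(sigma):
    # normalized 1-D Gaussian sampled out to ~3 sigma
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, sigma):
    # convolve with the Gaussian kernel, clamping indices at the borders
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

sigmas = [1.0, 1.6, 2.56, 4.1]            # geometric ladder of scales
signal = [0.0] * 16 + [1.0] * 16          # a step edge stands in for an image row
blurred = [blur(signal, s) for s in sigmas]
# "Gaussian residual" (difference-of-Gaussian): adjacent scales subtracted
dog = [[b - a for a, b in zip(blurred[i], blurred[i + 1])]
       for i in range(len(blurred) - 1)]
```

The residual layers respond most strongly around the step edge, which is exactly the kind of structure (curtain borders, projection-area borders) the later feature detection looks for.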
Preferably, the feature point detection unit 6132 is specifically configured to:
sampling the Gaussian residual image corresponding to each scale space, and comparing each sampling point with its 8 neighbors at the same scale and the 18 points at the adjacent upper and lower scales; if the sampling point is the maximum or the minimum among these 26 points of its own scale-space layer and the two adjacent layers, it is regarded as a feature point of the Gaussian residual image at that scale.
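The 8 + 18 = 26-point comparison can be written down directly. `is_extremum` is a hypothetical helper; `dog` is assumed to be a list of same-sized 2-D layers, one per scale, and the candidate is assumed to lie strictly inside the layer and scale borders.

```python
def is_extremum(dog, s, y, x):
    # Compare the sample with its 8 in-layer neighbours and the 9 samples
    # in each of the two adjacent scale layers: 26 neighbours in total.
    v = dog[s][y][x]
    neighbours = [dog[s + ds][y + dy][x + dx]
                  for ds in (-1, 0, 1)
                  for dy in (-1, 0, 1)
                  for dx in (-1, 0, 1)
                  if (ds, dy, dx) != (0, 0, 0)]
    return v > max(neighbours) or v < min(neighbours)
```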
Preferably, the feature point assigning unit 6133 is specifically configured to:
calculating the direction vector of each feature point from the gradient direction distribution of its neighborhood pixels, so that the descriptor is rotation-invariant;
each feature point is described by 16 sub-regions, forming 16 seed points; each seed point describes the direction of a vector using 8 directions, and the magnitude of the vector in each direction is described by the Euclidean distance between that direction and the main direction of the vector, forming a 128-dimensional feature vector.
Preferably, the 8 directions are 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 2π, respectively.
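The 16 seed points × 8 orientation bins = 128 dimensions can be illustrated with a SIFT-style sketch. It departs from the text above in one hedged respect: the per-bin magnitude here is accumulated gradient magnitude, a common simplification, rather than the Euclidean-distance weighting described above; the 16×16 grid size and helper names are assumptions.

```python
import math

def orientation_bin(dx, dy):
    # quantize a gradient direction into the 8 bins 0, pi/4, ..., 7*pi/4
    ang = math.atan2(dy, dx) % (2 * math.pi)
    return int(ang // (math.pi / 4)) % 8

def describe(grads):
    # grads: 16x16 grid of (dx, dy) gradients around a feature point.
    # 4x4 sub-regions give 16 seed points; each holds an 8-bin histogram.
    hist = [0.0] * 128
    for y in range(16):
        for x in range(16):
            dx, dy = grads[y][x]
            seed = (y // 4) * 4 + (x // 4)
            hist[seed * 8 + orientation_bin(dx, dy)] += math.hypot(dx, dy)
    return hist                      # the 128-dimensional feature vector
```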
Preferably, the second feature extraction unit 613 further includes:
and a feature point filtering unit, configured to remove low-contrast feature points and unstable edge response points from the feature points of the original projection area image and the original projection curtain image in each scale space.
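A hedged sketch of such a filtering step combines a contrast threshold with the usual principal-curvature (edge-response) test on a 2×2 Hessian of the residual layer; the thresholds and the helper name are illustrative assumptions, not values from the patent.

```python
def filter_feature_points(candidates, dog, contrast_thresh=0.03, edge_ratio=10.0):
    # candidates: (scale, y, x) triples strictly inside each layer of `dog`
    kept = []
    for s, y, x in candidates:
        d = dog[s]
        if abs(d[y][x]) < contrast_thresh:
            continue                     # low contrast: unstable under noise
        # finite-difference 2x2 Hessian of the residual layer
        dxx = d[y][x + 1] + d[y][x - 1] - 2 * d[y][x]
        dyy = d[y + 1][x] + d[y - 1][x] - 2 * d[y][x]
        dxy = (d[y + 1][x + 1] - d[y + 1][x - 1]
               - d[y - 1][x + 1] + d[y - 1][x - 1]) / 4.0
        tr, det = dxx + dyy, dxx * dyy - dxy * dxy
        if det <= 0 or tr * tr / det >= (edge_ratio + 1) ** 2 / edge_ratio:
            continue                     # edge-like: one curvature dominates
        kept.append((s, y, x))
    return kept
```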
It should be noted that, since each unit of the projection apparatus provided in the embodiment of the present invention is based on the same concept as that of the embodiment of the method of the present invention, the technical effect thereof is the same as that of the embodiment of the method of the present invention, and specific contents thereof may be referred to the description in the embodiment of the method of the present invention, and are not described herein again.
It can thus be seen that the projection apparatus provided by the embodiment of the present invention likewise establishes the local feature index database of projection scene images in advance. During use of the projection apparatus, the feature vectors of the projection area image and the projection curtain image in the projection scene image are extracted and compared with the feature vector data in the index database to obtain the corner point information of the projection area and the projection curtain in the projection scene image. The projection area of the projection apparatus is then adjusted with reference to the obtained corner point information so that the projection area completely coincides with the projection curtain. The projection area is thereby calibrated automatically while the user watches, without manual adjustment, and a clear projection effect is guaranteed after correction, improving the user's viewing experience.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 8 is a schematic diagram of a projection apparatus according to another embodiment of the present invention. As shown in fig. 8, the projection apparatus of this embodiment includes: a processor 80, a memory 81 and a computer program 82 stored in said memory 81 and executable on said processor 80. The processor 80, when executing the computer program 82, implements the steps in the various method embodiments described above, such as the steps 201 to 206 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 61 to 66 shown in fig. 6.
Illustratively, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing certain functions that describe the execution of the computer program 82 in the projection device. For example, the computer program 82 may be divided into the index database creating unit 61, the projection scene image acquiring unit 62, the first feature extracting unit 63, the feature matching unit 64, the mapping relation calculating unit 65, and the projection region adjusting unit 66, and the specific functions of each unit are as follows:
the index database establishing unit 61 is used for establishing a local feature index database of the projection scene image;
a projection scene image obtaining unit 62, configured to obtain a projection scene image of the projection device when the projection device is turned on, where the projection scene image is composed of a projection area image and a projection curtain image;
a first feature extraction unit 63, configured to calculate a first feature vector of the projection area image and a second feature vector of the projection curtain image;
a feature matching unit 64, configured to compare the first feature vector and the second feature vector with the clusters in the index database, and identify first corner group information of the projection region image and second corner group information of the projection curtain image;
a mapping relation calculating unit 65, configured to establish a mapping relation between the projection region image and the projection curtain image according to the first corner group information and the second corner group information;
and a projection area adjusting unit 66, configured to adjust a projection area of the projection apparatus according to the mapping relationship, so that the projection area of the projection apparatus coincides with the projection curtain.
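Given the two corner groups, the mapping relationship of unit 65 amounts to a planar projective transform (homography) determined by four corner correspondences. The following is a minimal sketch under assumptions: exactly four corner pairs, hypothetical helper names, and a direct linear solve, whereas a production system would typically use a least-squares or RANSAC fit over many matches.

```python
import numpy as np

def homography_from_corners(src, dst):
    # Solve the 3x3 projective map H (with h33 fixed to 1) that sends the
    # four src corners to the four dst corners: two equations per pair.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def map_point(H, p):
    # apply H to one pixel coordinate (homogeneous divide)
    x, y = p
    w = H[2, 0] * x + H[2, 1] * y + H[2, 2]
    return ((H[0, 0] * x + H[0, 1] * y + H[0, 2]) / w,
            (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / w)
```

Warping the projected frame by the inverse of this map is what allows the projection area to coincide with the curtain.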
The projection device may include, but is not limited to, a processor 80 and a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of a projection device and does not constitute a limitation thereon; the projection device may include more or fewer components than shown, combine certain components, or use different components. For example, it may also include input/output devices, network access devices, buses, etc.
The processor 80 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 81 may be an internal storage unit of the projection device, such as a hard disk or a memory of the projection device. The memory 81 may also be an external storage device of the projection device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the projection device. Further, the memory 81 may include both an internal storage unit and an external storage device of the projection device. The memory 81 is used for storing the computer program and other programs and data required by the projection device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (9)

1. A projection region correction method, comprising:
establishing a local feature index database of projection scene images, wherein the index database comprises indexes of various corner points under various scales;
acquiring a projection scene image of the projection equipment, wherein the projection scene image is composed of a projection area image and a projection curtain image;
calculating a first feature vector of the projection area image and a second feature vector of the projection curtain image;
comparing the first feature vector and the second feature vector with the indexes of the various corner points under the various scales in the index database, and identifying first corner point group information of the projection area image and second corner point group information of the projection curtain image;
establishing a mapping relation between each pixel point of the projection area image and each pixel point of the projection curtain image according to the first corner point group information and the second corner point group information;
adjusting the projection area of the projection equipment according to the mapping relation, so that the projection area of the projection equipment is overlapped with the projection curtain;
the step of establishing a local feature index database of the projection scene image comprises the following steps:
acquiring an original projection scene image of the projection equipment, wherein the original projection scene image is composed of an original projection area image and an original projection curtain image;
carrying out gray level processing on the original projection area image and the original projection curtain image;
respectively extracting a third feature vector of the original projection area image and a fourth feature vector of the original projection curtain image;
performing dimensionality reduction processing on the third feature vector and the fourth feature vector through principal component analysis, and removing the components corresponding to the smallest eigenvalues in the third feature vector and the fourth feature vector;
mapping the n-dimensional features in the third feature vector and the fourth feature vector to m-dimensional features through singular value decomposition, respectively, and obtaining a feature descriptor vector of the original projection scene image, where n > m, and both n and m are positive integers;
repeatedly executing the process, and adding the feature descriptor vectors of all the obtained original projection scene images into the local feature index database of the projection scene images;
dividing the feature descriptors in the local feature index database of the projection scene image into K clusters through a K-means algorithm, wherein K is a positive integer.
2. The projection region correction method according to claim 1, wherein the extracting of the third feature vector of the original projection region image and the fourth feature vector of the original projection screen image, respectively, comprises:
constructing a scale space;
detecting characteristic points of the original projection area image and the original projection curtain image in each scale space;
and assigning a 128-dimensional direction parameter to each feature point to form a 128-dimensional feature vector.
3. The projection region correction method according to claim 2, wherein the constructing of the scale space includes:
convolving the original projection scene image with Gaussian kernels of different scales to generate Gaussian-blurred images at different scales;
and subtracting the Gaussian-blurred images of adjacent scales to obtain the Gaussian residual (difference-of-Gaussian) images.
4. The projection region correction method according to claim 3, wherein the detecting of the feature points of the original projection region image and the original projection curtain image in the respective scale spaces includes:
sampling the Gaussian residual image corresponding to each scale space, and comparing each sampling point with its 8 neighbors at the same scale and the 18 points at the adjacent upper and lower scales; if the sampling point is the maximum or the minimum among these 26 points of its own scale-space layer and the two adjacent layers, it is regarded as a feature point of the Gaussian residual image at that scale.
5. The projection region correction method according to claim 2, wherein the assigning a 128-dimensional direction parameter to each feature point, and the forming of the 128-dimensional feature vector comprises:
calculating the direction vector of each feature point from the gradient direction distribution of its neighborhood pixels, so that the descriptor is rotation-invariant;
each feature point is described by 16 sub-regions, forming 16 seed points; each seed point describes the direction of a vector using 8 directions, and the magnitude of the vector in each direction is described by the Euclidean distance between that direction and the main direction of the vector, forming a 128-dimensional feature vector.
6. The projection area correction method according to claim 5, wherein the 8 directions are 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2 and 2π, respectively.
7. The projection region correction method according to claim 2, wherein before said assigning a 128-dimensional direction parameter to each feature point, forming a 128-dimensional feature vector, further comprises:
and removing low-contrast feature points and unstable edge response points from the feature points of the original projection area image and the original projection curtain image in each scale space.
8. A projection device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 7 are implemented when the computer program is executed by the processor.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN201710990380.1A 2017-10-23 2017-10-23 Projection area correction method, projection apparatus, and computer-readable storage medium Active CN109698944B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710990380.1A CN109698944B (en) 2017-10-23 2017-10-23 Projection area correction method, projection apparatus, and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN109698944A CN109698944A (en) 2019-04-30
CN109698944B true CN109698944B (en) 2021-04-02

Family

ID=66225799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710990380.1A Active CN109698944B (en) 2017-10-23 2017-10-23 Projection area correction method, projection apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN109698944B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458909A (en) * 2019-08-05 2019-11-15 薄涛 Handle method, server, tutoring system and the medium of projected image
CN110475108A (en) * 2019-08-05 2019-11-19 薄涛 Projected picture correcting method, terminal device, system and storage medium
CN110784699B (en) * 2019-11-01 2021-06-25 成都极米科技股份有限公司 Projection processing method, projection processing device, projector and readable storage medium
CN114520895B (en) * 2020-11-18 2022-11-15 成都极米科技股份有限公司 Projection control method, device, projection optical machine and readable storage medium
CN112598728B (en) * 2020-12-23 2024-02-13 极米科技股份有限公司 Projector attitude estimation, trapezoidal correction method and device, projector and medium
CN112689136B (en) * 2021-03-19 2021-07-02 深圳市火乐科技发展有限公司 Projection image adjusting method and device, storage medium and electronic equipment
CN113628282A (en) * 2021-08-06 2021-11-09 深圳市道通科技股份有限公司 Pattern projection correction apparatus, method, and computer-readable storage medium
CN113873208B (en) * 2021-09-16 2023-07-25 峰米(北京)科技有限公司 Gamma curve adjusting method and equipment for projection equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1701603A (en) * 2003-08-06 2005-11-23 三菱电机株式会社 Method and system for determining correspondence between locations on display surface having arbitrary shape and pixels in output image of projector
CN105704466A (en) * 2016-01-29 2016-06-22 北京小鸟科技发展有限责任公司 A DLP projection method, a DLP projection apparatus and a DLP projector
CN106101677A (en) * 2016-08-17 2016-11-09 郑崧 Projection Image Adjusting system and method for adjustment


Also Published As

Publication number Publication date
CN109698944A (en) 2019-04-30

Similar Documents

Publication Publication Date Title
CN109698944B (en) Projection area correction method, projection apparatus, and computer-readable storage medium
CN106469431B (en) Image processing apparatus
Krig Computer vision metrics: Survey, taxonomy, and analysis
US11055826B2 (en) Method and apparatus for image processing
KR101121034B1 (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
AU2011250829B2 (en) Image processing apparatus, image processing method, and program
US10970821B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
CN107077725A (en) Data processing equipment, imaging device and data processing method
EP2561467A1 (en) Daisy descriptor generation from precomputed scale - space
KR20150116833A (en) Image processor with edge-preserving noise suppression functionality
CN108986197B (en) 3D skeleton line construction method and device
CN107240082B (en) Splicing line optimization method and equipment
CN110147708B (en) Image data processing method and related device
CN110782424B (en) Image fusion method and device, electronic equipment and computer readable storage medium
CN111131688B (en) Image processing method and device and mobile terminal
CN109903265B (en) Method and system for setting detection threshold value of image change area and electronic device thereof
CN111598777A (en) Sky cloud image processing method, computer device and readable storage medium
CN111383254A (en) Depth information acquisition method and system and terminal equipment
CN113744256A (en) Depth map hole filling method and device, server and readable storage medium
WO2022267939A1 (en) Image processing method and apparatus, and computer-readable storage medium
CN111311481A (en) Background blurring method and device, terminal equipment and storage medium
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
US9319666B1 (en) Detecting control points for camera calibration
CN106997366B (en) Database construction method, augmented reality fusion tracking method and terminal equipment
CN111353325A (en) Key point detection model training method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant