CN115474033A - Method for realizing virtual screen for intelligent recognition - Google Patents
Method for realizing virtual screen for intelligent recognition
- Publication number: CN115474033A
- Application number: CN202211135410.8A
- Authority
- CN
- China
- Prior art keywords
- virtual screen
- infrared image
- infrared
- matching
- projection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- method — title, claims, abstract, description (39)
- mapping — claims, abstract, description (24)
- pre-processing — claims, abstract, description (17)
- reversible — claims, description (15)
- correction — claims, description (13)
- transformation — claims, description (13)
- transmission — claims, description (12)
- filtering — claims, description (12)
- image segmentation — claims, description (7)
- measurement — claims, description (6)
- enhancing — claims, description (3)
- image smoothing — claims, description (3)
- inhibiting — claims, description (3)
- segmentation — claims, description (3)
- construction — abstract, description (8)
- process — abstract, description (6)
- beneficial effect — abstract, description (3)
- engineering process — description (10)
- processing — description (10)
- extraction — description (3)
- imaging method — description (3)
- reduction — description (3)
- development — description (2)
- developmental process — description (2)
- prolonged — description (2)
- sensation — description (2)
- alteration — description (1)
- design — description (1)
- function — description (1)
- interaction — description (1)
- modification — description (1)
- substitution reaction — description (1)
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/60—Rotation of a whole image or part thereof
-
- G06T5/70—
-
- G06T5/73—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10048—Infrared image
Abstract
The invention discloses a method for implementing a virtual screen for intelligent recognition, which relates to the technical field of data construction and comprises the following steps: S1, infrared image acquisition: all infrared images are acquired through an external projection device; S2, image preprocessing; and subsequent steps. The beneficial effect of the invention is that the virtual screen is constructed through the pipeline of acquiring infrared images, preprocessing them, extracting feature points, pattern-matching all feature points, calculating the size and rotation of the matching result, reverse-mapping that result into the projector's projection space, displaying content in the corresponding region of the projection space, and projecting the picture to the correct position.
Description
Technical Field
The invention relates to the technical field of data construction, and in particular to a method for implementing a virtual screen for intelligent recognition.
Background
With the rapid development of electronic technology, terminals have become ever more widespread, and their convenience has quickly woven them into people's daily lives. At present, however, a terminal's screen is a physical screen. Since the screen is the main interface through which the user interacts with the terminal, terminal design is constrained by screen development. For example, a terminal cannot be made arbitrarily small, because the user still needs a screen to interact with it. Conversely, a larger screen is more convenient, especially for watching video, yet the screen of a mobile terminal (such as a mobile phone) cannot be made very large because portability must be considered. The screen thus constrains the terminal and limits its appeal.
Patent publication No. CN 106951108 A discloses a virtual-screen implementation method comprising the following steps: controlling a virtual-screen projection device to project a virtual screen according to an input projection instruction; receiving light information sent by a light-sensing receiver, the light information having been reflected from the projection plane on which the virtual screen lies; calculating the distance from the virtual screen to the light-sensing receiver from the light information, and deriving the three-dimensional coordinates of the virtual screen from that distance; and determining and executing an operation instruction according to the three-dimensional coordinate transformation corresponding to the received virtual-screen operation. The method gives the terminal a virtual-screen function, which can greatly improve its operability and practicality and reduces the influence of the physical screen on the terminal. That publication also discloses a virtual-screen implementation device with the same beneficial effects.
However, the above solution still has the following problems: the acquired infrared image is not preprocessed, which degrades the quality of the final imaging; at the same time, the overall process is cumbersome, which lengthens the implementation period, wastes a great deal of working time, makes the operation hard to master quickly, and interferes with normal use.
Disclosure of Invention
The aim of the invention is to provide a method for implementing a virtual screen for intelligent recognition, in order to solve the problems identified in the background art: the acquired infrared image is not preprocessed, which degrades final imaging quality, and the overall process is cumbersome, which lengthens the implementation period, wastes working time, makes the operation hard to master quickly, and interferes with normal use.
To achieve this aim, the invention provides the following technical solution. The method for implementing a virtual screen for intelligent recognition comprises the following steps:
s1, acquiring an infrared image: acquiring all infrared images through the external projection equipment;
s2, image preprocessing: carrying out corresponding preprocessing on the infrared image received in the step S1;
s3, extracting feature points: extracting the characteristics of the infrared image, including gray scale, brightness, resolution and texture;
s4, performing pattern matching on all the feature points: matching all the characteristic points in the step S3, wherein the matching data comprises size, direction and position;
s5, calculating the size and rotation of the matching result: summarizing the matching results obtained in the step S4, and simultaneously carrying out corresponding rotation adjustment on the infrared image until the infrared image is in an optimal combination mode;
s6, reversely mapping the matching result to a projection space of the projector: projecting the matching result in the step S5 into the projector in a reverse mapping mode;
s7, displaying contents in a corresponding area of the projection space: correspondingly displaying the matching result in the step S6 through the projection space in the projector;
s8, projecting the picture to the correct position: and calibrating the designated position according to the projection display result in the step S7, and simultaneously recording the difference value between the projection coordinate finally calibrated and the matching display coordinate.
Preferably, the step S1 of acquiring the infrared image includes the following specific steps:
S101, starting the external projection device, wherein the external projection device is an infrared camera, and pressing the receive key;
and S102, receiving a single infrared image and a plurality of infrared images through a receiving database in the infrared camera, and performing background backup.
Preferably, the preprocessing in step S2 includes the following specific contents:
s201, filtering and denoising the infrared image: filtering and denoising the single infrared image and the plurality of infrared images, and inhibiting background fluctuation;
S202, target enhancement of the infrared image: when the smoothed infrared image is blurred and much detail has been lost, enhancing it with a high-pass filtering method;
s203, image segmentation of the infrared image: and separating the infrared image target from the background by a threshold segmentation method.
Preferably, the mode matching in the step S4 includes the following specific steps:
s401, firstly, extracting the characteristics of a plurality of matched infrared images to obtain specific characteristic points of size, direction and position;
s402, finding the matched characteristic point pairs by carrying out similarity measurement;
and S403, obtaining the infrared image space coordinate transformation parameters through the matched characteristic point pairs, and performing the closest infrared image registration through the coordinate transformation parameters.
Preferably, the step S5 of calculating the size and rotation of the matching result includes the following specific steps:
s501, summarizing the multiple groups of matching results;
and S502, performing azimuth rotation self-adjustment on the infrared image according to the corresponding position of the feature point.
Preferably, the step S6 of inversely mapping the matching result to the projection space of the projector includes the following specific steps:
s601, receiving the matching result;
s602, establishing the reverse mapping channel, and establishing coordinates of the starting point and the transmission point;
and S603, performing reverse-mapping projection into the projector at the transmission point.
Preferably, the step S7 includes the following specific steps when displaying the content in the corresponding area of the projection space:
s701, correspondingly displaying the matching result in the step S6 through the projection space;
s702, the display data comprise three-dimensional coordinate specific values;
and S703, self-adjusting the brightness during display and adjusting the brightness through a voice instruction.
Preferably, the step S8 of projecting the picture to the correct position includes the following specific steps:
S801, judging from the three-dimensional coordinates whether the virtual screen is tilted: if so, correcting the edge shape of the virtual screen with an edge-correction algorithm; if not, performing no subsequent operation;
and S802, when the inclination state is judged, recording the current coordinate position after the correction is finished, and recording the coordinate difference value of the two coordinate positions.
Preferably, the virtual screen is a special piece of handheld paper on which a positioning reference point and an RFID tag are arranged.
Preferably, a control terminal is externally mounted for the virtual screen, and the infrared camera, the projector and the special paper are all electrically connected to the control terminal.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a method for realizing a virtual screen for intelligent identification, which is characterized in that when in use, the virtual screen is constructed according to the procedures of acquiring an infrared image, preprocessing the image, extracting feature points, matching all the feature points in a mode, calculating the size and rotation of a matching result, reversely mapping the matching result to a projection space of a projector, and projecting display contents and pictures in a corresponding area of the projection space to correct positions.
Drawings
Fig. 1 is a schematic flowchart illustrating the overall steps of a method for implementing a virtual screen for intelligent recognition according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, the present invention provides a technical solution:
the implementation method of the virtual screen for intelligent identification comprises the following steps:
s1, acquiring an infrared image: acquiring all infrared images through the external projection equipment;
the step S1 of acquiring the infrared image comprises the following specific steps:
S101, starting the external projection device, wherein the external projection device is an infrared camera, and pressing the receive key;
s102, receiving a single infrared image and a plurality of infrared images through a receiving database in the infrared camera, and performing background backup;
s2, image preprocessing: carrying out corresponding preprocessing on the infrared image received in the step S1;
the preprocessing in the step S2 includes the following specific contents:
s201, filtering and denoising the infrared image: filtering and denoising the single infrared image and the plurality of infrared images, and inhibiting background fluctuation;
S202, target enhancement of the infrared image: when the smoothed infrared image is blurred and much detail has been lost, enhancing it with a high-pass filtering method;
S203, image segmentation of the infrared image: the target is separated from the background by threshold segmentation. When a threshold rule is used, every pixel whose gray value is greater than or equal to the threshold is judged to belong to the target, and every pixel whose gray value is below the threshold is judged to belong to the background;
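The threshold rule of S203 can be sketched in a few lines of NumPy; the threshold of 128 and the toy 4x4 frame below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def threshold_segment(img, t):
    """Pixels with gray value >= t are labelled target, the rest background."""
    target = img >= t
    return target, ~target

# Toy 4x4 "infrared" frame: a bright target patch over a dim background.
frame = np.array([[200, 210,  30,  20],
                  [205, 220,  25,  15],
                  [ 30,  25,  10,  12],
                  [ 20,  18,  14,  11]], dtype=np.uint8)

target, background = threshold_segment(frame, 128)
print(int(target.sum()))  # 4 pixels belong to the target
```

In practice the threshold would be chosen adaptively (e.g. from the gray-level histogram) rather than fixed, but the decision rule is exactly the one stated above.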
s3, extracting feature points: extracting the characteristics of the infrared image, including gray scale, brightness, resolution and texture;
s4, performing pattern matching on all the feature points: matching all the characteristic points in the step S3, wherein the matching data comprises size, direction and position; the mode matching in the step S4 comprises the following specific steps:
s401, firstly, extracting the characteristics of a plurality of matched infrared images to obtain specific characteristic points of size, direction and position;
s402, finding the matched characteristic point pairs by carrying out similarity measurement;
s403, obtaining the infrared image space coordinate transformation parameters through the matched characteristic point pairs, and performing the closest infrared image registration through the coordinate transformation parameters;
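The patent does not name a concrete estimator for the coordinate-transformation parameters of S401–S403. One standard choice for recovering size (scale), direction (rotation) and position (translation) from matched feature-point pairs is a least-squares similarity fit (Umeyama's method), sketched here as an assumption with illustrative point pairs.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares s, R, t with dst ≈ s·R·src + t (Umeyama-style fit)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)              # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                   # guard against reflections
    R = U @ D @ Vt
    var_src = (sc ** 2).sum() / len(src)    # mean squared distance from centroid
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Matched pairs: dst is src scaled 2x, rotated 90 degrees, shifted by (1, 1).
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = np.array([[1., 1.], [1., 3.], [-1., 1.], [-1., 3.]])
s, R, t = estimate_similarity(src, dst)
angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # recovered rotation: 90 degrees
```

The recovered scale, angle and translation are exactly the "size, direction and position" data the matching step is said to produce.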
s5, calculating the size and rotation of the matching result: summarizing the matching results obtained in the step S4, and simultaneously carrying out corresponding rotation adjustment on the infrared image until the infrared image is in an optimal combination mode;
the step S5 of calculating the size and the rotation of the matching result comprises the following specific steps:
s501, summarizing the multiple groups of matching results;
s502, performing azimuth rotation self-adjustment on the infrared image according to the corresponding position of the feature point;
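The azimuth-rotation self-adjustment of S502 can be sketched as undoing an estimated rotation about a reference center; the 30-degree angle and corner coordinates below are illustrative assumptions.

```python
import numpy as np

def counter_rotate(points, angle_deg, center):
    """Undo an estimated rotation: rotate points by -angle_deg about center."""
    a = np.radians(-angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return (points - center) @ R.T + center

# Corners of a feature patch that pattern matching found rotated by 30 degrees.
center = np.array([50.0, 50.0])
upright = np.array([[40., 40.], [60., 40.], [60., 60.], [40., 60.]])
a = np.radians(30.0)
Rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
rotated = (upright - center) @ Rot.T + center

recovered = counter_rotate(rotated, 30.0, center)  # back to the upright corners
```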
s6, reversely mapping the matching result to a projection space of the projector: projecting the matching result in the step S5 into the projector in a reverse mapping mode;
the step S6 of reversely mapping the matching result to the projection space of the projector comprises the following specific steps:
s601, receiving the matching result;
s602, establishing the reverse mapping channel, and establishing coordinates of the starting point and the transmission point;
S603, performing reverse-mapping projection into the projector at the transmission point;
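The patent leaves the reverse mapping of S601–S603 abstract. If the projector-to-camera relation is modelled as a planar homography — a common assumption for projector-camera systems, not something the patent states — then reverse mapping is simply application of the inverse matrix; the matrix values below are illustrative.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography (projective transform)."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]

# Hypothetical projector->camera transform; its inverse is the "reverse
# mapping" back into the projector's projection space.
H = np.array([[1.2, 0.1, 30.0],
              [0.0, 1.1, 20.0],
              [0.0, 0.0,  1.0]])
H_inv = np.linalg.inv(H)

proj_pts = np.array([[0.0, 0.0], [100.0, 50.0], [320.0, 240.0]])
cam_pts = apply_homography(H, proj_pts)    # forward: projector -> camera
back = apply_homography(H_inv, cam_pts)    # reverse mapping: camera -> projector
```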
s7, displaying contents in a corresponding area of the projection space: correspondingly displaying the matching result in the step S6 through the projection space in the projector;
the step S7 includes the following specific steps when displaying content in the corresponding area of the projection space:
s701, correspondingly displaying the matching result in the step S6 through the projection space;
s702, the display data comprise three-dimensional coordinate specific values;
s703, self-adjusting the brightness during display and adjusting the brightness through a voice instruction;
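The brightness self-adjustment of S703 can be sketched as a gain toward a target mean intensity; the voice-command path is hardware-dependent and omitted, and `target_mean=128` is an illustrative assumption.

```python
import numpy as np

def auto_brightness(img, target_mean=128.0):
    """Scale pixel intensities so the frame's mean brightness hits target_mean."""
    gain = target_mean / max(float(img.mean()), 1e-6)
    return np.clip(img.astype(np.float64) * gain, 0, 255).astype(np.uint8)

dim = np.full((4, 4), 64, dtype=np.uint8)  # under-exposed projection frame
bright = auto_brightness(dim)              # every pixel scaled up to 128
```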
s8, projecting the picture to the correct position: and calibrating the designated position according to the projection display result in the step S7, and simultaneously recording the difference value between the projection coordinate finally calibrated and the matching display coordinate.
The step S8 comprises the following specific steps when the picture is projected to the correct position:
S801, judging from the three-dimensional coordinates whether the virtual screen is tilted: if so, correcting the edge shape of the virtual screen with an edge-correction algorithm; if not, performing no subsequent operation;
and S802, when the inclination state is judged, recording the current coordinate position after the correction is finished, and recording the coordinate difference value of the two coordinate positions.
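The edge-correction algorithm of S801 is not specified. One common realization is a keystone correction that maps the four detected (tilted) screen corners onto an upright rectangle with a homography computed by the direct linear transform; the corner coordinates below are illustrative assumptions.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: exact homography through four point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)             # null-space vector, reshaped to 3x3

def warp_points(H, pts):
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]

# Detected (tilted) virtual-screen corners and the upright rectangle they
# should occupy after edge correction.
tilted  = np.array([[12., 8.], [205., 15.], [198., 160.], [5., 150.]])
upright = np.array([[0., 0.], [200., 0.], [200., 150.], [0., 150.]])

H = homography_from_points(tilted, upright)
corrected = warp_points(H, tilted)          # corners land on the rectangle
```

The tilt test itself can be as simple as checking whether the detected corners already coincide with the upright rectangle; the recorded "coordinate difference" of S802 is then `upright - tilted`.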
Further, the virtual screen is a special piece of handheld paper, such as a special invitation card, on which a positioning reference point and an RFID tag are arranged.
Furthermore, a control terminal is externally mounted for the virtual screen, and the infrared camera, the projector and the special paper are all electrically connected to the control terminal.
Example 2
When the implementation method of the virtual screen for intelligent recognition is used with a special invitation-card display:
s1, a worker acquires all infrared images through an infrared camera by starting a control terminal, receives a single infrared image and a plurality of infrared images, performs background backup, and performs corresponding preprocessing on the received infrared images, including filtering and noise reduction processing of the infrared images, target enhancement processing of the infrared images and image segmentation processing of the infrared images;
s2, extracting the characteristics of the infrared images, including gray scale, brightness, resolution and texture, performing mode matching on all characteristic points, wherein the matching data includes size, direction and position, firstly performing characteristic extraction on a plurality of matched infrared images to obtain specific characteristic points of the size, the direction and the position, finding matched characteristic point pairs through similarity measurement, then obtaining space coordinate transformation parameters of the infrared images through the matched characteristic point pairs, and performing closest infrared image registration through the coordinate transformation parameters;
s3, summarizing the obtained matching results, performing azimuth rotation self-adjustment on the infrared image according to the corresponding positions of the feature points until the infrared image is in an optimal combination mode, receiving the matching results, establishing a reverse mapping channel, establishing coordinates of the starting point and the transmission point, and performing reverse mapping projection on the inside of a transmission point projector;
and S4, the matching result is displayed through the projection space in the projector, the display data including the specific three-dimensional coordinate values. Brightness is self-adjusted during display and can also be adjusted by voice command. The designated position is calibrated according to the projection display result, and the difference between the finally calibrated projection coordinates and the matched display coordinates is recorded. Whether the virtual screen is tilted is judged from the three-dimensional coordinates: if tilted, the edge shape of the virtual screen is corrected with an edge-correction algorithm; if normal, no subsequent operation is performed. When a tilted state is detected, the coordinate position after correction is recorded, together with the difference between the two coordinate positions. Finally, a projection location is selected on the handheld special invitation card, completing the construction and implementation of a normal virtual screen.
Example 1
When the implementation method of the virtual screen for intelligent recognition is used with an AR-technology display:
s1, a worker acquires all infrared images through an infrared camera by starting a control terminal, receives a single infrared image and a plurality of infrared images, performs background backup, and performs corresponding preprocessing on the received infrared images, including filtering and noise reduction processing of the infrared images, target enhancement processing of the infrared images and image segmentation processing of the infrared images;
s2, extracting the characteristics of the infrared images, including gray scale, brightness, resolution and texture, performing mode matching on all characteristic points, wherein matched data comprise size, direction and position, firstly performing characteristic extraction on a plurality of matched infrared images to obtain specific characteristic points of the size, the direction and the position, finding matched characteristic point pairs through similarity measurement, then obtaining space coordinate transformation parameters of the infrared images through the matched characteristic point pairs, and performing infrared image registration which is most similar to that presented by AR technology through the coordinate transformation parameters;
s3, summarizing the obtained AR technology matching results, performing azimuth rotation self-adjustment on the infrared image according to the corresponding positions of the feature points until the infrared image is in an optimal combination mode, receiving the AR technology matching results, establishing a reverse mapping channel, determining the coordinates of the starting point and the transmission point, and performing reverse mapping projection on the inside of a transmission point projector;
S4, the AR matching result is displayed through the projection space in the projector, the display data including the specific three-dimensional coordinate values. Brightness is self-adjusted during display and can also be adjusted by voice command. The designated position is calibrated according to the projection display result, and the difference between the finally calibrated projection coordinates and the matched display coordinates is recorded. Whether the virtual screen is tilted is judged from the three-dimensional coordinates: if tilted, the edge shape of the virtual screen is corrected with an edge-correction algorithm; if normal, no subsequent operation is performed. When a tilted state is detected, the coordinate position after correction is recorded, together with the coordinate difference between the two. The activated AR equipment is checked accordingly, and finally a projection location is selected within the designated AR equipment and space, completing the construction and implementation of a normal AR virtual screen. This ensures that the virtual information on the virtual screen stays registered with the real scene: its position, size and motion path must match the environment, so that virtual and real are blended.
The working principle of the invention is as follows. In use, a worker starts the control terminal and acquires all infrared images through the infrared camera, receiving both single and multiple infrared images and backing them up in the background. The received infrared images are then preprocessed: filtering and noise reduction, target enhancement, and image segmentation.
Next, the features of the infrared images are extracted, including gray scale, brightness, resolution and texture, and all feature points are pattern-matched on size, direction and position. Feature extraction on the matched infrared images yields specific feature points for size, direction and position; matched feature-point pairs are found by similarity measurement; the spatial coordinate transformation parameters of the infrared images are obtained from the matched pairs; and the closest infrared image registration is performed with those parameters.
The matching results are then summarized, and the infrared image is self-adjusted in azimuth and rotation according to the corresponding feature-point positions until it reaches the optimal combination. The matching result is received, a reverse-mapping channel is established with the coordinates of the starting point and the transmission point, and reverse-mapping projection is performed into the projector at the transmission point.
The matching result is displayed through the projection space in the projector, the display data including the specific three-dimensional coordinate values; brightness is self-adjusted during display and can also be adjusted by voice command. The designated position is calibrated according to the projection display result, and the difference between the finally calibrated projection coordinates and the matched display coordinates is recorded. Whether the virtual screen is tilted is judged from the three-dimensional coordinates: if tilted, its edge shape is corrected with an edge-correction algorithm; if normal, no subsequent operation is performed. When a tilted state is detected, the coordinate position after correction is recorded, together with the difference between the two coordinate positions. Finally, a projection location is selected on the handheld special invitation card, completing the construction and implementation of a normal virtual screen.
In this way the method constructs the virtual screen through the pipeline of acquiring infrared images, preprocessing them, extracting feature points, pattern-matching all feature points, calculating the size and rotation of the matching result, reverse-mapping that result into the projector's projection space, and projecting the display content and picture to the correct position in the corresponding region of the projection space. The user can thus operate the terminal through the virtual screen, which greatly improves the terminal's operability and practicality and reduces the influence of the physical screen on the terminal. Preprocessing the infrared images improves the quality of the final imaging and meets the requirements of use. Recording the difference between the finally calibrated projection coordinates and the matched display coordinates makes it easy to analyze the factors that caused deviation during construction and to follow up. The operation period is shortened, time is saved, the overall level of automation is raised through intelligent management, and the operation is convenient and fast.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (10)
1. A method for implementing a virtual screen for intelligent recognition, characterized by comprising the following steps:
s1, acquiring an infrared image: acquiring all infrared images through the external projection equipment;
s2, image preprocessing: carrying out corresponding preprocessing on the infrared image received in the step S1;
s3, extracting feature points: extracting the characteristics of the infrared image, including gray scale, brightness, resolution and texture;
s4, performing pattern matching on all the feature points: matching all the characteristic points in the step S3, wherein the matching data comprises size, direction and position;
S5, calculating the size and rotation of the matching result: summarizing the matching results obtained in the step S4, and simultaneously performing corresponding rotation adjustment on the infrared image until an optimal combination is reached;
s6, reversely mapping the matching result to a projection space of the projector: projecting the matching result in the step S5 into the projector in a reverse mapping mode;
s7, displaying contents in a corresponding area of the projection space: correspondingly displaying the matching result in the step S6 through the projection space in the projector;
s8, projecting the picture to the correct position: and calibrating the designated position according to the projection display result in the step S7, and simultaneously recording the difference value between the projection coordinate finally calibrated and the matching display coordinate.
2. The method for implementing a virtual screen for smart recognition according to claim 1, wherein: the step S1 of acquiring the infrared image comprises the following specific steps:
S101, turning on the external projection equipment, which is an infrared camera, and activating a receiving key;
S102, receiving the single and multiple infrared images through a receiving database in the infrared camera, and backing them up in the background.
3. The method for implementing a virtual screen for smart recognition according to claim 1, wherein: the preprocessing in the step S2 includes the following specific contents:
S201, filtering and denoising the infrared image: filtering and denoising the single and multiple infrared images, and suppressing background fluctuation;
S202, target enhancement of the infrared image: when the smoothed infrared image is blurred and has lost much detail, enhancing it by a high-pass filtering method;
S203, image segmentation of the infrared image: separating the infrared image target from the background by a threshold segmentation method.
4. The method for implementing a virtual screen for smart recognition according to claim 1, wherein: the mode matching in the step S4 comprises the following specific steps:
s401, firstly, extracting the characteristics of a plurality of matched infrared images to obtain specific characteristic points of size, direction and position;
s402, finding the matched characteristic point pairs by carrying out similarity measurement;
and S403, obtaining the infrared image space coordinate transformation parameters through the matched characteristic point pairs, and performing the closest infrared image registration through the coordinate transformation parameters.
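Steps S402 and S403 (similarity measurement, then coordinate-transformation parameters) can be illustrated with a nearest-descriptor matcher and a least-squares similarity-transform fit. The Euclidean similarity measure and the Umeyama-style fit are illustrative assumptions; the same fit also yields the size (scale) and rotation needed in claim 5:

```python
import numpy as np

def match_by_similarity(desc_a, desc_b):
    """S402: pair each feature in image A with its most similar feature in image B
    (similarity measured here as Euclidean distance between descriptors)."""
    desc_a, desc_b = np.asarray(desc_a, float), np.asarray(desc_b, float)
    return [(i, int(np.argmin(np.linalg.norm(desc_b - d, axis=1))))
            for i, d in enumerate(desc_a)]

def fit_similarity_transform(src, dst):
    """S403 / S5: recover scale s, rotation R and translation t such that
    dst ~ s * R @ src + t, by a least-squares (Umeyama-style) fit."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)        # unnormalized cross-covariance
    R = U @ Vt
    if np.linalg.det(R) < 0:                 # guard against a reflection
        U[:, -1] *= -1
        R = U @ Vt
    s = S.sum() / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

The recovered (s, R, t) are the "coordinate transformation parameters" used for the closest infrared image registration.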
5. The method for implementing a virtual screen for smart recognition according to claim 4, wherein: the step S5 of calculating the size and the rotation of the matching result comprises the following specific steps:
s501, summarizing the multiple groups of matching results;
and S502, performing azimuth rotation self-adjustment on the infrared image according to the corresponding position of the feature point.
6. The method for implementing a virtual screen for smart recognition according to claim 1, wherein: the step S6 of inversely mapping the matching result to the projection space of the projector comprises the following specific steps:
s601, receiving the matching result;
S602, establishing a reverse-mapping channel, and establishing coordinates of the starting point and the transmission point;
S603, performing reverse-mapping projection into the projector at the transmission point.
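The reverse mapping of step S6 amounts to inverting the transform that relates the projector's projection space to the camera's image space. The homography model below is a hypothetical sketch, since the patent does not specify the mapping model:

```python
import numpy as np

def reverse_map_to_projector(h_proj_to_cam, pts_cam):
    """S6: map matched camera-space points back into the projector's projection
    space by applying the inverse of the projector->camera homography."""
    h_cam_to_proj = np.linalg.inv(np.asarray(h_proj_to_cam, float))
    pts = np.asarray(pts_cam, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = homog @ h_cam_to_proj.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian
```

In practice the projector-to-camera homography would be estimated once during calibration (e.g. from the positioning reference points on the special paper) and reused for every frame.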
7. The method for implementing a virtual screen for smart recognition according to claim 1, wherein: the step S7 includes the following specific steps when displaying content in the corresponding area of the projection space:
s701, correspondingly displaying the matching result in the step S6 through the projection space;
s702, the display data comprise three-dimensional coordinate specific values;
and S703, self-adjusting the brightness during display, wherein the brightness can also be adjusted through a voice instruction.
8. The method for implementing a virtual screen for smart recognition according to claim 7, wherein: the step S8 comprises the following specific steps when the picture is projected to the correct position:
s801, judging whether the virtual screen is inclined or not according to the three-dimensional coordinates, if so, correcting the edge shape of the virtual screen by using an edge correction algorithm, and if not, not performing subsequent operation;
and S802, when the inclination state is judged, recording the current coordinate position after the correction is finished, and recording the coordinate difference value of the two coordinate positions.
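Steps S801 and S802 can be sketched as follows. Taking the projector's optical axis as z and using a 2-degree tolerance are assumptions for illustration; the patent only states that tilt is judged from the three-dimensional coordinates:

```python
import numpy as np

def is_tilted(corners_3d, tol_deg=2.0):
    """S801: judge whether the virtual screen is tilted from the three-dimensional
    coordinates of three of its corners, via the angle between the screen plane's
    normal and the assumed projection axis."""
    p = np.asarray(corners_3d, float)
    n = np.cross(p[1] - p[0], p[2] - p[0])
    n = n / np.linalg.norm(n)
    axis = np.array([0.0, 0.0, 1.0])         # assumed projection (optical) axis
    angle = np.degrees(np.arccos(min(1.0, abs(float(n @ axis)))))
    return bool(angle > tol_deg)

def coordinate_difference(before, after):
    """S802: record the difference between the pre- and post-correction coordinates."""
    return (np.asarray(after, float) - np.asarray(before, float)).tolist()
```

When `is_tilted` returns True, the edge-correction step would run and `coordinate_difference` would log the offset for later deviation analysis, as described in claim 8.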
9. The method for implementing a virtual screen for smart recognition according to claim 1, wherein: the virtual screen is a piece of handheld special paper, on which a positioning reference point and an RFID tag are arranged.
10. The method for implementing a virtual screen for smart recognition according to claim 9, wherein: a control terminal is externally arranged for the virtual screen, and the infrared camera, the projector and the special paper are all electrically connected with the control terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211135410.8A CN115474033A (en) | 2022-09-19 | 2022-09-19 | Method for realizing virtual screen for intelligent recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115474033A true CN115474033A (en) | 2022-12-13 |
Family
ID=84332562
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211135410.8A Pending CN115474033A (en) | 2022-09-19 | 2022-09-19 | Method for realizing virtual screen for intelligent recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115474033A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101907954A (en) * | 2010-07-02 | 2010-12-08 | 中国科学院深圳先进技术研究院 | Interactive projection system and interactive projection method |
WO2013084559A1 (en) * | 2011-12-05 | 2013-06-13 | Akiba Tetsuya | Projection system and method |
CN103838437A (en) * | 2014-03-14 | 2014-06-04 | 重庆大学 | Touch positioning control method based on projection image |
CN104021532A (en) * | 2014-06-19 | 2014-09-03 | 电子科技大学 | Image detail enhancement method of infrared image |
CN104185000A (en) * | 2014-08-11 | 2014-12-03 | 惠州华阳通用电子有限公司 | Automatic calibration method for cross screen interactions |
CN105554486A (en) * | 2015-12-22 | 2016-05-04 | Tcl集团股份有限公司 | Projection calibration method and device |
CN106951108A (en) * | 2017-03-27 | 2017-07-14 | 宇龙计算机通信科技(深圳)有限公司 | A kind of virtual screen implementation method and device |
CN110519578A (en) * | 2019-08-16 | 2019-11-29 | 深圳供电局有限公司 | A kind of projecting method and optical projection system |
CN112950685A (en) * | 2021-03-12 | 2021-06-11 | 南京航空航天大学 | Infrared and visible light image registration method, system and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111046744B (en) | Method and device for detecting attention area, readable storage medium and terminal equipment | |
US10165194B1 (en) | Multi-sensor camera system | |
US9727775B2 (en) | Method and system of curved object recognition using image matching for image processing | |
US9894298B1 (en) | Low light image processing | |
KR102566998B1 (en) | Apparatus and method for determining image sharpness | |
US9414037B1 (en) | Low light image registration | |
CN107613202B (en) | Shooting method and mobile terminal | |
CN106981078B (en) | Sight line correction method and device, intelligent conference terminal and storage medium | |
KR20200023651A (en) | Preview photo blurring method and apparatus and storage medium | |
KR20040107890A (en) | Image slope control method of mobile phone | |
US9058655B2 (en) | Region of interest based image registration | |
CN109691080B (en) | Image shooting method and device and terminal | |
KR102383129B1 (en) | Method for correcting image based on category and recognition rate of objects included image and electronic device for the same | |
Jung et al. | Robust upright adjustment of 360 spherical panoramas | |
CN111612696B (en) | Image stitching method, device, medium and electronic equipment | |
CN109905594B (en) | Method of providing image and electronic device for supporting the same | |
CN113301320B (en) | Image information processing method and device and electronic equipment | |
CN114298902A (en) | Image alignment method and device, electronic equipment and storage medium | |
CN111383254A (en) | Depth information acquisition method and system and terminal equipment | |
US10878577B2 (en) | Method, system and apparatus for segmenting an image of a scene | |
CN114615480A (en) | Projection picture adjusting method, projection picture adjusting device, projection picture adjusting apparatus, storage medium, and program product | |
WO2022087846A1 (en) | Image processing method and apparatus, device, and storage medium | |
WO2024055531A1 (en) | Illuminometer value identification method, electronic device, and storage medium | |
CN107995476B (en) | Image processing method and device | |
CN115474033A (en) | Method for realizing virtual screen for intelligent recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||