CN114286064A - Real-time focusing method, device, system and computer readable storage medium - Google Patents


Info

Publication number
CN114286064A
CN114286064A (application CN202010980932.2A)
Authority
CN
China
Prior art keywords
video source data
focusing
real-time
shot image
Prior art date
Legal status
Pending
Application number
CN202010980932.2A
Other languages
Chinese (zh)
Inventor
弓殷强
邓岳慈
余新
李屹
Current Assignee
Shenzhen Appotronics Corp Ltd
Original Assignee
Appotronics Corp Ltd
Priority date
Filing date
Publication date
Application filed by Appotronics Corp Ltd filed Critical Appotronics Corp Ltd
Priority to CN202010980932.2A priority Critical patent/CN114286064A/en
Priority to PCT/CN2021/116773 priority patent/WO2022057670A1/en
Publication of CN114286064A publication Critical patent/CN114286064A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F18/00 Pattern recognition
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 Image analysis
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N9/00 Details of colour television systems
                    • H04N9/12 Picture reproducers
                        • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Abstract

The application discloses a real-time focusing method, device, system, and computer-readable storage medium. The method includes: acquiring video source data and a shot image; processing the video source data and the shot image to obtain feature information of each; and generating a control instruction based on the feature information of the video source data and the feature information of the shot image, then sending the control instruction to a motor so that the motor drives the projection lens to move to realize focusing. In this way, automatic real-time focusing can be achieved.

Description

Real-time focusing method, device, system and computer readable storage medium
Technical Field
The present application relates to the field of projection technologies, and in particular, to a real-time focusing method, device, system, and computer-readable storage medium.
Background
Auto-focusing is an important function of a projector, allowing it to adapt to more usage scenarios. The specific requirements can be subdivided into two aspects. First, the focal plane must be reset after initial installation or after the projector is moved. Second, when the projector stays on for a long time, heat generation causes temperature changes, and the lens and optical path can suffer thermal defocus under those changes; in addition, under certain working conditions, vibration may cause misalignment and therefore blur. Such real-time blur cannot be corrected by current auto-focusing technology. Traditional focusing schemes need to pause the display and project a specific reference picture for focusing, so non-inductive (imperceptible) focusing cannot be achieved. Alternatively, a motion sensor can be installed to adjust the angle of the focusing motor; however, because motion sensors have large errors and noise, and because the motion of the projector and its relative relation to the screen are complex, this approach does not work well and cannot correct blur caused by thermal defocus.
Disclosure of Invention
The application provides a real-time focusing method, a real-time focusing device, a real-time focusing system and a computer readable storage medium, which can realize automatic real-time focusing.
In order to solve the above technical problem, a technical solution adopted by the present application is to provide a real-time focusing method, including: acquiring video source data and a shot image; processing the video source data and the shot image to obtain characteristic information of the video source data and characteristic information of the shot image; and generating a control instruction based on the characteristic information of the video source data and the characteristic information of the shot image, and sending the control instruction to the motor so that the motor drives the projection lens to move to realize focusing.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a real-time focusing apparatus, which includes a memory and a processor connected to each other; the memory is used for storing a computer program which, when executed by the processor, implements the real-time focusing method described above.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a projection system, including: the real-time focusing device is used for receiving video source data; the projection device is connected with the real-time focusing device and used for receiving the video source data sent by the real-time focusing device and performing projection display, wherein the projection device comprises a projection lens and a motor which are connected with each other; the camera device is connected with the real-time focusing device and is used for shooting the projection display image displayed by the projection device to obtain a shot image corresponding to the video source data; the real-time focusing device is also used for processing the video source data and the shot image to obtain the characteristic information of the video source data and the characteristic information of the shot image; and generating a control instruction based on the characteristic information of the video source data and the characteristic information of the shot image, and sending the control instruction to the motor so that the motor drives the projection lens to move to realize focusing.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the real-time focusing method described above.
Through the above scheme, the beneficial effects of the application are as follows: the received video source data is stored, and the projection device is controlled to display it so as to obtain a shot image corresponding to the video source data; the video source data and the shot image are then processed to obtain corresponding feature information; the adjustment direction of the projection lens is determined from the feature information of the video source data and that of the shot image; and a corresponding control instruction is generated according to the adjustment direction to control the motor to drive the projection lens to move. Focusing can thus be performed while the picture is being played, without manual focusing, so that real-time, non-inductive focusing can be realized and thermal defocus can be corrected.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort. Wherein:
FIG. 1 is a schematic flowchart illustrating an embodiment of a real-time focusing method provided in the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a real-time focusing method provided in the present application;
FIG. 3 is a schematic flow chart of step 131 in the embodiment shown in FIG. 2;
FIG. 4 is a schematic view of an image taken in the embodiment shown in FIG. 2;
FIG. 5 is a schematic structural diagram of an embodiment of a real-time focusing apparatus provided in the present application;
FIG. 6 is a schematic diagram of an embodiment of a projection system provided in the present application;
FIG. 7 is a schematic diagram of the connection between the video source and the real-time focusing device in the embodiment shown in FIG. 6;
FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a real-time focusing method, which comprises the following steps:
acquiring video source data and a shot image;
processing the video source data and the shot image to obtain characteristic information of the video source data and characteristic information of the shot image;
and generating a control instruction based on the characteristic information of the video source data and the characteristic information of the shot image, and sending the control instruction to the motor so that the motor drives the projection lens to move to realize focusing.
The following description is given with reference to specific examples:
referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a real-time focusing method provided in the present application, the method including:
step 11: video source data and shot images are acquired.
The video source data can be pixel values of images; different video source data can be received in different time periods, and the data can be acquired online in real time, received from other devices, or read from a storage device. To facilitate comparison with the shot image, the video source data can be stored in the real-time focusing device. It can be understood that the amount of video source data stored in the real-time focusing device is relatively small and can be cleaned regularly; video source signals within a preset time before and after the current video source data can be acquired in real time, for example, the video source data within the last 5 seconds can be buffered.
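The short rolling buffer described above could be sketched as follows. This is an illustrative sketch only: the class name `FrameBuffer`, its fields, and the 5-second window are assumptions for demonstration, not the patent's implementation.

```python
import time
from collections import deque

class FrameBuffer:
    """Keep only the most recent `max_age` seconds of source frames."""
    def __init__(self, max_age=5.0):
        self.max_age = max_age
        self._frames = deque()  # entries: (timestamp, frame_id, pixels)

    def push(self, frame_id, pixels, now=None):
        now = time.monotonic() if now is None else now
        self._frames.append((now, frame_id, pixels))
        # regular cleaning: evict everything older than the window
        while self._frames and now - self._frames[0][0] > self.max_age:
            self._frames.popleft()

    def get(self, frame_id):
        # fetch the buffered frame matching the shot image's frame id
        for _, fid, pixels in self._frames:
            if fid == frame_id:
                return pixels
        return None
```

Keeping the buffer keyed by frame id lets the focusing device later retrieve exactly the source frame that was on screen when the camera fired.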
Furthermore, the video source data sent by a video source can be received and forwarded to the projection device, and the projection device is controlled to display the video source data and generate a projection display image. After the projection device starts to display, a synchronous shooting signal can be sent to the camera device to control it to shoot the projection display image on the projection screen, thereby obtaining a shot image corresponding to the video source data; the shot image and the video source data share at least some content. In addition, the real-time focusing device can take out the buffered frame of video source data from the same moment so as to compare it with the shot image.
In other embodiments, the received video source data may be buffered completely, and then a set of video source data may be retrieved from the stored video source data for projection display.
It will be appreciated that video source data may be acquired every frame, or only every few frames or every few seconds.
Step 12: and processing the video source data and the shot image to obtain the characteristic information of the video source data and the characteristic information of the shot image.
Whether focusing is needed can be judged from the degree of blur of the shot image. However, from the shot image alone it is difficult to tell whether the blur is caused by defocus or is present in the source content itself, so the shot image should be compared with the video source data. Specifically, only a part of the shot image may be compared with part of the video source data (a local comparison), or the whole shot image may be compared with the whole of the video source data (a global comparison). If the video source data is clear and the shot image is also clear, no focusing adjustment is needed; if the video source data is clear but the shot image is blurred, refocusing is needed. Further, after the shot image is obtained, the video source data and the shot image can be analyzed with an image processing method, and their feature information extracted respectively.
It will be appreciated that blur is relative to sharpness: if the difference in sharpness between the video source data and the shot image is within a preset range, the two can be judged equally sharp (or equally blurred); otherwise the shot image is judged blurred.
In a specific embodiment, the feature information includes feature points; a specific image processing algorithm or a deep learning algorithm can be used to process the video source data and the shot image to obtain a plurality of feature points in each. For example, the feature points in an image may be extracted using a feature extraction algorithm.
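As one deliberately simple stand-in for such a feature extraction algorithm, the sketch below marks pixels with a large local gradient as feature points. The function name, the central-difference gradient, and the threshold are illustrative assumptions, not the patent's specific algorithm.

```python
def feature_points(img, thresh):
    """Return (row, col) positions whose central-difference gradient
    magnitude exceeds `thresh`; img is a list of rows of grayscale
    values."""
    pts = []
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if gx * gx + gy * gy > thresh * thresh:
                pts.append((y, x))
    return pts
```

Run on both the source frame and the shot image, this yields the two point sets that the later matching step consumes.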
Step 13: and generating a control instruction based on the characteristic information of the video source data and the characteristic information of the shot image, and sending the control instruction to the motor so that the motor drives the projection lens to move to realize focusing.
The projection device comprises a projection lens and a motor which are connected with each other, after the characteristic information is generated, a control instruction can be generated by utilizing the characteristic information, and then the control instruction is sent to the motor, so that the motor drives the projection lens to move, the focus and the focusing position are adjusted, and the focusing is realized.
In a specific embodiment, taking the adjustment of the position of the projection lens as an example for explanation, the method specifically includes the following steps:
step 131: and determining a corresponding focusing area based on the characteristic information of the video source data and the characteristic information of the shot image.
After extracting the feature points of the video source data and the feature points of the shot image, the feature points can be used to determine a focusing area, and the processing is performed in a manner shown in fig. 3, which specifically includes the following steps:
step 1311: and matching the plurality of characteristic points in the video source data with the plurality of characteristic points in the shot image to determine corresponding matching areas.
The comparison is between the shot image and the source video data. However, because of the position or field of view of the camera device, the image it captures (the shot image) may not coincide exactly with the actual projection image (the projection display image); in that case the shot image cannot directly represent the actual projection image, so to facilitate comparison the video source data and the shot image are matched first. Local parts of the two can be compared to find the video source data corresponding to at least a partial area of the shot image, establishing the correspondence between them. For example, as shown in fig. 4, denote the shot image as A and a local area within it as B; the pixel values in local area B are matched against the video source data, the pixel values in the video source data with the smallest difference from those in local area B are found, and the area formed by these pixel values is the area matched with local area B, i.e., the matching area.
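A minimal sketch of the "smallest pixel difference" search just described, written as an exhaustive sum-of-squared-differences scan. A real system would use a faster matcher; the names here are assumptions for illustration.

```python
def find_matching_area(source, patch):
    """Slide `patch` (local area B) over `source` (video source data)
    and return the top-left corner of the window with the smallest sum
    of squared pixel differences, plus that SSD value."""
    ph, pw = len(patch), len(patch[0])
    sh, sw = len(source), len(source[0])
    best_pos, best_ssd = None, float("inf")
    for y in range(sh - ph + 1):
        for x in range(sw - pw + 1):
            ssd = sum((source[y + i][x + j] - patch[i][j]) ** 2
                      for i in range(ph) for j in range(pw))
            if ssd < best_ssd:
                best_pos, best_ssd = (y, x), ssd
    return best_pos, best_ssd
```

The returned window is the "matching area": the region of source data that local area B of the shot image corresponds to.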
Further, before matching, distortion of the shot image is removed, then corner points or other feature points of the image are found, and according to the position corresponding relation of the corner points or the feature points in the video source data and the shot image, a matching area of the video source data and the shot image is found.
In a specific embodiment, to prevent ambient light from affecting the image processing, ambient light subtraction may be performed after the shot image is undistorted. When the ambient light is weak or uniform, this step may be skipped. Alternatively, the difference between two different frames may be used to implement ambient light subtraction; if the difference between the two frames is smaller than a set threshold, the subtraction is skipped, for example when a still picture is displayed.
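The frame-difference variant of ambient light subtraction might look like the following sketch; the function name and the skip-if-static rule's threshold are illustrative assumptions.

```python
def ambient_subtract(frame_a, frame_b, min_total_diff):
    """Difference two shot frames so that static ambient light cancels
    out. If the frames are nearly identical (e.g. a still picture is
    being displayed), return None to signal the step should be
    skipped."""
    diff = [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
    total = sum(abs(v) for row in diff for v in row)
    if total < min_total_diff:
        return None  # static scene: skip ambient-light subtraction
    return diff
```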
Step 1312: and processing the pixels of each matching area respectively to obtain a plurality of focusing characteristic points.
After matching, in order to decide which local part of the image to use for comparison, one or more specific areas can be selected from the matching area. If none of the areas found contains a sharp image, special algorithms are needed, which raises the algorithmic requirements and lengthens the processing time; therefore, the sharpest area in the matching area, i.e., a high-spatial-frequency area containing a plurality of focusing feature points, is located instead.
Further, a gradient operator, a Laplacian operator, or another edge extraction operator can be used to process the matching region in the video source data and the matching region in the shot image, for example by extracting pixels whose values change sharply, to obtain a plurality of corresponding high-frequency pixel points. Positions where the pixel values change sharply correspond to high-frequency signal regions (i.e., high-spatial-frequency regions) in the image, such as edges; positions where the pixel values change little correspond to low-frequency signal regions, such as large areas of uniform color. Since high-frequency signal regions are generally the edges or contours of an image and can represent its contour information, they are selected for determining the focusing condition.
Then, the pixel variation of each matching area can be tallied and screened with a set threshold, and an area rich in sharply changing pixels is taken as a high-spatial-frequency area. That is, it is judged whether the value of each high-frequency pixel point is greater than a preset pixel value: if it is, the high-frequency pixel point is taken as a focusing feature point; if it is less than or equal to the preset pixel value, the high-frequency pixel point is discarded.
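Combining the Laplacian operator with the threshold screening above gives a sketch like the following. The 4-neighbour Laplacian kernel and the names are illustrative choices, not mandated by the patent.

```python
def focusing_feature_points(img, preset_value):
    """Pick out sharply changing pixels with a 4-neighbour Laplacian
    and keep only those whose absolute response exceeds
    `preset_value`."""
    h, w = len(img), len(img[0])
    kept = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            if abs(lap) > preset_value:  # keep as focusing feature point
                kept.append((y, x))      # below threshold: discarded
    return kept
```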
Step 1313: and respectively connecting adjacent focusing characteristic points in the plurality of focusing characteristic points to obtain corresponding focusing areas.
After the focusing feature points are obtained, adjacent focusing feature points among those corresponding to the video source data can be connected. Specifically, the distance between each focusing feature point and the others can be calculated, the two focusing feature points with the shortest distance are treated as adjacent, and adjacent focusing feature points are then connected with straight lines to obtain a closed area, i.e., the focusing area of the video source data. The focusing feature points corresponding to the shot image are processed in the same way to obtain the focusing area of the shot image, so that an approximate focusing area is found in both images.
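As a simplification of the "closed area" construction, the sketch below takes the axis-aligned bounding box of the feature points as the focusing area, rather than connecting adjacent points into a polygon as the patent describes; this is an assumption made purely to keep the example short.

```python
def focusing_region(points):
    """Simplified focusing area: the axis-aligned bounding box that
    encloses all focusing feature points. Returns
    (y_min, x_min, y_max, x_max), or None if there are no points."""
    if not points:
        return None
    ys = [y for y, _ in points]
    xs = [x for _, x in points]
    return (min(ys), min(xs), max(ys), max(xs))
```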
It will be appreciated that multiple disjoint high spatial frequency regions may be generated simultaneously in order to achieve more accurate focusing results.
Step 132: acquiring the definition of a focusing area in video source data, recording the definition as a first definition, acquiring the definition of the focusing area in a shot image, and recording the definition as a second definition; and determining the adjusting direction of the projection lens based on the first definition and the second definition.
After the focusing area is obtained, the first definition and the second definition can be calculated with an image definition evaluation function, and then their difference is computed. This difference measures the focusing condition and the defocus distance: the larger the difference between the first definition and the second definition, the larger the defocus distance. Specifically, a parameter indicating the high-frequency content of each high-spatial-frequency region may be calculated and taken as the definition, for example the high-frequency part of the spatial spectrum, the sum of squared gradient values, or the sum of absolute Laplacian values. In addition, to allow comparison between different images, the difference can be normalized by parameters such as area or total brightness; if there are several high-spatial-frequency regions, each can be normalized by its area and the results averaged to obtain the final value.
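One of the definition metrics named above, the sum of squared gradients normalized by area, could be sketched as follows; the region convention and function name are assumptions for illustration.

```python
def definition(img, region):
    """Sum of squared forward-difference gradients inside `region`,
    normalized by the region's area so different images are
    comparable. `region` is (y0, x0, y1, x1), half-open on the
    bottom and right edges."""
    y0, x0, y1, x1 = region
    total = 0
    for y in range(y0, min(y1, len(img) - 1)):
        for x in range(x0, min(x1, len(img[0]) - 1)):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            total += gx * gx + gy * gy
    area = (y1 - y0) * (x1 - x0)
    return total / area
```

A sharp edge concentrates its intensity change into one step and so scores higher than the same total change spread over a gradual ramp, which is exactly why this metric tracks defocus blur.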
In a specific embodiment, in order to improve the stability of the system and obtain a stable defocus distance, the final defocus distance may be filtered, for example with mean filtering, or with mean filtering after removing the maximum and minimum values.
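The trimmed variant of the mean filter mentioned above is a one-liner in spirit; the fallback for fewer than three samples is an assumption.

```python
def trimmed_mean(defocus_samples):
    """Mean filter that drops one maximum and one minimum sample
    before averaging, to stabilize the estimated defocus distance."""
    if len(defocus_samples) < 3:
        return sum(defocus_samples) / len(defocus_samples)
    ordered = sorted(defocus_samples)
    core = ordered[1:-1]  # drop the single largest and smallest values
    return sum(core) / len(core)
```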
Then, it can be judged whether the difference between the first definition and the second definition is greater than a preset defocus threshold. If it is, feedback adjustment is performed: the projection lens is moved a preset distance in either direction and the defocus condition is evaluated again (i.e., the flow returns to step 11). If the defocus distance increased after the last adjustment, the defocus direction is opposite to the previous adjustment direction; if it decreased, the defocus direction is the same as the previous adjustment direction. After the defocus direction is determined, a single fine adjustment or multiple feedback fine adjustments can be made until the difference between the first definition and the second definition is less than or equal to the preset defocus threshold. The preset distance is chosen small enough that the defocus it introduces stays below the preset defocus threshold, which prevents the lens from overshooting the optimal focusing position after moving the preset distance. If the difference between the first definition and the second definition is already less than or equal to the preset defocus threshold, focusing is deemed successful and no adjustment is needed.
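The feedback adjustment reads as a hill-climbing loop, which might be sketched as below. The callback interface (`measure_defocus`, `move_lens`) and the step cap are assumptions; the reversal rule is the one described in the text.

```python
def feedback_focus(measure_defocus, move_lens, preset_distance,
                   defocus_threshold, max_steps=100):
    """Hill-climbing feedback loop: probe a step; if the defocus
    measure grew, the defocus direction was opposite to the move, so
    reverse; repeat until the measure falls to the threshold.
    `preset_distance` must be small enough that one step cannot
    overshoot the best focus by more than the threshold allows."""
    direction = 1
    prev = measure_defocus()
    steps = 0
    while prev > defocus_threshold and steps < max_steps:
        move_lens(direction * preset_distance)
        cur = measure_defocus()
        if cur > prev:            # defocus increased: wrong direction
            direction = -direction
        prev = cur
        steps += 1
    return prev
```

Because the loop only ever compares consecutive measurements, it needs no absolute position sensing, matching the patent's claim that no motion sensor is required.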
In this embodiment, the matching area between the video source data and the shot image is calculated from the obtained feature points, and the pixels of the matching area are processed to obtain the corresponding focusing feature points. From the distribution of the two groups of focusing feature points, an approximate focusing area is obtained by connecting adjacent focusing feature points. The definition of each focusing area is then calculated with an image definition evaluation function, and whether the current focus is accurate is judged from the difference between the two definitions. If the focus is not accurate, feedback adjustment is started, and how to complete focusing is determined from the extracted focusing areas, thereby realizing automatic focusing.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a real-time focusing apparatus provided in the present application. The real-time focusing apparatus 50 includes a memory 51 and a processor 52 connected to each other; the memory 51 is used for storing a computer program which, when executed by the processor 52, implements the real-time focusing method described above.
This embodiment provides a reliable real-time focusing device 50. When the projection device goes out of focus, the user no longer needs to judge and correct the focus manually: the device automatically detects the defocus condition and focuses in real time while any video image is being displayed, so the user experience is not affected, and defocus caused by thermal drift, vibration, and similar factors can be corrected in real time.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a projection system provided in the present application, the projection system including: the real-time focusing device 61, the projection device 62 and the image capturing device 63, the projection device 62 includes a projection lens 621 and a projection screen 622, and the image capturing device 63 may be a camera.
The internal and external parameters of the projection lens 621 and the camera 63 may be calibrated before the real-time focusing process starts, and the calibration parameters are stored in the real-time focusing device 61, where the calibration parameters include: distortion parameters of the projection device 62, distortion parameters of the imaging device 63, relative positions and relative directions of the projection device 62 and the imaging device 63, and the like.
After calibration, the correspondence between projection pixels and camera pixels can be roughly established. However, since the distance to the projection screen 622 is not yet determined, the correspondence between the projection area and the shooting area, i.e., the coordinates of the four corner points of the projection area on the camera sensor, still cannot be determined from the internal and external parameters alone. How to determine this correspondence is described below.
For fixed installations or fixed usage scenes, after installation the pictures of a black field and a white field are subtracted to obtain the projection area, and the coordinates of the four corner points are obtained using conventional corner detection or line detection algorithms (such as the Hough transform) together with a clustering algorithm (such as K-means). After the coordinates of the four corner points are acquired for the projection lens 621 and the camera, the correspondence between projection pixel coordinates and camera pixel coordinates can be obtained by a four-point transform after undistortion. Alternatively, pairs of complementary black-and-white grid pictures with a specific number of horizontal and vertical cells can be subtracted, and corner detection, line detection, or clustering algorithms then yield a more accurate alignment.
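The black-field/white-field subtraction step alone might be sketched as below; it returns only a bounding box, whereas the patent goes on to refine the four corner points with corner detection, Hough transform, or K-means. All names and the threshold are illustrative assumptions.

```python
def projection_area(white_shot, black_shot, thresh):
    """Subtract a black-field shot from a white-field shot; pixels that
    brighten by more than `thresh` belong to the projection area.
    Returns its bounding box (y_min, x_min, y_max, x_max), or None if
    nothing lit up."""
    ys, xs = [], []
    for y, (row_w, row_b) in enumerate(zip(white_shot, black_shot)):
        for x, (w, b) in enumerate(zip(row_w, row_b)):
            if w - b > thresh:
                ys.append(y)
                xs.append(x)
    if not ys:
        return None
    return (min(ys), min(xs), max(ys), max(xs))
```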
For more complicated usage scenes, for example when the relative position of the projection lens 621 and the projection screen 622 may change at any time, non-inductive focusing can still be achieved as follows: two adjacent frames of video source data are down-sampled and differenced, the difference of each region is tallied, and the regions where the two frames differ most are taken as feature regions; the actual display frames corresponding to the two frames of video source data are differenced in the same way, and the position of each feature region within the actual display frames is detected (for example with a sliding-window cross-correlation function), so that the correspondence between projection pixels and camera pixels can be determined.
After the corresponding relationship between the projection pixel and the camera pixel is determined, the distance between the projection lens 621 and the projection screen 622 is determined, and the parameter of the focusing position can be roughly determined according to the design parameters of the projection lens 621; however, due to thermal defocus and the like, the precise focusing parameters cannot be determined, so the real-time focusing device 61 provided by the present embodiment can be used for adjustment to achieve focusing.
The real-time focusing device 61 is used for acquiring video source data; the projection device 62 is connected to the real-time focusing device 61, and is configured to receive the video source data sent by the real-time focusing device 61, perform projection display, and form a projection display image; specifically, the focusing position of the projection lens 621 is controlled by the real-time focusing device 61, and the projection lens 621 can project the video source data sent by the real-time focusing device 61 in real time to form a shot image on the projection screen 622.
The image pickup device 63 is connected to the real-time focusing device 61, and is configured to capture a projection display image displayed by the projection device 62 to obtain a captured image corresponding to video source data; specifically, the image capturing device 63 may capture the projection display image generated by the projection device 62 after receiving the synchronous capture signal sent by the real-time focusing device 61, obtain a captured image, and send the captured image back to the real-time focusing device 61.
The real-time focusing device 61 is further configured to process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; it then generates a control command based on the two sets of feature information and sends the command to a motor (not shown in the figure), so that the motor drives the projection lens 621 to move, thereby achieving focusing. Either the position of the projection lens 621 or its parameters may be adjusted; this embodiment is described by taking the adjustment of the position of the projection lens 621 as an example.
In a specific embodiment, as shown in fig. 6, the projection system further includes a video source 64 configured to send video source data to the synchronization module 6111. The real-time focusing apparatus 61 comprises a processor 611 and a memory 612 connected to each other; the processor 611 is used for receiving video source data, and the memory 612 is configured to receive the video source data sent by the processor 611 and store it. The processor 611 may further obtain the video source data from the memory 612 and receive the captured image sent by the image capturing device 63.
The synchronization module 6111 is connected to the memory 612 and the video source 64, and is configured to receive video source data and a captured image sent by the image capturing device 63, and store the video source data in the memory 612.
The feature extraction module 6112 is connected to the synchronization module 6111, and is configured to process the received video source data and the captured image to obtain a plurality of feature points in the video source data and a plurality of feature points in the captured image.
The focusing decision module 6113 is connected to the feature extraction module 6112, and is configured to determine an adjustment direction of the projection lens 621 based on a plurality of feature points of the video source data and a plurality of feature points of the captured image, generate a control instruction corresponding to the adjustment direction, and send the control instruction to the motor.
Further, the synchronization module 6111 may buffer the image to be projected while controlling the projection device 62 to display the video source data. After receiving the picture taken by the image capturing device 63, it takes the corresponding video source data out of the memory 612 and sends the captured image and the video source data to the feature extraction module 6112. The feature extraction module 6112 may convert the input images into feature information strongly related to focusing by using a specific image processing algorithm or a deep learning algorithm, and send the feature information to the focusing decision module 6113, which then analyzes the focusing condition and controls the projection device 62 to perform focusing adjustment.
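As an illustrative sketch only (the class name, identifiers, and eviction policy below are hypothetical and not part of the disclosure), the buffering-and-pairing role of the synchronization module might be modeled as:

```python
# Hypothetical sketch of the synchronization step: buffer each source frame
# by an identifier when it is displayed, then pair a returned camera picture
# with the exact source frame that was on screen when it was taken.
from collections import OrderedDict

class SyncBuffer:
    """Keeps recently displayed source frames so a captured picture can be
    matched with its corresponding source frame."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.frames = OrderedDict()   # frame_id -> source frame data

    def on_display(self, frame_id, frame):
        self.frames[frame_id] = frame
        if len(self.frames) > self.capacity:
            self.frames.popitem(last=False)   # drop the oldest frame

    def on_capture(self, frame_id, shot):
        """Return a (source_frame, shot) pair for the feature extractor."""
        src = self.frames.pop(frame_id, None)
        if src is None:
            raise KeyError(f"frame {frame_id} no longer buffered")
        return src, shot
```

The capacity bound reflects the practical point that only frames recent enough to still be in flight between projector and camera need to be retained.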
The focusing decision module 6113 may analyze how to complete focusing according to the extracted focusing area. Specifically, the focusing areas of the video source data and of the captured image are calculated from the obtained feature points; since the two sets of feature points were matched in the feature extraction module 6112, adjacent feature points can be connected according to the distribution of the two sets, yielding an approximate focusing area in each of the two images. An image sharpness evaluation function is then used to calculate the sharpness of each focusing area, and whether the current focusing is accurate is judged from the difference between the two sharpness values. If not, feedback adjustment is started: the projection lens 621 is controlled to move a preset distance in either direction, and the defocus condition after the movement is evaluated. If the defocus distance increased after the last adjustment, the defocus direction is opposite to the previous adjustment direction; otherwise, it is the same as the previous adjustment direction. Once the defocus direction is determined, adjustment continues until the defocus distance is within the acceptable range.
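The feedback adjustment described above (evaluate sharpness, nudge the lens a preset distance, reverse direction when defocus grows) can be sketched as follows. The sharpness metric shown, variance of a Laplacian response, is one common choice of image sharpness evaluation function; all names and the callback interface are hypothetical.

```python
# Hypothetical sketch of the feedback focusing loop: compare the sharpness of
# the focus region in the source frame and in the captured frame, step the
# lens, and reverse direction whenever the defocus measure increases.
import numpy as np

def sharpness(region):
    """Image-sharpness evaluation function: variance of a Laplacian response
    (computed here with wrap-around neighbors for brevity)."""
    lap = (np.roll(region, 1, 0) + np.roll(region, -1, 0) +
           np.roll(region, 1, 1) + np.roll(region, -1, 1) - 4 * region)
    return float(lap.var())

def focus_loop(capture_region, move_lens, threshold, step, max_iters=50):
    """capture_region() -> (source_region, shot_region);
    move_lens(delta) nudges the lens. Returns the iteration count at which
    the defocus measure fell within the acceptable range."""
    direction = +1
    prev_defocus = None
    for i in range(max_iters):
        src, shot = capture_region()
        defocus = abs(sharpness(src) - sharpness(shot))
        if defocus <= threshold:
            return i                      # focused: within acceptable range
        if prev_defocus is not None and defocus > prev_defocus:
            direction = -direction        # defocus grew: reverse direction
        prev_defocus = defocus
        move_lens(direction * step)
    return max_iters
```

This is a plain hill-climbing search; the patent's claim 3 describes the same pattern of moving a preset distance and re-evaluating until the sharpness difference falls below a preset defocus threshold.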
This embodiment provides a projection system capable of focusing in real time, which includes a synchronization module 6111, a feature extraction module 6112 and a focusing decision module 6113. The synchronization module 6111 is responsible for caching the video source data sent by the video source 64, controlling the image capturing device 63 to shoot an image at the correct time to generate a captured image, and transmitting the video source data and its corresponding captured image to the feature extraction module 6112. The feature extraction module 6112 extracts and matches feature information independent of ambient light from the two synchronized images. The focusing decision module 6113 calculates a focusing area from the input feature information and computes the sharpness of the focusing area in the video source data and in the captured image, so as to determine the adjustment direction of the projection lens 621 and output a control instruction to the motor; under this instruction, the motor drives the projection lens 621 to move. Real-time, imperceptible focusing is thereby achieved, addressing thermal defocus and other defocus conditions of the projection device 62 during playback and providing a better viewing experience for users.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium 80 provided in the present application. The computer-readable storage medium 80 is used for storing a computer program 81, and the computer program 81, when executed by a processor, implements the real-time focusing method of the foregoing embodiments.
The computer-readable storage medium 80 may be a server, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical division, and an actual implementation may use a different division; for example, a plurality of units or components may be combined or integrated into another system, and some features may be omitted or not executed.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above embodiments are merely examples and are not intended to limit the scope of the present application. All equivalent structural or flow modifications made using the contents of the specification and drawings of the present application, or applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present application.

Claims (12)

1. A real-time focusing method, comprising:
acquiring video source data and a shot image;
processing the video source data and the shot image to obtain characteristic information of the video source data and characteristic information of the shot image;
and generating a control instruction based on the characteristic information of the video source data and the characteristic information of the shot image, and sending the control instruction to a motor so that the motor drives the projection lens to move to realize focusing.
2. The real-time focusing method according to claim 1, wherein the step of generating a control command based on the characteristic information of the video source data and the characteristic information of the shot image, and sending the control command to a motor to enable the motor to drive a projection lens to move comprises:
determining a corresponding focusing area based on the characteristic information of the video source data and the characteristic information of the shot image;
acquiring the definition of a focusing area in the video source data, and recording the definition as a first definition;
acquiring the definition of a focusing area in the shot image, and recording the definition as a second definition;
and determining the adjustment direction of the projection lens based on the first definition and the second definition.
3. The real-time focusing method according to claim 2, further comprising:
calculating the first definition and the second definition by using an image definition evaluation function;
calculating a difference between the first sharpness and the second sharpness;
judging whether the difference value of the first definition and the second definition is greater than a preset defocusing threshold value or not;
if yes, controlling the projection lens to move a preset distance in any direction, and returning to the step of acquiring video source data and shooting images until the difference value between the first definition and the second definition is smaller than or equal to the preset defocusing threshold;
if not, determining that the focusing is successful;
wherein the preset distance is smaller than the preset defocus threshold.
4. The real-time focusing method of claim 2, wherein the feature information comprises a plurality of feature points, and the step of determining the corresponding focusing area based on the feature information of the video source data and the feature information of the captured image comprises:
matching a plurality of feature points in the video source data with a plurality of feature points in the shot image to determine corresponding matching areas;
processing the pixels of each matching area respectively to obtain a plurality of focusing characteristic points;
and respectively connecting adjacent focusing characteristic points in the plurality of focusing characteristic points to obtain corresponding focusing areas.
5. The real-time focusing method of claim 4, wherein the step of processing the pixels of each matching area to obtain a plurality of focusing feature points comprises:
respectively processing the matching region in the video source data and the matching region in the shot image by adopting a gradient operator or a Laplacian operator to obtain a plurality of high-frequency pixel points;
judging whether the pixel value of each high-frequency pixel point is larger than a preset pixel value or not;
and if so, taking the high-frequency pixel points as the focusing feature points.
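Purely as an illustration of the selection step recited in claim 5 (not part of the claims; the function name, the padding choice, and the threshold value are hypothetical), keeping pixels whose Laplacian response exceeds a preset value might look like:

```python
# Hypothetical sketch of claim 5: apply a Laplacian operator to a matching
# region and keep the high-frequency pixels whose response exceeds a preset
# value as focusing feature points.
import numpy as np

def focusing_feature_points(region, preset_value):
    """Return (row, col) pairs of pixels whose Laplacian magnitude
    exceeds `preset_value`."""
    padded = np.pad(region, 1, mode="edge")        # replicate border pixels
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * region)
    rows, cols = np.nonzero(np.abs(lap) > preset_value)
    return list(zip(rows.tolist(), cols.tolist()))
```

A gradient operator (e.g. Sobel) could be substituted for the Laplacian, as the claim permits either.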
6. The real-time focusing method as claimed in claim 1, wherein the step of acquiring video source data and captured images comprises:
sending the video source data to a projection device for projection display;
and sending a synchronous shooting signal to a camera device so as to control the camera device to shoot the projection display image on the projection screen, thereby obtaining a shot image corresponding to the video source data.
7. A real-time focusing apparatus comprising a memory and a processor connected to each other, wherein the memory is used for storing a computer program, and the computer program is used for implementing the real-time focusing method of any one of claims 1 to 6 when being executed by the processor.
8. A projection system, comprising:
the real-time focusing device is used for receiving video source data;
the projection device is connected with the real-time focusing device and used for receiving the video source data sent by the real-time focusing device and performing projection display, wherein the projection device comprises a projection lens and a motor which are connected with each other;
the camera shooting device is connected with the real-time focusing device and is used for shooting the projection display image displayed by the projection device to obtain the shot image corresponding to the video source data;
the real-time focusing device is further used for processing the video source data and the shot image to obtain characteristic information of the video source data and characteristic information of the shot image; and generating a control instruction based on the characteristic information of the video source data and the characteristic information of the shot image, and sending the control instruction to the motor so that the motor drives the projection lens to move to realize focusing.
9. The projection system of claim 8, wherein the real-time focusing device comprises:
a processor for receiving the video source data;
the memory is connected with the processor and used for receiving the video source data sent by the processor and storing the video source data;
the processor is further used for acquiring the video source data from the memory and receiving the shot image sent by the camera device.
10. The projection system of claim 9, wherein the feature information comprises a plurality of feature points, the projection device further comprises a projection screen, and the processor comprises:
the synchronization module is connected with the memory and used for receiving the video source data and the shot image and storing the video source data into the memory;
the characteristic extraction module is connected with the synchronization module and used for processing the received video source data and the shot image to obtain a plurality of characteristic points in the video source data and a plurality of characteristic points in the shot image;
and the focusing decision module is connected with the feature extraction module and used for determining the adjustment direction of the projection lens based on the plurality of feature points of the video source data and the plurality of feature points of the shot image, generating a control instruction corresponding to the adjustment direction and sending the control instruction to the motor.
11. The projection system of claim 10,
the projection system further comprises a video source, wherein the video source is connected with the synchronization module and used for sending projection data to the synchronization module, and the projection data comprises at least one frame of video source data.
12. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, is adapted to implement the real-time focusing method of any one of claims 1-6.
CN202010980932.2A 2020-09-17 2020-09-17 Real-time focusing method, device, system and computer readable storage medium Pending CN114286064A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010980932.2A CN114286064A (en) 2020-09-17 2020-09-17 Real-time focusing method, device, system and computer readable storage medium
PCT/CN2021/116773 WO2022057670A1 (en) 2020-09-17 2021-09-06 Real-time focusing method, apparatus and system, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010980932.2A CN114286064A (en) 2020-09-17 2020-09-17 Real-time focusing method, device, system and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114286064A true CN114286064A (en) 2022-04-05

Family

ID=80776392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010980932.2A Pending CN114286064A (en) 2020-09-17 2020-09-17 Real-time focusing method, device, system and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN114286064A (en)
WO (1) WO2022057670A1 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5115032B2 (en) * 2007-05-29 2013-01-09 コニカミノルタアドバンストレイヤー株式会社 Image projection apparatus and image projection method
CN101840055B (en) * 2010-05-28 2012-01-18 浙江工业大学 Video auto-focusing system based on embedded media processor
CN107942601A (en) * 2017-12-25 2018-04-20 天津天地伟业电子工业制造有限公司 A kind of stepper motor lens focus method based on temperature-compensating
CN111050150B (en) * 2019-12-24 2021-12-31 成都极米科技股份有限公司 Focal length adjusting method and device, projection equipment and storage medium
CN113132620B (en) * 2019-12-31 2022-10-11 华为技术有限公司 Image shooting method and related device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114666558A (en) * 2022-04-13 2022-06-24 深圳市火乐科技发展有限公司 Method and device for detecting definition of projection picture, storage medium and projection equipment
CN114760415A (en) * 2022-04-18 2022-07-15 上海千映智能科技有限公司 Lens focusing method, system, device and medium
CN114760415B (en) * 2022-04-18 2024-02-02 上海千映智能科技有限公司 Lens focusing method, system, equipment and medium
CN116095477A (en) * 2022-08-16 2023-05-09 荣耀终端有限公司 Focusing processing system, method, equipment and storage medium
CN116095477B (en) * 2022-08-16 2023-10-20 荣耀终端有限公司 Focusing processing system, method, equipment and storage medium
CN115361541A (en) * 2022-10-20 2022-11-18 潍坊歌尔电子有限公司 Method and device for recording projection content of projector, projector and storage medium
CN115361541B (en) * 2022-10-20 2023-01-24 潍坊歌尔电子有限公司 Method and device for recording projection content of projector, projector and storage medium
CN117319618A (en) * 2023-11-28 2023-12-29 维亮(深圳)科技有限公司 Projector thermal focus out judging method and system for definition evaluation
CN117319618B (en) * 2023-11-28 2024-03-19 维亮(深圳)科技有限公司 Projector thermal focus out judging method and system for definition evaluation

Also Published As

Publication number Publication date
WO2022057670A1 (en) 2022-03-24

Similar Documents

Publication Publication Date Title
US10997696B2 (en) Image processing method, apparatus and device
CN114286064A (en) Real-time focusing method, device, system and computer readable storage medium
US9307134B2 (en) Automatic setting of zoom, aperture and shutter speed based on scene depth map
JP5870264B2 (en) Imaging apparatus, imaging method, program, and integrated circuit
US8629915B2 (en) Digital photographing apparatus, method of controlling the same, and computer readable storage medium
US8274552B2 (en) Primary and auxiliary image capture devices for image processing and related methods
CN101309367B (en) Imaging apparatus
US20180070015A1 (en) Still image stabilization/optical image stabilization synchronization in multi-camera image capture
KR101442610B1 (en) Digital photographing apparatus, method for controlling the same, and recording medium storing program to implement the method
US20110085049A1 (en) Method and apparatus for image stabilization
CN108156369B (en) Image processing method and device
US20120307009A1 (en) Method and apparatus for generating image with shallow depth of field
TWI394085B (en) Method of identifying the dimension of a shot subject
KR20100013171A (en) Method and apparatus for compensating a motion of the autofocus region, and autofocus method and apparatus using thereof
CN110245549A (en) Real-time face and object manipulation
US8922677B2 (en) Image processing apparatus and imaging apparatus for combining image frames
JP6016622B2 (en) Imaging apparatus, control method therefor, and program
KR20100046544A (en) Image distortion compensation method and apparatus
CN116347056A (en) Image focusing method, device, computer equipment and storage medium
CN107911609B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
Bisagno et al. Virtual camera modeling for multi-view simulation of surveillance scenes
KR102061087B1 (en) Method, apparatus and program stored in storage medium for focusing for video projector
CN115720293B (en) Camera focusing control method and system
US20230276127A1 (en) Image processing apparatus and image processing method
WO2023189079A1 (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination