WO2022057670A1 - Real-time focusing method, apparatus, system, and computer-readable storage medium - Google Patents

Real-time focusing method, apparatus, system, and computer-readable storage medium

Info

Publication number
WO2022057670A1
WO2022057670A1 · PCT/CN2021/116773
Authority
WO
WIPO (PCT)
Prior art keywords
video source
source data
captured image
real
projection
Prior art date
Application number
PCT/CN2021/116773
Other languages
English (en)
French (fr)
Inventor
弓殷强
邓岳慈
余新
李屹
Original Assignee
深圳光峰科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳光峰科技股份有限公司
Publication of WO2022057670A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Definitions

  • Autofocus is a very important projector function; it allows a projector to adapt to more usage scenarios.
  • the specific needs can be refined into the following two aspects: resetting the focal plane after first installation or after the projector is moved; and correcting thermal defocus: when the projector stays on for a long time, heat generation causes temperature changes, under which the lens and the components of the optical path defocus. In addition, under particular working conditions vibration may cause misalignment and hence blur. Such real-time blurring cannot be corrected by current autofocus technology.
  • the traditional focusing scheme needs to pause the display and project a specific reference picture to focus, so imperceptible focusing cannot be achieved. A traditional scheme can also adjust the focus motor by means of a motion sensor; however, because the error and noise of motion sensors are large, and the movement of the projector and the relative relationship between the projector and the screen are complicated, good results cannot be obtained, and the blur caused by thermal defocus cannot be corrected.
  • the present application provides a real-time focusing method, device, system, and computer-readable storage medium, which can realize automatic real-time focusing.
  • the technical solution adopted in the present application is to provide a real-time focusing method. The method includes: acquiring video source data and a captured image; processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; and, based on the feature information of the video source data and the feature information of the captured image, generating a control command and sending the control command to a motor, so that the motor drives the projection lens to move and achieves focusing.
  • the real-time focusing device includes a memory and a processor connected to each other, wherein the memory is used to store a computer program that, when executed by the processor, implements the real-time focusing method described above.
  • the projection system includes: a real-time focusing device, a projection device, and a camera device. The real-time focusing device is used to receive video source data; the projection device is connected to the real-time focusing device to receive the video source data sent by the real-time focusing device and perform projection display, wherein the projection device includes a projection lens and a motor connected to each other; the camera device is connected to the real-time focusing device and photographs the projection display image to obtain a captured image corresponding to the video source data. The real-time focusing device is also used to process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image, and, based on the feature information of the video source data and the feature information of the captured image, to generate a control command and send it to the motor, so that the motor drives the projection lens to move and achieves focusing.
  • another technical solution adopted in this application is to provide a computer-readable storage medium that stores a computer program; when the computer program is executed by a processor, it implements the real-time focusing method described above.
  • the beneficial effects of the present application are as follows: the received video source data is stored, and the projection device is controlled to display it so that a captured image corresponding to the video source data is obtained; the video source data and the captured image are then processed to obtain the corresponding feature information; using the feature information of the video source data and the feature information of the captured image, the adjustment direction of the projection lens can be determined; corresponding control commands are then generated according to the adjustment direction to control the motor to drive the projection lens. Focusing can thus be performed while the picture is playing, without manual adjustment, so real-time and imperceptible focusing can be achieved and thermal defocus can be corrected.
  • FIG. 1 is a schematic flowchart of an embodiment of the real-time focusing method provided by the present application;
  • FIG. 2 is a schematic flowchart of another embodiment of the real-time focusing method provided by the present application;
  • FIG. 3 is a schematic flowchart of step 131 in the embodiment shown in FIG. 2;
  • FIG. 4 is a schematic diagram of a captured image in the embodiment shown in FIG. 2;
  • FIG. 5 is a schematic structural diagram of an embodiment of the real-time focusing device provided by the present application;
  • FIG. 6 is a schematic structural diagram of an embodiment of the projection system provided by the present application;
  • FIG. 7 is a schematic diagram of the connection between the video source and the real-time focusing device in the embodiment shown in FIG. 6;
  • FIG. 8 is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present application.
  • the application provides a real-time focusing method, the method includes:
  • acquiring video source data and a captured image;
  • processing the video source data and the captured image to obtain the feature information of the video source data and the feature information of the captured image;
  • based on the feature information of the video source data and the feature information of the captured image, generating a control command and sending the control command to the motor, so that the motor drives the projection lens to move and achieves focusing.
  • FIG. 1 is a schematic flowchart of an embodiment of a real-time focusing method provided by the present application. The method includes:
  • Step 11: Acquire video source data and a captured image.
  • the video source data may be the pixel values of an image, and different video source data may be received in different time periods. The video source data may be acquired online in real time, received from another device, or read from a storage device. To facilitate the subsequent comparison with the captured image, the video source data can be stored in the real-time focusing device. Understandably, the amount of video source data stored in the real-time focusing device is relatively small and can be cleaned up regularly; the video source signal within a preset time around the current video source data can be acquired in real time, for example, the video source data within 5 seconds can be acquired and cached.
  • the video source data sent by the video source can be received and then forwarded to the projection device, and the projection device controlled to display the video source data to generate a projection display image. After the projection device starts to display, a synchronous shooting signal can be sent to the camera device to control it to photograph the projection display image on the projection screen, so as to obtain the captured image corresponding to the video source data; the captured image is at least partially identical to the video source data. In addition, the real-time focusing device takes out the buffered video source data of the same frame, so that the video source data can be compared with the captured image.
  • in other embodiments, all the received video source data may be buffered, and a set of video source data retrieved from the stored video source data for projection display.
  • the acquisition of video source data may be performed every frame, or may be performed every several frames or several seconds.
  • Step 12: Process the video source data and the captured image to obtain the feature information of the video source data and the feature information of the captured image.
  • whether focusing is needed can be judged from the degree of blur of the captured image; however, blur alone cannot tell whether the display blur is caused by defocus, so the captured image is compared with the video source data. Specifically, only a part of the captured image may be compared with a part of the video source data (a local comparison), or the entire captured image may be compared with all of the video source data (a global comparison). If the video source data is sharp and the captured image is sharp, no focus adjustment is required; if the video source data is sharp but the captured image is blurred, refocusing is required. Further, after the captured image is acquired, image processing methods can be used to analyze the video source data and the captured image and extract the feature information of each.
  • blur and sharpness are relative terms. If the difference in sharpness between the video source data and the captured image is within a certain range, the two can be judged equally sharp (or equally blurred); otherwise the captured image can be judged blurred.
  • in a specific embodiment, the feature information includes feature points. A specific image processing algorithm or a deep learning algorithm can be used to process the video source data and the captured image to obtain multiple feature points in the video source data and multiple feature points in the captured image respectively; for example, a feature extraction algorithm can be used to extract the feature points of the images.
  • Step 13: Based on the feature information of the video source data and the feature information of the captured image, generate a control command and send the control command to the motor, so that the motor drives the projection lens to move and achieves focusing.
  • the projection device includes a projection lens and a motor connected to each other. After the feature information is generated, it can be used to generate a control command; the control command is then sent to the motor, so that the motor drives the projection lens to move, adjusting the focal length and the focusing position and thereby achieving focus.
  • the adjustment of the position of the projection lens is taken as an example for description, which specifically includes the following steps:
  • Step 131: Determine the corresponding focus area based on the feature information of the video source data and the feature information of the captured image.
  • the comparison is between the captured image and the source video data. However, because of the position or viewing angle of the camera device, the picture photographed when the camera shoots the picture displayed by the projection device (i.e., the captured image) may not be exactly the same as the actual projected picture (i.e., the projection display image); the captured image then cannot represent the actual projection. To make the comparison convenient, the video source data can first be matched with the captured image. Further, a local part of the captured image can be compared with the video source data to find the video source data corresponding to at least part of the captured image, establishing the correspondence between the video source data and the captured image. For example, as shown in FIG. 4, the captured image is denoted A and a local region within it is denoted B; the pixel values in local region B are matched against the video source data, and the pixel values in the video source data with the smallest difference from those in local region B are found. The area composed of these pixel values is the area that matches local region B, i.e., the matching area.
  • before matching, the distortion of the captured image can be removed; the corners or other feature points of the image are then found, and the positional correspondence of these corners or feature points between the video source data and the captured image is used to find the matching area between the video source data and the captured image.
  • in a specific embodiment, to prevent ambient light from affecting the image processing, ambient light subtraction may be performed after the captured image is de-distorted; when the ambient light is weak or uniform, the subtraction may be omitted. Alternatively, the difference between two different frames can be used to achieve ambient light subtraction; if the difference between the two frames is below a set threshold, ambient light subtraction is not required, for example, it is not performed while a still picture is displayed.
  • Step 1312: Process the pixels of each matching area separately to obtain multiple focus feature points.
  • after matching, in order to further decide which local part of the image to use for comparison, one or more specific regions can be selected from the matching area. If the region found contains no reasonably sharp content, special algorithms are needed for further processing, which places higher demands on the algorithm and takes longer; it is therefore possible to look for the sharpest region within the matching area, i.e., the high-spatial-frequency region, which includes multiple focus feature points.
  • a gradient operator, a Laplacian operator, or another edge-extraction operator can be used to process the matching area in the video source data and the matching area in the captured image, for example, to extract the pixels in the matching area that change drastically and obtain the corresponding high-frequency pixels. Specifically, positions where pixel values change sharply correspond to the high-frequency signal regions of the image (i.e., the high-spatial-frequency regions), such as edges; positions where pixel values change little correspond to low-frequency signal regions, such as large uniform color blocks. Because the high-frequency signal region is generally the edge or contour of the image and better represents its contour information, the high-frequency signal region is selected to judge the focus condition.
  • the pixel variation of each matching area can then be measured and filtered with a set threshold, and the regions rich in sharply varying pixels taken as high-spatial-frequency regions; that is, it is judged whether the pixel value of each high-frequency pixel is greater than a preset pixel value. If the pixel value of a high-frequency pixel is greater than the preset pixel value, the high-frequency pixel is used as a focus feature point; if it is less than or equal to the preset pixel value, the high-frequency pixel is discarded.
  • Step 1313: Connect the adjacent focus feature points among the multiple focus feature points to obtain the corresponding focus areas.
  • after the focus feature points are obtained, the adjacent points among the multiple focus feature points corresponding to the video source data can be connected. Specifically, the distance between each focus feature point and the other focus feature points can be calculated, the two focus feature points with the shortest distance taken as adjacent focus feature points, and the adjacent focus feature points connected with straight lines to obtain a closed area, i.e., the focus area of the video source data. The multiple focus feature points corresponding to the captured image are processed in the same way to obtain the focus area of the captured image, so that approximate focus areas are found in the two images.
  • a plurality of disjoint high spatial frequency regions can be generated simultaneously in order to obtain a more accurate focusing result.
  • Step 132: Acquire the sharpness of the focus area in the video source data, denoted the first sharpness, and the sharpness of the focus area in the captured image, denoted the second sharpness; based on the first sharpness and the second sharpness, determine the adjustment direction of the projection lens.
  • the image sharpness evaluation function can be used to calculate the first sharpness and the second sharpness, and then the difference between them. The sharpness difference can measure the focus condition and the defocus distance: the larger the difference, the larger the defocus distance. Specifically, a parameter indicating high-frequency information can be computed for each high-spatial-frequency region and used as the sharpness, for example, the high-frequency part of the spatial spectrum, the sum of squared gradient values, or the sum of absolute values of the Laplacian. In addition, to ensure comparability between different images, the difference between the two can be normalized with respect to parameters such as area or total brightness; if there are multiple high-spatial-frequency regions, each can be normalized by its own area and the results averaged to obtain the final value.
  • to improve the stability of the system and obtain a stable defocus distance, the final defocus distance can be filtered, for example with mean filtering, or with mean filtering that discards the maximum and the minimum values.
  • if the defocus distance increases after the previous adjustment, the defocus direction is opposite to the previous adjustment direction; if the defocus distance decreases after the previous adjustment, the defocus direction is the same as the previous adjustment direction. After the defocus direction has been judged, a single fine adjustment or several rounds of feedback fine adjustment can be made, until the difference between the first sharpness and the second sharpness is less than or equal to the preset defocus threshold. The preset distance is smaller than the preset defocus threshold; that is, the threshold on the degree of defocus is larger than the defocus caused by moving the preset distance, which prevents an adjustment from overshooting the optimal focus position. If the difference between the first sharpness and the second sharpness is less than or equal to the preset defocus threshold, focusing is judged successful and no adjustment is needed.
  • in this embodiment, the matching area between the video source data and the captured image can be calculated from the obtained feature points, and the corresponding focus feature points obtained by processing the pixels of the matching area. Based on the distribution of the two groups of focus feature points, approximate focus areas can be obtained by connecting adjacent focus feature points. With the focus areas, the image sharpness evaluation function can be used to calculate the sharpness of each focus area; if focusing is inaccurate, feedback adjustment begins, and how to complete focusing can be analyzed from the extracted focus areas, realizing automatic focus.
  • FIG. 5 is a schematic structural diagram of an embodiment of a real-time focusing device provided by the present application.
  • the real-time focusing device 50 includes a memory 51 and a processor 52 connected to each other; the memory 51 is used to store a computer program which, when executed by the processor 52, implements the real-time focusing method described above.
  • This embodiment provides a reliable real-time focusing device 50. When the projection device goes out of focus, the user does not need to judge and control focusing manually; the device determines the defocus automatically and can autofocus in real time while any video image is displayed, so the user experience is not affected, and defocus caused by thermal drift, vibration, and similar causes can be corrected in real time.
  • FIG. 6 is a schematic structural diagram of an embodiment of a projection system provided by the present application.
  • the projection system includes: a real-time focusing device 61, a projection device 62, and a camera device 63.
  • the projection device 62 includes a projection lens 621 and a projection screen 622.
  • the imaging device 63 may be a camera.
  • before the real-time focusing flow starts, the intrinsic and extrinsic parameters of the projection lens 621 and the camera device 63 can be calibrated and the calibration parameters saved in the real-time focusing device 61. The calibration parameters include the distortion parameters of the projection device 62, the distortion parameters of the camera device 63, and the relative position and relative orientation of the projection device 62 and the camera device 63, among other information.
  • after calibration, the correspondence between the projection pixels and the camera pixels can be roughly confirmed; however, because the distance to the projection screen 622 is not determined, the correspondence between the projection region and the photographed region, that is, the coordinates of the four corner points of the projection region on the camera sensor, still cannot be determined even after the intrinsic and extrinsic parameters are calibrated. How to determine this correspondence is described below.
  • for fixed installations or relatively fixed usage scenarios, the projection region can be obtained after installation by subtracting a black-field picture from a white-field picture, and the coordinates of the four corner points obtained with a conventional corner-detection or line-detection algorithm (such as the Hough transform) combined with a clustering algorithm (such as K-means); with the intrinsic and extrinsic parameters and the four corner coordinates, the correspondence between projection pixel coordinates and camera pixel coordinates can be obtained with a four-point transform after de-distortion.
  • for more complex usage scenarios, for example when the relative position of the projection lens 621 and the projection screen 622 may change at any time, two adjacent frames of video source data can be down-sampled and differenced, the difference value of each area computed, and the areas with a large inter-frame difference taken as feature areas; at the same time, the actual display pictures corresponding to the two frames of video source data are differenced, and the position of each feature area of the video source data within the actual display picture is detected (for example, using a sliding-window cross-correlation function), so that the correspondence between the projection pixels and the camera pixels can be confirmed.
  • after the correspondence between the projection pixels and the camera pixels has been determined, the distance between the projection lens 621 and the projection screen 622 is known, and the parameters of the focus position can be roughly determined from the design parameters of the projection lens 621; however, because of thermal defocus and other causes, the exact focusing parameters cannot be determined, so the real-time focusing device 61 provided in this embodiment can be used for adjustment to realize focusing.
  • the real-time focusing device 61 is used to obtain video source data; the projection device 62 is connected to the real-time focusing device 61 and is used to receive the video source data sent by the real-time focusing device 61, perform projection display, and form a projection display image. Specifically, the focus position of the projection lens 621 is controlled by the real-time focusing device 61; after receiving the video source data sent by the real-time focusing device 61, the projection lens 621 can project it in real time, forming on the projection screen 622 the picture to be photographed.
  • the camera device 63 is connected to the real-time focusing device 61 and is used to photograph the projection display image shown by the projection device 62 to obtain the captured image corresponding to the video source data. Specifically, after receiving the synchronous shooting signal sent by the real-time focusing device 61, the camera device 63 photographs the projection display image generated by the projection device 62 to obtain the captured image, and sends the captured image back to the real-time focusing device 61.
  • the real-time focusing device 61 is also used to process the video source data and the captured image to obtain the feature information of the video source data and the feature information of the captured image, and, based on the two sets of feature information, to generate a control command and send it to the motor (not shown in the figure), so that the motor drives the projection lens 621 to move and achieves focusing. Further, the position of the projection lens 621 or the parameters of the projection lens 621 can be adjusted; this embodiment takes adjusting the position of the projection lens 621 as an example.
  • the projection system further includes a video source 64, and the video source 64 is used to send the video source data to the synchronization module 6111;
  • the real-time focusing device 61 includes a processor 611 and a memory 612 that are connected to each other.
  • the processor 611 is used to receive the video source data;
  • the memory 612 is used to receive the video source data sent by the processor 611 and store it; the processor 611 can also obtain the video source data from the memory 612 and receive the captured image sent by the camera device 63.
  • the synchronization module 6111 is connected to the memory 612 and the video source 64 , and is used for receiving the video source data and the captured image sent by the camera 63 , and storing the video source data in the memory 612 .
  • the feature extraction module 6112 is connected to the synchronization module 6111, and is used for processing the received video source data and the captured image to obtain multiple feature points in the video source data and multiple feature points in the captured image.
  • the focus decision module 6113 is connected to the feature extraction module 6112 and is used to determine the adjustment direction of the projection lens 621 based on the multiple feature points of the video source data and the multiple feature points of the captured image, generate a control command corresponding to the adjustment direction, and send the control command to the motor.
  • while controlling the projection device 62 to display the video source data, the synchronization module 6111 can cache the image projected by the projection device 62 and, after receiving the picture photographed by the camera device 63, retrieve the corresponding video source data from the memory 612 and send the captured image and the video source data together to the feature extraction module 6112. The feature extraction module 6112 can use a specific image processing algorithm or a deep learning algorithm to convert the input images into feature information strongly correlated with focus, and send the feature information to the focus decision module 6113; the focus decision module 6113 can then analyze the focus condition and control the projection device 62 to perform the focus adjustment.
  • the projection system includes a synchronization module 6111, a feature extraction module 6112, and a focus decision module 6113. The synchronization module 6111 is responsible for buffering the video source data sent by the video source 64, controls the camera device 63 to capture the image at the correct time to generate the captured image, and passes the video source data and its corresponding captured image to the feature extraction module 6112. The feature extraction module 6112 extracts and matches, from the two synchronized images, feature information that does not depend on ambient light. The focus decision module 6113 calculates the focus areas from the input feature information, and computes the sharpness of the focus area in the video source data and the sharpness of the focus area in the captured image, thereby judging the adjustment direction of the projection lens 621; it outputs a control command to the motor, and under the control command the motor drives the projection lens 621 to move, realizing real-time imperceptible focusing. This can resolve the thermal defocus and other defocus that occur while the projection device 62 is playing content, and gives the user a better viewing experience.
  • FIG. 8 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided by the present application.
  • the computer-readable storage medium 80 is used to store a computer program 81 which, when executed by a processor, implements the real-time focusing method in the embodiments above.
  • the computer-readable storage medium 80 may be a server, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
  • the disclosed method and device may be implemented in other manners.
  • the device implementations described above are only illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.
  • Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this implementation manner.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Projection Apparatus (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses a real-time focusing method, apparatus, system, and computer-readable storage medium. The method includes: acquiring video source data and a captured image; processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; and, based on the feature information of the video source data and the feature information of the captured image, generating a control command and sending the control command to a motor, so that the motor drives the projection lens to move and achieves focusing. In this way, the present application can achieve automatic real-time focusing.

Description

Real-time focusing method, apparatus, system, and computer-readable storage medium

Technical Field

The present application relates to the field of projection technology, and in particular to a real-time focusing method, apparatus, system, and computer-readable storage medium.

Background Art

Autofocus is a very important projector function; it allows a projector to adapt to more usage scenarios. The specific needs can be refined into the following two aspects: resetting the focal plane after first installation or after the projector is moved; and correcting thermal defocus: when the projector stays on for a long time, heat generation causes temperature changes, under which the lens and the components of the optical path defocus. In addition, under particular working conditions vibration may cause misalignment and hence blur. Such real-time blurring cannot be corrected by current autofocus technology. Traditional focusing schemes need to pause the display and project a specific reference picture to focus, so imperceptible focusing cannot be achieved. A traditional scheme can also adjust the focus motor by means of a motion sensor; however, because the error and noise of motion sensors are large, and the movement of the projector and the relative relationship between the projector and the screen are complicated, good results cannot be obtained, and the blur caused by thermal defocus cannot be corrected.
Summary of the Invention

The present application provides a real-time focusing method, apparatus, system, and computer-readable storage medium that can achieve automatic real-time focusing.

To solve the above technical problem, the technical solution adopted by the present application is to provide a real-time focusing method. The method includes: acquiring video source data and a captured image; processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image; and, based on the feature information of the video source data and the feature information of the captured image, generating a control command and sending the control command to a motor, so that the motor drives a projection lens to move and achieves focusing.

To solve the above technical problem, another technical solution adopted by the present application is to provide a real-time focusing device. The real-time focusing device includes a memory and a processor connected to each other, wherein the memory is used to store a computer program that, when executed by the processor, implements the real-time focusing method described above.

To solve the above technical problem, another technical solution adopted by the present application is to provide a projection system. The projection system includes a real-time focusing device, a projection device, and a camera device. The real-time focusing device is used to receive video source data; the projection device is connected to the real-time focusing device and is used to receive the video source data sent by the real-time focusing device and perform projection display, wherein the projection device includes a projection lens and a motor connected to each other; the camera device is connected to the real-time focusing device and is used to photograph the projection display image shown by the projection device to obtain a captured image corresponding to the video source data. The real-time focusing device is further used to process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image, and, based on the feature information of the video source data and the feature information of the captured image, to generate a control command and send the control command to the motor, so that the motor drives the projection lens to move and achieves focusing.

To solve the above technical problem, another technical solution adopted by the present application is to provide a computer-readable storage medium. The computer-readable storage medium is used to store a computer program that, when executed by a processor, implements the real-time focusing method described above.

Through the above solutions, the beneficial effects of the present application are as follows: the received video source data is stored, and the projection device is controlled to display the video source data so that a captured image corresponding to the video source data is obtained; the video source data and the captured image are then processed to obtain the corresponding feature information; using the feature information of the video source data and the feature information of the captured image, the adjustment direction of the projection lens can be determined; corresponding control commands are then generated according to the adjustment direction to control the motor to drive the projection lens to move. Focusing can thus be performed while the picture is playing, without manual adjustment, so real-time and imperceptible focusing can be achieved and thermal defocus can be corrected.
Brief Description of the Drawings

To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:

FIG. 1 is a schematic flowchart of an embodiment of the real-time focusing method provided by the present application;

FIG. 2 is a schematic flowchart of another embodiment of the real-time focusing method provided by the present application;

FIG. 3 is a schematic flowchart of step 131 in the embodiment shown in FIG. 2;

FIG. 4 is a schematic diagram of a captured image in the embodiment shown in FIG. 2;

FIG. 5 is a schematic structural diagram of an embodiment of the real-time focusing device provided by the present application;

FIG. 6 is a schematic structural diagram of an embodiment of the projection system provided by the present application;

FIG. 7 is a schematic diagram of the connection between the video source and the real-time focusing device in the embodiment shown in FIG. 6;

FIG. 8 is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present application.
Detailed Description

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
The present application provides a real-time focusing method. The method includes:

acquiring video source data and a captured image;

processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image;

based on the feature information of the video source data and the feature information of the captured image, generating a control command and sending the control command to a motor, so that the motor drives a projection lens to move and achieves focusing.

The method is described below with reference to specific embodiments:
Please refer to FIG. 1, which is a schematic flowchart of an embodiment of the real-time focusing method provided by the present application. The method includes:

Step 11: Acquire video source data and a captured image.
The video source data may be the pixel values of an image, and different video source data may be received in different time periods. The video source data may be acquired online in real time, received from another device, or read from a storage device. To facilitate the subsequent comparison with the captured image, the video source data can be stored in the real-time focusing device. Understandably, the amount of video source data stored in the real-time focusing device is relatively small and can be cleaned up regularly; the video source signal within a preset time around the current video source data can be acquired in real time, for example, the video source data within 5 seconds can be acquired and cached.
Further, the video source data sent by a video source can be received and then forwarded to the projection device, and the projection device controlled to display the video source data, generating a projection display image. After the projection device starts to display, a synchronous shooting signal can be sent to the camera device to control it to photograph the projection display image on the projection screen, obtaining the captured image corresponding to the video source data; the captured image is at least partially identical in content to the video source data. In addition, the real-time focusing device takes out the buffered video source data of the same frame, so that the video source data can be compared with the captured image.
In other embodiments, all the received video source data may be cached, and a set of video source data retrieved from the stored video source data for projection display.

Understandably, video source data may be acquired every frame, or once every several frames or several seconds.
Step 12: Process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image.

Whether focusing is needed can be judged from the degree of blur of the captured image; however, judging from blur alone makes it hard to tell whether the display blur is caused by defocus, so the captured image is compared with the video source data. Specifically, only a part of the captured image may be compared with a part of the video source data (a local comparison), or the entire captured image may be compared with all of the video source data (a global comparison). If the video source data is sharp and the captured image is sharp, no focus adjustment is needed; if the video source data is sharp but the captured image is blurred, refocusing is required. Further, after the captured image is acquired, image processing methods can be used to analyze the video source data and the captured image and extract the feature information of the video source data and the feature information of the captured image respectively.

Understandably, blur and sharpness are relative: if the difference in sharpness between the video source data and the captured image is within a certain range, the two can be judged equally sharp (or equally blurred); otherwise the captured image can be judged blurred.
In a specific embodiment, the feature information includes feature points. A specific image processing algorithm or a deep learning algorithm can be used to process the video source data and the captured image to obtain multiple feature points in the video source data and multiple feature points in the captured image respectively; for example, a feature extraction algorithm can be used to extract the feature points of the images, as in the sketch below.
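The patent does not name a particular feature extraction algorithm. A minimal sketch of this step, assuming OpenCV's ORB detector as the extractor and BGR frames as input (both illustrative assumptions, not choices stated in the patent):

```python
import cv2

def extract_feature_points(source_frame, captured_frame, n_points=500):
    """Detect feature points in a source frame and the corresponding
    captured frame.  ORB stands in for the unspecified "image processing
    algorithm or deep learning algorithm" of the text."""
    orb = cv2.ORB_create(nfeatures=n_points)
    src_gray = cv2.cvtColor(source_frame, cv2.COLOR_BGR2GRAY)
    cap_gray = cv2.cvtColor(captured_frame, cv2.COLOR_BGR2GRAY)
    src_kp, src_desc = orb.detectAndCompute(src_gray, None)
    cap_kp, cap_desc = orb.detectAndCompute(cap_gray, None)
    return (src_kp, src_desc), (cap_kp, cap_desc)
```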
Step 13: Based on the feature information of the video source data and the feature information of the captured image, generate a control command and send the control command to the motor, so that the motor drives the projection lens to move and achieves focusing.

The projection device includes a projection lens and a motor connected to each other. After the feature information is generated, it can be used to generate a control command; the control command is then sent to the motor so that the motor drives the projection lens to move, adjusting the focal length and the focusing position and thereby achieving focus.
In a specific embodiment, adjusting the position of the projection lens is taken as an example; this specifically includes the following steps:

Step 131: Determine the corresponding focus area based on the feature information of the video source data and the feature information of the captured image.

After the feature points of the video source data and the feature points of the captured image have been extracted, these feature points can be used to determine the focus area, processed in the manner shown in FIG. 3, which specifically includes the following steps:

Step 1311: Match the multiple feature points in the video source data with the multiple feature points in the captured image to determine the corresponding matching areas.
The comparison is between the captured image and the source video data. However, because of the position or viewing angle of the camera device, the picture photographed when the camera shoots the picture displayed by the projection device (i.e., the captured image) may not be exactly the same as the actual projected picture (i.e., the projection display image); the captured image then cannot represent the actual projection. To make the comparison convenient, the video source data can first be matched with the captured image. Further, a local part of the captured image can be compared with the video source data to find the video source data corresponding to at least part of the captured image, establishing the correspondence between the video source data and the captured image. For example, as shown in FIG. 4, the captured image is denoted A and a local region within it is denoted B; the pixel values in local region B are matched against the video source data, and the pixel values in the video source data with the smallest difference from those in local region B are found. The area composed of these pixel values is the area that matches local region B, i.e., the matching area (see the sketch below).
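As a concrete illustration of this smallest-difference search, the sketch below slides local region B over the source image with OpenCV's matchTemplate and the squared-difference criterion; the specific function is an assumption, since the text only requires finding the source area whose pixel values differ least from region B:

```python
import cv2

def find_matching_area(source_img, region_b):
    """Return (x, y, w, h) of the source area that best matches local
    region B of the captured image, using the sum of squared pixel
    differences (smallest difference = best match)."""
    result = cv2.matchTemplate(source_img, region_b, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(result)  # TM_SQDIFF: minimum is best
    h, w = region_b.shape[:2]
    return min_loc[0], min_loc[1], w, h
```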
Further, before matching, the distortion of the captured image can first be removed; the corners or other feature points of the image are then found, and the positional correspondence of these corners or feature points between the video source data and the captured image is used to find the matching area between the video source data and the captured image.
In a specific embodiment, to prevent ambient light from affecting the image processing, ambient light subtraction can be performed after the captured image is de-distorted; when the ambient light is weak or uniform, the subtraction may be omitted. Alternatively, the difference between two different frames can be used to achieve ambient light subtraction: if the difference between the two frames is below a set threshold, ambient light subtraction is not required, for example, it is not performed while a still picture is displayed.
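The frame-difference variant could be sketched as follows, assuming two captures are available as NumPy arrays; the threshold value is illustrative:

```python
import numpy as np

def subtract_ambient(prev_capture, cur_capture, threshold=5.0):
    """Frame-difference ambient-light suppression: light that is constant
    across the two captures cancels in their difference.  When the frames
    are nearly identical (e.g. a still picture is displayed), subtraction
    is skipped and the latest capture is returned unchanged."""
    diff = cur_capture.astype(np.float32) - prev_capture.astype(np.float32)
    if np.abs(diff).mean() < threshold:
        return cur_capture          # static scene: no subtraction needed
    return diff                     # ambient term removed (signed image)
```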
Step 1312: Process the pixels of each matching area separately to obtain multiple focus feature points.

After matching, in order to further decide which local part of the image to use for comparison, one or more specific regions can be selected from the matching area. If the region found contains no reasonably sharp content, special algorithms would be needed for further processing, which places higher demands on the algorithm and takes longer; it is therefore possible to look for the sharpest region within the matching area, i.e., the high-spatial-frequency region, which includes multiple focus feature points.

Further, a gradient operator, a Laplacian operator, or another edge-extraction operator can be used to process the matching area in the video source data and the matching area in the captured image, for example, to extract the pixels in the matching area that change drastically and obtain the corresponding high-frequency pixels. Specifically, positions where pixel values change sharply correspond to the high-frequency signal regions of the image (i.e., high-spatial-frequency regions), such as edges; positions where pixel values change little correspond to low-frequency signal regions, such as large uniform color blocks. Because the high-frequency signal region is generally the edge or contour of the image and better represents its contour information, the high-frequency signal region is selected to judge the focus condition.
The pixel variation of each matching area can then be measured and filtered with a set threshold, and the regions rich in sharply varying pixels taken as high-spatial-frequency regions; that is, it is judged whether the pixel value of each high-frequency pixel is greater than a preset pixel value. If the pixel value of a high-frequency pixel is greater than the preset pixel value, the high-frequency pixel is used as a focus feature point; if it is less than or equal to the preset pixel value, the high-frequency pixel is discarded.
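The operator-plus-threshold procedure above could be sketched like this, using the Laplacian named in the text; the preset pixel value of 40.0 is an illustrative assumption:

```python
import cv2
import numpy as np

def focus_feature_points(matching_area_gray, preset_value=40.0):
    """Return an (N, 2) array of (x, y) focus feature points: pixels of
    the matching area whose absolute Laplacian response exceeds the
    preset value; weaker high-frequency pixels are discarded."""
    response = np.abs(cv2.Laplacian(matching_area_gray, cv2.CV_32F))
    ys, xs = np.where(response > preset_value)
    return np.stack([xs, ys], axis=1)
```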
Step 1313: Connect the adjacent focus feature points among the multiple focus feature points to obtain the corresponding focus areas.

After the focus feature points are obtained, the adjacent points among the multiple focus feature points corresponding to the video source data can be connected. Specifically, the distance between each focus feature point and the other focus feature points can be calculated, the two focus feature points with the shortest distance taken as adjacent focus feature points, and the adjacent focus feature points connected with straight lines to obtain a closed area, i.e., the focus area of the video source data. The multiple focus feature points corresponding to the captured image are processed in the same way to obtain the focus area of the captured image, so that approximate focus areas are found in the two images.
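One simple way to link neighboring points into a closed region, sketched below, is to order the points by angle around their centroid and take the resulting polygon as the focus area; this is a stand-in approximation, not the patent's exact nearest-neighbor linking rule, and it behaves well only for roughly convex point clouds:

```python
import numpy as np

def focus_area_polygon(points):
    """Order focus feature points by angle around their centroid and
    close the polygon, yielding one closed focus area."""
    pts = np.asarray(points, dtype=np.float32)
    centroid = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - centroid[1], pts[:, 0] - centroid[0])
    ordered = pts[np.argsort(angles)]
    return np.vstack([ordered, ordered[:1]])  # repeat first vertex to close
```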
Understandably, multiple disjoint high-spatial-frequency regions can be generated at the same time, so as to obtain a more accurate focusing result.

Step 132: Acquire the sharpness of the focus area in the video source data, denoted the first sharpness, and the sharpness of the focus area in the captured image, denoted the second sharpness; based on the first sharpness and the second sharpness, determine the adjustment direction of the projection lens.

After the focus areas are obtained, an image sharpness evaluation function can be used to calculate the first sharpness and the second sharpness, and then the difference between them. The sharpness difference can measure the focus condition and the defocus distance: the larger the difference, the larger the defocus distance. Specifically, a parameter that indicates high-frequency information can be computed for each high-spatial-frequency region and used as the sharpness, for example, the high-frequency part of the spatial spectrum, the sum of squared gradient values, or the sum of absolute values of the Laplacian. In addition, to ensure comparability between different images, the difference between the two can be normalized with respect to parameters such as area or total brightness; if there are multiple high-spatial-frequency regions, each can be normalized by its own area and the results averaged to obtain the final value.
In a specific embodiment, to improve the stability of the system and obtain a stable defocus distance, the final defocus distance can be filtered, for example with mean filtering, or with mean filtering that discards the maximum and the minimum values.
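A sketch of one sharpness evaluation function from the list above (sum of absolute Laplacian values, normalized by region area) together with the trimmed-mean filtering of the defocus distance:

```python
import cv2
import numpy as np

def region_sharpness(region_gray):
    """Sharpness of one focus region: sum of absolute Laplacian values,
    divided by the region's pixel count so that regions of different
    sizes remain comparable."""
    lap = cv2.Laplacian(region_gray, cv2.CV_32F)
    return float(np.abs(lap).sum()) / lap.size

def stable_defocus(recent_estimates):
    """Filter recent defocus-distance estimates: drop the maximum and the
    minimum and average the rest (plain mean if fewer than three)."""
    h = sorted(recent_estimates)
    core = h[1:-1] if len(h) >= 3 else h
    return sum(core) / len(core)
```

The defocus distance would then be taken, for example, as the gap between region_sharpness of the source focus area and region_sharpness of the captured focus area.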
Whether the difference between the first sharpness and the second sharpness exceeds a preset defocus threshold can then be judged. If it does, feedback adjustment begins: the projection lens is controlled to move a preset distance in an arbitrary direction and the defocus condition is evaluated again, i.e., the flow returns to step 11. If the defocus distance has increased after the previous adjustment, the defocus direction is opposite to the previous adjustment direction; if it has decreased, the defocus direction is the same as the previous adjustment direction. Once the defocus direction has been judged, a single fine adjustment or several rounds of feedback fine adjustment can be made until the difference between the first sharpness and the second sharpness is less than or equal to the preset defocus threshold. The preset distance is smaller than the preset defocus threshold; that is, the threshold on the degree of defocus is larger than the defocus caused by moving the preset distance, which prevents an adjustment from overshooting the optimal focus position. If the difference between the first sharpness and the second sharpness is already less than or equal to the preset defocus threshold, focusing is judged successful and no adjustment is needed.
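The feedback loop described above can be sketched as a simple hill climb; `move_lens` and `measure_defocus` are assumed interfaces standing in for the motor command path and the sharpness-difference computation, not APIs defined by the patent:

```python
def feedback_focus(move_lens, measure_defocus, preset_step=1,
                   defocus_threshold=0.05, max_iters=50):
    """Move the lens one preset step at a time; if the defocus distance
    grows after a move, the defocus direction was opposite to the move,
    so reverse; stop once the sharpness difference is at or below the
    preset defocus threshold."""
    direction = +1                    # the initial direction is arbitrary
    defocus = measure_defocus()
    for _ in range(max_iters):
        if defocus <= defocus_threshold:
            return True               # focusing judged successful
        move_lens(direction * preset_step)
        new_defocus = measure_defocus()
        if new_defocus > defocus:     # it got worse: reverse direction
            direction = -direction
        defocus = new_defocus
    return False
```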
In this embodiment, the matching areas of the video source data and the captured image can first be calculated from the obtained feature points, and the corresponding focus feature points obtained by processing the pixels of the matching areas. Based on the distribution of the two groups of focus feature points, approximate focus areas are obtained by connecting adjacent focus feature points. With the focus areas, an image sharpness evaluation function can be used to calculate the sharpness of each focus area, and the gap between the two sharpness values used to judge whether the current focus is accurate. If it is not, feedback adjustment begins; how to complete focusing can be analyzed from the extracted focus areas, achieving automatic focus.
Please refer to FIG. 5, which is a schematic structural diagram of an embodiment of the real-time focusing device provided by the present application. The real-time focusing device 50 includes a memory 51 and a processor 52 connected to each other; the memory 51 is used to store a computer program which, when executed by the processor 52, implements the real-time focusing method described above.

This embodiment provides a reliable real-time focusing device 50. When the projection device goes out of focus, the user does not need to judge and control focusing manually; the device determines the defocus automatically and can autofocus in real time while any video image is displayed, so the user experience is not affected, and defocus caused by thermal drift, vibration, and similar causes can be corrected in real time.
Please refer to FIG. 6, which is a schematic structural diagram of an embodiment of the projection system provided by the present application. The projection system includes a real-time focusing device 61, a projection device 62, and a camera device 63; the projection device 62 includes a projection lens 621 and a projection screen 622, and the camera device 63 may be a camera.

Before the real-time focusing flow starts, the intrinsic and extrinsic parameters of the projection lens 621 and the camera device 63 can be calibrated and the calibration parameters saved in the real-time focusing device 61. The calibration parameters include the distortion parameters of the projection device 62, the distortion parameters of the camera device 63, and the relative position and relative orientation of the projection device 62 and the camera device 63, among other information.

After calibration, the correspondence between projection pixels and camera pixels can be roughly confirmed; however, because the distance to the projection screen 622 is not determined, the correspondence between the projection region and the photographed region, that is, the coordinates of the four corner points of the projection region on the camera sensor, still cannot be determined even after the intrinsic and extrinsic parameters are calibrated. How to determine this correspondence is described below.
For fixed installations or relatively fixed usage scenarios, the projection region can be obtained after installation by subtracting a black-field picture from a white-field picture, and the coordinates of the four corner points obtained with a conventional corner-detection or line-detection algorithm (such as the Hough transform) combined with a clustering algorithm (such as the K-means clustering algorithm). Once the intrinsic and extrinsic parameters of the projection lens 621 and the camera and the four corner coordinates are available, the correspondence between projection pixel coordinates and camera pixel coordinates can be obtained with a four-point transform after de-distortion. Two complementary black-and-white checkerboard pictures with a specific number of rows and columns can also be subtracted, and corner detection, line detection, or clustering applied to obtain a more precise alignment.
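A sketch of this fixed-installation procedure under simplifying assumptions (grayscale captures, a single bright quadrilateral in the difference image, and contour approximation standing in for the corner/line detection plus clustering mentioned above); a robust version must verify that exactly four corners were found and order them consistently:

```python
import cv2
import numpy as np

def projection_correspondence(white_capture, black_capture, proj_w, proj_h):
    """Isolate the projection region by differencing white-field and
    black-field captures, approximate it with a quadrilateral, and build
    the four-point (perspective) transform from projector pixel
    coordinates to camera pixel coordinates."""
    diff = cv2.absdiff(white_capture, black_capture)
    _, mask = cv2.threshold(diff, 60, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    region = max(contours, key=cv2.contourArea)
    quad = cv2.approxPolyDP(region, 0.02 * cv2.arcLength(region, True), True)
    cam_corners = quad.reshape(-1, 2).astype(np.float32)[:4]
    proj_corners = np.float32([[0, 0], [proj_w, 0],
                               [proj_w, proj_h], [0, proj_h]])
    # assumes the detected corners come out in the same order as
    # proj_corners; real code must sort them consistently first
    return cv2.getPerspectiveTransform(proj_corners, cam_corners)
```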
For more complex usage scenarios, for example when the relative position of the projection lens 621 and the projection screen 622 may change at any time, imperceptible focusing can be achieved as follows: two adjacent frames of video source data are down-sampled and differenced, the magnitude of the difference is computed per area, and areas with a large inter-frame difference are taken as feature areas; at the same time, the actual display pictures corresponding to the two frames of video source data are differenced, and the position of each feature area of the video source data within the actual display picture is detected (for example, using a sliding-window cross-correlation function), thereby confirming the correspondence between the projection pixels and the camera pixels.
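The dynamic-scene variant might be sketched like this; `region` is an assumed (x, y, w, h) feature area already selected from the source difference image as a high-difference area:

```python
import cv2
import numpy as np

def locate_feature_region(src_prev, src_cur, cap_prev, cap_cur, region):
    """Difference adjacent source frames and adjacent captures, then
    slide the source feature region over the capture difference with
    normalized cross-correlation to find where it appears; returns the
    top-left corner of the best match in camera coordinates."""
    x, y, w, h = region
    src_diff = cv2.absdiff(src_cur, src_prev).astype(np.float32)
    cap_diff = cv2.absdiff(cap_cur, cap_prev).astype(np.float32)
    template = src_diff[y:y + h, x:x + w]
    score = cv2.matchTemplate(cap_diff, template, cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)  # correlation: maximum is best
    return max_loc
```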
After the correspondence between projection pixels and camera pixels has been determined, the distance between the projection lens 621 and the projection screen 622 is known, and the parameters of the focus position can be roughly determined from the design parameters of the projection lens 621; however, because of thermal defocus and other causes, the exact focusing parameters cannot be determined, so the real-time focusing device 61 provided in this embodiment can be used to make the adjustment and achieve focusing.

The real-time focusing device 61 is used to acquire video source data. The projection device 62 is connected to the real-time focusing device 61 and is used to receive the video source data sent by the real-time focusing device 61, perform projection display, and form a projection display image. Specifically, the focus position of the projection lens 621 is controlled by the real-time focusing device 61; after receiving the video source data sent by the real-time focusing device 61, the projection lens 621 can project it in real time, forming on the projection screen 622 the picture to be photographed.

The camera device 63 is connected to the real-time focusing device 61 and is used to photograph the projection display image shown by the projection device 62 to obtain the captured image corresponding to the video source data. Specifically, after receiving the synchronous shooting signal sent by the real-time focusing device 61, the camera device 63 can photograph the projection display image generated by the projection device 62, obtain the captured image, and send the captured image back to the real-time focusing device 61.

The real-time focusing device 61 is also used to process the video source data and the captured image to obtain the feature information of the video source data and the feature information of the captured image, and, based on the two sets of feature information, to generate a control command and send it to the motor (not shown in the figure), so that the motor drives the projection lens 621 to move and achieves focusing. Further, the position of the projection lens 621 or the parameters of the projection lens 621 can be adjusted; this embodiment takes adjusting the position of the projection lens 621 as an example.
In a specific embodiment, as shown in FIG. 6, the projection system further includes a video source 64, which is used to send video source data to a synchronization module 6111. The real-time focusing device 61 includes a processor 611 and a memory 612 connected to each other: the processor 611 is used to receive the video source data; the memory 612 is used to receive the video source data sent by the processor 611 and store it; the processor 611 can also obtain the video source data from the memory 612 and receive the captured image sent by the camera device 63.

The synchronization module 6111 is connected to the memory 612 and the video source 64; it is used to receive the video source data and the captured image sent by the camera device 63, and to store the video source data in the memory 612.

The feature extraction module 6112 is connected to the synchronization module 6111; it is used to process the received video source data and captured image to obtain the multiple feature points in the video source data and the multiple feature points in the captured image.

The focus decision module 6113 is connected to the feature extraction module 6112; it is used to determine the adjustment direction of the projection lens 621 based on the multiple feature points of the video source data and the multiple feature points of the captured image, generate the control command corresponding to the adjustment direction, and send the control command to the motor.
Further, while controlling the projection device 62 to display the video source data, the synchronization module 6111 can cache the image projected by the projection device 62 and, after receiving the picture photographed by the camera device 63, retrieve the corresponding video source data from the memory 612 and send the captured image and the video source data together to the feature extraction module 6112. The feature extraction module 6112 can use a specific image processing algorithm or a deep learning algorithm to convert the input images into feature information strongly correlated with focus, and send the feature information to the focus decision module 6113; the focus decision module 6113 can analyze the focus condition and control the projection device 62 to perform the focus adjustment.

The focus decision module 6113 can analyze, from the extracted focus areas, how to complete focusing. Specifically, the focus areas of the video source data and of the captured image are first calculated from the obtained feature points; since the two groups of feature points have already been matched in the feature extraction module 6112, adjacent feature points can be connected according to the distribution of the two groups, yielding approximate focus areas in the two images. An image sharpness evaluation function can then calculate the sharpness of each focus area, and the gap between the sharpness values used to judge whether the current focus is accurate. If it is not, feedback adjustment begins: the projection lens 621 is controlled to move a preset distance in an arbitrary direction and the post-move defocus evaluated; if the defocus distance increased after the previous adjustment, the defocus direction is opposite to the previous adjustment direction, otherwise it is the same. After the defocus direction has been judged, adjustment continues until the defocus distance is within an acceptable range.

This embodiment proposes a projection system capable of real-time focusing. The projection system contains the synchronization module 6111, the feature extraction module 6112, and the focus decision module 6113. The synchronization module 6111 is responsible for buffering the video source data sent by the video source 64, controls the camera device 63 to take the picture at the correct time to generate the captured image, and passes the video source data and its corresponding captured image to the feature extraction module 6112. The feature extraction module 6112 extracts and matches, from the two synchronized images, feature information that does not depend on ambient light. The focus decision module 6113 calculates the focus areas from the input feature information, computes the sharpness of the focus area in the video source data and the sharpness of the focus area in the captured image, and thereby judges the adjustment direction of the projection lens 621; it outputs a control command to the motor, and under the control command the motor drives the projection lens 621 to move, realizing real-time, imperceptible focusing. This can resolve the thermal defocus and other defocus that occur while the projection device 62 is playing content, and gives the user a better viewing experience.
Please refer to FIG. 8, which is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present application. The computer-readable storage medium 80 is used to store a computer program 81 which, when executed by a processor, implements the real-time focusing method in the embodiments above.

The computer-readable storage medium 80 may be a server, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or any other medium that can store program code.
In the several implementations provided in the present application, it should be understood that the disclosed method and device may be implemented in other ways. For example, the device implementations described above are only illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.

Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of this implementation.

In addition, the functional units in the implementations of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

The above are only embodiments of the present application and do not thereby limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (12)

  1. A real-time focusing method, characterized by comprising:
    acquiring video source data and a captured image;
    processing the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image;
    based on the feature information of the video source data and the feature information of the captured image, generating a control command and sending the control command to a motor, so that the motor drives a projection lens to move and achieves focusing.
  2. The real-time focusing method according to claim 1, characterized in that the step of generating a control command based on the feature information of the video source data and the feature information of the captured image and sending the control command to the motor so that the motor drives the projection lens to move comprises:
    determining a corresponding focus area based on the feature information of the video source data and the feature information of the captured image;
    acquiring the sharpness of the focus area in the video source data, denoted a first sharpness;
    acquiring the sharpness of the focus area in the captured image, denoted a second sharpness;
    determining an adjustment direction of the projection lens based on the first sharpness and the second sharpness.
  3. The real-time focusing method according to claim 2, characterized in that the method further comprises:
    calculating the first sharpness and the second sharpness using an image sharpness evaluation function;
    calculating the difference between the first sharpness and the second sharpness;
    judging whether the difference between the first sharpness and the second sharpness is greater than a preset defocus threshold;
    if so, controlling the projection lens to move a preset distance in an arbitrary direction, and returning to the step of acquiring video source data and a captured image, until the difference between the first sharpness and the second sharpness is less than or equal to the preset defocus threshold;
    if not, determining that focusing is successful;
    wherein the preset distance is smaller than the preset defocus threshold.
  4. The real-time focusing method according to claim 2, characterized in that the feature information comprises a plurality of feature points, and the step of determining a corresponding focus area based on the feature information of the video source data and the feature information of the captured image comprises:
    matching the plurality of feature points in the video source data with the plurality of feature points in the captured image to determine corresponding matching areas;
    processing the pixels of each matching area separately to obtain a plurality of focus feature points;
    connecting adjacent focus feature points among the plurality of focus feature points to obtain the corresponding focus areas.
  5. The real-time focusing method according to claim 4, characterized in that the step of processing the pixels of each matching area separately to obtain a plurality of focus feature points comprises:
    processing the matching area in the video source data and the matching area in the captured image with a gradient operator or a Laplacian operator to obtain a plurality of high-frequency pixels;
    judging whether the pixel value of each high-frequency pixel is greater than a preset pixel value;
    if so, taking the high-frequency pixel as a focus feature point.
  6. The real-time focusing method according to claim 1, characterized in that the step of acquiring video source data and a captured image comprises:
    sending the video source data to a projection device for projection display;
    sending a synchronous shooting signal to a camera device to control the camera device to photograph the projection display image on a projection screen, obtaining the captured image corresponding to the video source data.
  7. A real-time focusing device, characterized by comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program which, when executed by the processor, implements the real-time focusing method according to any one of claims 1-6.
  8. A projection system, characterized by comprising:
    a real-time focusing device configured to receive video source data;
    a projection device connected to the real-time focusing device and configured to receive the video source data sent by the real-time focusing device and perform projection display, wherein the projection device comprises a projection lens and a motor connected to each other;
    a camera device connected to the real-time focusing device and configured to photograph the projection display image shown by the projection device to obtain the captured image corresponding to the video source data;
    wherein the real-time focusing device is further configured to process the video source data and the captured image to obtain feature information of the video source data and feature information of the captured image, and, based on the feature information of the video source data and the feature information of the captured image, to generate a control command and send the control command to the motor, so that the motor drives the projection lens to move and achieves focusing.
  9. The projection system according to claim 8, characterized in that the real-time focusing device comprises:
    a processor configured to receive the video source data;
    a memory connected to the processor and configured to receive the video source data sent by the processor and store the video source data;
    wherein the processor is further configured to acquire the video source data from the memory and to receive the captured image sent by the camera device.
  10. The projection system according to claim 9, characterized in that the feature information comprises a plurality of feature points, the projection device further comprises a projection screen, and the processor comprises:
    a synchronization module connected to the memory and configured to receive the video source data and the captured image and to store the video source data in the memory;
    a feature extraction module connected to the synchronization module and configured to process the received video source data and captured image to obtain the plurality of feature points in the video source data and the plurality of feature points in the captured image;
    a focus decision module connected to the feature extraction module and configured to determine the adjustment direction of the projection lens based on the plurality of feature points of the video source data and the plurality of feature points of the captured image, to generate a control command corresponding to the adjustment direction, and to send the control command to the motor.
  11. The projection system according to claim 10, characterized in that
    the projection system further comprises a video source connected to the synchronization module and configured to send projection data to the synchronization module, wherein the projection data comprises at least one frame of video source data.
  12. A computer-readable storage medium for storing a computer program, characterized in that the computer program, when executed by a processor, implements the real-time focusing method according to any one of claims 1-6.
PCT/CN2021/116773 2020-09-17 2021-09-06 Real-time focusing method, apparatus, system, and computer-readable storage medium WO2022057670A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010980932.2 2020-09-17
CN202010980932.2A CN114286064A (zh) 2020-09-17 Real-time focusing method, apparatus, system, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2022057670A1 true WO2022057670A1 (zh) 2022-03-24

Family

ID=80776392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116773 WO2022057670A1 (zh) 2020-09-17 2021-09-06 Real-time focusing method, apparatus, system, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN114286064A (zh)
WO (1) WO2022057670A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117191805A (zh) * 2023-10-26 2023-12-08 中导光电设备股份有限公司 Automatic focusing method and system for an AOI inspection head

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114666558B * 2022-04-13 2023-07-25 深圳市火乐科技发展有限公司 Method and apparatus for detecting projection picture sharpness, storage medium, and projection device
CN114760415B * 2022-04-18 2024-02-02 上海千映智能科技有限公司 Lens focusing method, system, device, and medium
CN116095477B * 2022-08-16 2023-10-20 荣耀终端有限公司 Focus processing system, method, device, and storage medium
CN115361541B * 2022-10-20 2023-01-24 潍坊歌尔电子有限公司 Method and apparatus for recording projection content of a projector, projector, and storage medium
CN117319618B * 2023-11-28 2024-03-19 维亮（深圳）科技有限公司 Method and system for judging projector thermal focus drift for sharpness evaluation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080297668A1 (en) * 2007-05-29 2008-12-04 Konica Minolta Opto, Inc. Video projection device
CN101840055A * 2010-05-28 2010-09-22 浙江工业大学 Video autofocus system based on an embedded media processor
CN107942601A * 2017-12-25 2018-04-20 天津天地伟业电子工业制造有限公司 Stepper motor lens focusing method based on temperature compensation
CN111050150A * 2019-12-24 2020-04-21 成都极米科技股份有限公司 Focal length adjustment method and apparatus, projection device, and storage medium
CN113132620A * 2019-12-31 2021-07-16 华为技术有限公司 Image capture method and related apparatus

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080297668A1 (en) * 2007-05-29 2008-12-04 Konica Minolta Opto, Inc. Video projection device
CN101840055A * 2010-05-28 2010-09-22 浙江工业大学 Video autofocus system based on an embedded media processor
CN107942601A * 2017-12-25 2018-04-20 天津天地伟业电子工业制造有限公司 Stepper motor lens focusing method based on temperature compensation
CN111050150A * 2019-12-24 2020-04-21 成都极米科技股份有限公司 Focal length adjustment method and apparatus, projection device, and storage medium
CN113132620A * 2019-12-31 2021-07-16 华为技术有限公司 Image capture method and related apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117191805A (zh) * 2023-10-26 2023-12-08 中导光电设备股份有限公司 Automatic focusing method and system for an AOI inspection head
CN117191805B (zh) 2023-10-26 2024-04-26 中导光电设备股份有限公司 Automatic focusing method and system for an AOI inspection head

Also Published As

Publication number Publication date
CN114286064A (zh) 2022-04-05

Similar Documents

Publication Publication Date Title
WO2022057670A1 (zh) 一种实时对焦方法、装置、系统和计算机可读存储介质
JP3528184B2 (ja) 画像信号の輝度補正装置及び輝度補正方法
US9307134B2 (en) Automatic setting of zoom, aperture and shutter speed based on scene depth map
CN109413336A (zh) 拍摄方法、装置、电子设备及计算机可读存储介质
US20170256036A1 (en) Automatic microlens array artifact correction for light-field images
JP5197279B2 (ja) コンピュータによって実施されるシーン内を移動している物体の3d位置を追跡する方法
CN105323465B (zh) 图像处理装置及其控制方法
US20110221920A1 (en) Digital photographing apparatus, method of controlling the same, and computer readable storage medium
US8754977B2 (en) Second camera for finding focal target in poorly exposed region of frame taken by first camera
WO2022037633A1 (zh) 双目摄像头的标定及图像矫正方法、装置、存储介质、终端、智能设备
CN108156369B (zh) 图像处理方法和装置
US10298853B2 (en) Image processing apparatus, method of controlling image processing apparatus, and imaging apparatus
JP2015501115A (ja) 時間的同期不整合を検出するためのビデオ処理装置および方法
CN110245549A (zh) 实时面部和对象操纵
US9094601B2 (en) Image capture device and audio hinting method thereof in focusing
CN115002433A (zh) 投影设备及roi特征区域选取方法
CN116320335A (zh) 一种投影设备及调整投影画面尺寸的方法
CN110730307A (zh) 一种摄像模组的调焦装置
CN108289170B (zh) 能够检测计量区域的拍照装置、方法及计算机可读介质
CN110677576B (zh) 一种摄像模组的调焦系统
US8982244B2 (en) Image capturing apparatus for luminance correction, a control method therefor, and a recording medium
JP2014126670A (ja) 撮像装置及びその制御方法、並びにプログラム
CN110661981B (zh) 一种对摄像模组的调焦系统进行远程管控的系统
KR20150000911A (ko) 투사면과 영상의 자동 맵핑 장치 및 방법
CN110730309A (zh) 一种摄像模组的调焦方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21868495

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21868495

Country of ref document: EP

Kind code of ref document: A1