WO2020237441A1 - User equipment and method of oblique view correction - Google Patents

User equipment and method of oblique view correction

Info

Publication number
WO2020237441A1
WO2020237441A1 (application no. PCT/CN2019/088417)
Authority
WO
WIPO (PCT)
Prior art keywords
image
depth
data
module
focused
Prior art date
Application number
PCT/CN2019/088417
Other languages
French (fr)
Inventor
Hirotake Cho
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd. filed Critical Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2019/088417 priority Critical patent/WO2020237441A1/en
Priority to CN201980096453.XA priority patent/CN113826376B/en
Priority to JP2021568633A priority patent/JP7346594B2/en
Publication of WO2020237441A1 publication Critical patent/WO2020237441A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/11Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/20Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6812Motion detection based on additional sensors, e.g. acceleration sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682Vibration or motion blur correction
    • H04N23/685Vibration or motion blur correction performed by mechanical compensation
    • H04N23/687Vibration or motion blur correction performed by mechanical compensation by shifting the lens or sensor position

Definitions

  • the present disclosure relates to the field of image processing technologies, and more particularly, to a user equipment (UE) , and a method of oblique view correction.
  • UE user equipment
  • FIG. 1: in current technology, if a user takes a picture of an obliquely oriented plane surface having a clear contour using a user equipment 1, such as a cell phone, the user equipment 1 will correct an image of the obliquely oriented plane surface to a shape without distortion.
  • However, if the obliquely oriented plane surface has an unclear contour, only a part of it is in a shooting area 2, or the surface has a horizontal width greater than the width of the shooting area 2, the user cannot obtain a single focused image of the whole target without perspective distortion.
  • US patent no. 6449004B1 discloses an electronic camera with oblique view correction. It discloses an electronic camera with an image pickup device for photoelectrically picking up a light image of an object to generate image data.
  • An oblique angle information provider is configured for providing information on an oblique angle between a sensing surface of the image pickup device and a surface of the object.
  • a distance detector is configured for detecting a distance to the object.
  • a corrector is configured to correct the generated image data, based on the provided oblique angle information and the detected distance, so as to produce a pseudo object image whose surface resides on a plane parallel to the sensing surface of the image pickup device.
  • US patent no. US7365301B2 discloses a three-dimensional shape detecting device, an image capturing device, and a three-dimensional shape detecting program. It discloses a three-dimensional shape detecting device comprising projection means which projects pattern light, image capturing means which captures a pattern light projection image of a subject on which the pattern light is projected, and a three-dimensional shape calculation means which calculates a three-dimensional shape of the subject based on a locus of the pattern light extracted from the pattern light projection image.
  • US patent no. US7711259B2 discloses a method and an apparatus for increasing depth of field for an imager. It discloses that the imager captures a plurality of images at respective different focus positions, combines the images into one image, and sharpens the one image. In an alternative exemplary embodiment, a single image is captured while the focus position changes during image capture, and the resulting image is sharpened.
  • European patent application no. EP0908847A2 discloses an image synthesis apparatus and an image synthesis method. It discloses that the image synthesis apparatus employs stored image information to generate coordinate transformation parameters that are used to set a positional relationship for selected images, changes the generated coordinate transformation parameters by using an arbitrary image as a reference position, provides the resultant coordinate transformation parameters as image synthesis information, and synthesizes the images in accordance with the image synthesis information.
  • An object of the present disclosure is to propose a user equipment (UE) and a method of oblique view correction that enable users to obtain a single image without perspective distortion.
  • UE user equipment
  • a user equipment includes an image sensing module and a processor coupled to the image sensing module.
  • the processor is configured to control the image sensing module to capture a color image, an infrared (IR) image and a depth image, estimate plane parameters from the depth image, calculate focal distance data from the depth image, control the image sensing module to capture partially focused images at focal distances from the focal distance data, and cut focused image data from the partially focused images and compose these focused image data to form a wholly focused image.
  • IR infrared
  • the processor is configured to adjust the wholly focused image to a non-perspective image.
  • the method of adjusting the wholly focused image to a non-perspective image includes steps of estimating coordinate data of four corners of the wholly focused image on perspective coordinate axes calculated from the depth image and dragging the wholly focused image to form a non-perspective image on real world coordinate axes.
  • the processor is configured to compose several of the non-perspective images to form a single image.
  • the UE further includes a display module, and the processor is configured to set a trimming candidate frame on the single image shown on the display module.
  • the method of estimating plane parameters from the depth image includes a step of estimating a normal vector of a plane from the depth image.
  • the UE further includes an inertial measurement unit (IMU) .
  • the method of estimating plane parameters from the depth image further includes a step of estimating a perspective vertical coordinate axis and a perspective horizontal coordinate axis from data of IMU.
  • the method of calculating focal distance data from the depth image includes a step of determining several focal distances so that areas of depth of field at these focal distances overlap to cover the whole of the color image.
  • the method of calculating focal distance data from the depth image further includes a step of determining whether each area of depth of field has texture.
  • the image sensing module includes a camera module for sensing color images, and a depth sensing module for sensing depth images.
  • the image sensing module further includes an image processor configured to control the camera module and the depth sensing module.
  • the camera module includes a lens module, an image sensor, an image sensor driver, a focus and optical image stabilization (OIS) driver, a focus and OIS actuator, and a gyro sensor.
  • the image sensor driver is configured to control the image sensor to capture images.
  • the focus and OIS driver is configured to control the focus and OIS actuator to focus the lens module and to move the lens module to compensate for hand shake.
  • the gyro sensor is configured to provide motion data to the focus and OIS driver.
  • the depth sensing module includes a projector, a lens, a range sensor, and a range sensor driver.
  • the range sensor driver is configured to control the projector to project dot matrix pulse light, and to control the range sensor to capture a reflected dot matrix image focused by the lens.
  • the UE further includes a memory configured to record programs, the image data, the plane parameters and a translation matrix.
  • the depth image includes point cloud data.
  • the UE further includes an input module configured to receive a human instruction, a codec configured to compress and decompress multimedia data, a speaker and a microphone connected to the codec, a wireless communication module configured to transmit and receive messages, and a global navigation satellite system (GNSS) module configured to provide positioning information.
  • GNSS global navigation satellite system
  • a method of oblique view correction includes capturing a color image, an infrared (IR) image and a depth image by an image sensing module, estimating plane parameters from the depth image, calculating focal distance data from the depth image, capturing partially focused images at these focal distances from the focal distance data by the image sensing module, and cutting focused image data from the partially focused images and composing these focused image data to form a wholly focused image.
  • IR infrared
  • the method of oblique view correction further includes a step of adjusting the wholly focused image to a non-perspective image.
  • the step of adjusting the wholly focused image to a non-perspective image further includes steps of estimating coordinate data of four corners of the wholly focused image on perspective coordinate axes calculated from the depth image and dragging the wholly focused image to form a non-perspective image on real world coordinate axes.
  • the method of oblique view correction further includes a step of composing several of the non-perspective images to form a single image.
  • the method of oblique view correction further includes a step of setting a trimming candidate frame on the single image shown on a display module.
  • the step of estimating plane parameters from the depth image includes a step of estimating a normal vector of a plane from the depth image.
  • the step of estimating plane parameters from the depth image further includes a step of estimating a perspective vertical coordinate axis and a perspective horizontal coordinate axis from data of IMU.
  • the step of calculating focal distance data from the depth image includes a step of determining several focal distances so that areas of depth of field at these focal distances overlap to cover the whole of the color image.
  • the step of calculating focal distance data from the depth image further includes a step of determining whether each area of depth of field has texture.
  • the image sensing module includes a camera module for sensing color images, and a depth sensing module for sensing depth images.
  • the image sensing module further includes an image processor configured to control the camera module and the depth sensing module.
  • the camera module includes a lens module, an image sensor, an image sensor driver, a focus and optical image stabilization (OIS) driver, a focus and OIS actuator, and a gyro sensor.
  • the image sensor driver is configured to control the image sensor to capture images.
  • the focus and OIS driver is configured to control the focus and OIS actuator to focus the lens module and to move the lens module to compensate for hand shake.
  • the gyro sensor is configured to provide motion data to the focus and OIS driver.
  • the depth sensing module includes a projector, a lens, a range sensor, and a range sensor driver.
  • the range sensor driver is configured to control the projector to project dot matrix pulse light, and to control the range sensor to capture a reflected dot matrix image focused by the lens.
  • the method of oblique view correction further includes a step of providing a memory configured to record programs, image data, plane parameters and a translation matrix.
  • the depth image includes point cloud data.
  • the method of oblique view correction further includes a step of providing an input module configured to receive a human instruction, a codec configured to compress and decompress multimedia data, a speaker and a microphone connected to the codec, a wireless communication module configured to transmit and receive messages, and a global navigation satellite system (GNSS) module configured to provide positioning information.
  • GNSS global navigation satellite system
  • embodiments of the present disclosure provide a user equipment (UE) and a method of oblique view correction that enable users to obtain a single image without perspective distortion.
  • UE user equipment
  • FIG. 1 is a schematic application diagram of a prior art user equipment.
  • FIG. 2 is a schematic diagram of a user equipment (UE) according to an embodiment of the present disclosure.
  • FIG. 3 is a flowchart illustrating a method of oblique view correction according to an embodiment of the present disclosure.
  • FIG. 4 is a flowchart illustrating steps of estimating plane parameters from a depth image according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart illustrating steps of calculating focal distance data from the depth image according to an embodiment of the present disclosure.
  • FIG. 6 is a flowchart illustrating steps of adjusting a wholly focused image to a non-perspective image according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of a step of capturing a color image, an infrared (IR) image, and a depth image according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of steps of estimating plane parameters from the depth image according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a step of calculating focal distance data from the depth image according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of describing relations between a focus position, a focus distance, depth of field (DOF) , and an area of DOF according to an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a step of capturing partially focused images at focal distances from the focal distance data according to an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram of a step of cutting focused image data from the partially focused images and composing these focused image data to form a wholly focused image according to an embodiment of the present disclosure.
  • FIG. 13 is a schematic diagram of a step of adjusting the wholly focused image to a non-perspective image according to an embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram of a step of composing several of the non-perspective images to form a single image according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic diagram of a step of setting a trimming candidate frame on the single image shown on a display module according to an embodiment of the present disclosure.
  • a user equipment (UE) 100 includes an image sensing module 10, and a processor 20 coupled to the image sensing module 10.
  • the processor 20 is configured to control the image sensing module 10 to capture a color image C_I, an infrared (IR) image IR_I, and a depth image D_I, estimate plane parameters from the depth image D_I, calculate focal distance data from the depth image D_I, control the image sensing module 10 to capture partially focused images PF_I at these focal distances from the focal distance data, and cut focused image data from the partially focused images PF_I and compose these focused image data to form a wholly focused image WF_I.
  • IR infrared
  • the processor 20 of the UE 100 is configured to control the image sensing module 10 to capture a color image C_I, an infrared (IR) image IR_I, and a depth image D_I for a left side of a target, and to capture a color image C_I’, an infrared (IR) image IR_I’, and a depth image D_I’ for a right side of the target, for example.
  • IR infrared
  • D_I depth image
  • the method of estimating plane parameters from the depth image D_I includes a step of estimating a normal vector N_V of a plane from the depth image D_I.
  • the UE 100 estimates a normal vector N_V of a plane from the depth image D_I, and a normal vector N_V’ of a plane from the depth image D_I’.
  • the UE further includes an inertial measurement unit (IMU) 40.
  • the method of estimating plane parameters from the depth image D_I further includes a step of estimating a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA from data of IMU 40.
  • the UE 100 estimates a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA of depth image D_I from data of IMU 40 and estimates a perspective vertical coordinate axis PV_CA’, and a perspective horizontal coordinate axis PH_CA’ of depth image D_I’ from data of IMU 40.
  • the method of calculating focal distance data from the depth image D_I includes a step of determining several focal distances FD_1 to FD_4 so that areas of depth of field DF_A1 to DF_A4 corresponding to these focal distances FD_1 to FD_4 overlap to cover the whole of the color image C_I.
  • a focus position F_1 of UE 100 has a focus distance FD_1, and depth of field DF_1.
  • An intersection area of depth of field DF_1 on a target is an area of depth of field DF_A1.
  • the area of depth of field DF_A1 can be calculated from the depth image D_I data.
  • a focus position F_2 of UE 100 has a focus distance FD_2, and depth of field DF_2.
  • An intersection area of depth of field DF_2 on the target is an area of depth of field DF_A2.
  • a focus position F_3 of UE 100 has a focus distance FD_3, and depth of field DF_3.
  • An intersection area of depth of field DF_3 on the target is an area of depth of field DF_A3.
  • a focus position F_4 of UE 100 has a focus distance FD_4, and depth of field DF_4.
  • An intersection area of depth of field DF_4 on the target is an area of depth of field DF_A4.
  • the UE 100 determines several focal positions F_1 to F_4 so that areas of depth of field DF_A1 to DF_A4 corresponding to these focal positions F_1 to F_4 overlap to cover the whole of the color image C_I.
  • the UE 100 determines several focal positions F_1’ to F_4’ so that areas of depth of field DF_A1’ to DF_A4’ corresponding to these focal positions F_1’ to F_4’ overlap to cover the whole of the color image C_I’.
  • the method of calculating focal distance data from the depth image D_I further includes a step of determining whether each area of depth of field DF_A1 to DF_A4 has texture.
  • the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2 at the focal distance FD_2 from the focal distance data and partially focused images PF_I3 at the focal distance FD_3 from the focal distance data, cuts focused image data from the partially focused images PF_I2 and from the partially focused images PF_I3, and composes these focused image data to form a wholly focused image WF_I.
  • the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2’ and partially focused images PF_I3’, cuts focused image data from the partially focused images PF_I2’ and from the partially focused images PF_I3’, and composes these focused image data to form a wholly focused image WF_I’.
  • the processor 20 is configured to adjust the wholly focused image WF_I to a non-perspective image NP_I.
  • the method of adjusting the wholly focused image WF_I to a non-perspective image NP_I includes steps of estimating coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I, and dragging the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA.
  • the UE 100 estimates coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I and then drags the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA.
  • the UE 100 provides a translation matrix from the perspective coordinate axes P_CA to the real world coordinate axes R_CA.
  • the wholly focused image WF_I is translated to the non-perspective image NP_I by multiplying data of the wholly focused image WF_I with the translation matrix.
  • the UE 100 estimates coordinate data of four corners C1’ to C4’ of the wholly focused image WF_I’ on perspective coordinate axes P_CA’ calculated from the depth image D_I’ and then drags the wholly focused image WF_I’ to form a non-perspective image NP_I’ on real world coordinate axes R_CA.
  • the processor 20 is configured to compose several of the non-perspective images NP_I to form a single image S_I.
  • the processor 20 is configured to compose non-perspective images NP_I, and NP_I’ to form a single image S_I.
  • the UE 100 further includes a display module 30, and the processor 20 is configured to set a trimming candidate frame TC_F on the single image S_I shown on the display module 30.
  • the image sensing module 10 includes a camera module 11 for sensing color images C_I, and a depth sensing module 12 for sensing depth images D_I.
  • the image sensing module 10 further includes an image processor 13 configured to control the camera module 11, and the depth sensing module 12.
  • please refer to FIG. 15; the UE 100 further includes a display module 30, and the processor 20 is configured to set a trimming candidate frame TC_F on the single image S_I shown on the display module 30.
  • the camera module 11 includes a lens module 111, an image sensor 112, an image sensor driver 113, a focus and optical image stabilization (OIS) driver 114, a focus and OIS actuator 115, and a gyro sensor 116.
  • the image sensor driver 113 is configured to control the image sensor 112 to capture images.
  • the focus and OIS driver 114 is configured to control the focus and OIS actuator 115 to focus the lens module 111 and to move the lens module 111 to compensate for hand shake.
  • the gyro sensor 116 is configured to provide motion data to the focus and OIS driver 114.
  • the depth sensing module 12 includes a projector 124, a lens 121, a range sensor 122, and a range sensor driver 123.
  • the range sensor driver 123 is configured to control the projector 124 to project dot matrix pulse light, and to control the range sensor 122 to capture a reflected dot matrix image focused by the lens 121.
  • the UE 100 further includes a memory 50 configured to record programs, image data, the plane parameters, and the translation matrix.
  • the depth image D_I includes point cloud data.
  • the UE 100 further includes an input module 60 configured to receive a human instruction, a codec 70 configured to compress and decompress multimedia data, a speaker 80, and a microphone 90 connected to the codec 70, a wireless communication module 91 configured to transmit and receive messages, and a global navigation satellite system (GNSS) module 92 configured to provide positioning information.
  • GNSS global navigation satellite system
  • a method of oblique view correction includes: at block S100, capturing a color image C_I, an infrared (IR) image IR_I, and a depth image D_I by an image sensing module 10; at block S200, estimating plane parameters from the depth image D_I; at block S300, calculating focal distance data from the depth image D_I; at block S400, capturing partially focused images PF_I at these focal distances FD_1 to FD_4 from the focal distance data by the image sensing module 10; and at block S500, cutting focused image data from the partially focused images PF_I and composing these focused image data to form a wholly focused image WF_I.
  • IR infrared
  • the processor 20 of the UE 100 is configured to control the image sensing module 10 to capture a color image C_I, an infrared (IR) image IR_I, and a depth image D_I for a left side of a target, and to capture a color image C_I’, an infrared (IR) image IR_I’, and a depth image D_I’ for a right side of the target, for example.
  • the step of estimating plane parameters from the depth image at block S200 includes a step of: at block S210, estimating a normal vector N_V of a plane from the depth image D_I.
  • the UE 100 estimates a normal vector N_V of a plane from the depth image D_I, and a normal vector N_V’ of a plane from the depth image D_I’.
  • the step of estimating plane parameters from the depth image at block S200 further includes a step of: at block S220, estimating a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA from data of IMU 40.
  • the UE 100 estimates a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA of depth image D_I from data of IMU 40 and estimates a perspective vertical coordinate axis PV_CA’, and a perspective horizontal coordinate axis PH_CA’ of depth image D_I’ from data of IMU 40.
  • the step of calculating focal distance data from the depth image at block S300 includes a step of: at block S310, determining several focal distances FD_1 to FD_4 so that areas of depth of field DF_A1 to DF_A4 corresponding to these focal distances FD_1 to FD_4 overlap to cover the whole of the color image C_I.
  • a focus position F_1 of UE 100 has a focus distance FD_1, and depth of field DF_1.
  • An intersection area of depth of field DF_1 on a target is an area of depth of field DF_A1.
  • the area of depth of field DF_A1 can be calculated from the depth image D_I data.
  • a focus position F_2 of UE 100 has a focus distance FD_2, and depth of field DF_2.
  • An intersection area of depth of field DF_2 on the target is an area of depth of field DF_A2.
  • a focus position F_3 of UE 100 has a focus distance FD_3, and depth of field DF_3.
  • An intersection area of depth of field DF_3 on the target is an area of depth of field DF_A3.
  • a focus position F_4 of UE 100 has a focus distance FD_4, and depth of field DF_4.
  • An intersection area of depth of field DF_4 on the target is an area of depth of field DF_A4.
  • the UE 100 determines several focal positions F_1 to F_4 so that areas of depth of field DF_A1 to DF_A4 corresponding to these focal positions F_1 to F_4 overlap to cover the whole of the color image C_I.
  • the UE 100 determines several focal positions F_1’ to F_4’ so that areas of depth of field DF_A1’ to DF_A4’ corresponding to these focal positions F_1’ to F_4’ overlap to cover the whole of the color image C_I’.
  • the step of calculating focal distance data from the depth image at block S300 further includes a step of: at block S320, determining whether each area of depth of field DF_A1 to DF_A4 has texture.
  • the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2 at the focal distance FD_2 from the focal distance data and partially focused images PF_I3 at the focal distance FD_3 from the focal distance data, cuts focused image data from the partially focused images PF_I2 and from the partially focused images PF_I3, and composes these focused image data to form a wholly focused image WF_I.
  • the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2’ and partially focused images PF_I3’, cuts focused image data from the partially focused images PF_I2’ and from the partially focused images PF_I3’, and composes these focused image data to form a wholly focused image WF_I’.
  • the method of oblique view correction further includes a step of: at block S600, adjusting the wholly focused image WF_I to a non-perspective image NP_I.
  • the step of adjusting the wholly focused image WF_I to a non-perspective image NP_I at block S600 further includes steps of: at block S610, estimating coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I, and at block S620, dragging the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA.
  • the UE 100 estimates coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I and then drags the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA.
  • the UE 100 provides a translation matrix from the perspective coordinate axes P_CA to the real world coordinate axes R_CA.
  • the wholly focused image WF_I is translated to the non-perspective image NP_I by multiplying data of the wholly focused image WF_I with the translation matrix.
  • the UE 100 estimates coordinate data of four corners C1’ to C4’ of the wholly focused image WF_I’ on perspective coordinate axes P_CA’ calculated from the depth image D_I’ and then drags the wholly focused image WF_I’ to form a non-perspective image NP_I’ on real world coordinate axes R_CA.
  • the method of oblique view correction further includes a step of: at block S700, composing several of the non-perspective images NP_I to form a single image S_I.
  • the processor 20 is configured to compose non-perspective images NP_I, and NP_I’ to form a single image S_I.
  • the method of oblique view correction further includes a step of: at block S800, setting a trimming candidate frame TC_F on the single image S_I shown on a display module 30.
  • the image sensing module 10 includes a camera module 11 for sensing color images C_I, and a depth sensing module 12 for sensing depth images D_I.
  • the image sensing module 10 further includes an image processor 13 configured to control the camera module 11, and the depth sensing module 12.
  • the camera module 11 includes a lens module 111, an image sensor 112, an image sensor driver 113, a focus and optical image stabilization (OIS) driver 114, a focus and OIS actuator 115, and a gyro sensor 116.
  • the image sensor driver 113 is configured to control the image sensor 112 to capture images.
  • the focus and OIS driver 114 is configured to control the focus and OIS actuator 115 to focus the lens module 111 and to move the lens module 111 to compensate for hand shake.
  • the gyro sensor 116 is configured to provide motion data to the focus and OIS driver 114.
  • the depth sensing module 12 includes a projector 124, a lens 121, a range sensor 122, and a range sensor driver 123.
  • the range sensor driver 123 is configured to control the projector 124 to project dot matrix pulse light, and to control the range sensor 122 to capture a reflected dot matrix image focused by the lens 121.
  • the method of oblique view correction further includes a step of providing a memory 50 configured to record programs, image data, plane parameters, and a translation matrix.
  • the depth image D_I includes point cloud data.
  • the method of oblique view correction further includes a step of providing an input module 60 configured to receive a human instruction, a codec 70 configured to compress and decompress multimedia data, a speaker 80, and a microphone 90 connected to the codec 70, a wireless communication module 91 configured to transmit and receive messages, and a global navigation satellite system (GNSS) module 92 configured to provide positioning information
  • GNSS global navigation satellite system
  • Benefits of the method of oblique view correction include: 1. providing a single, wholly focused image without perspective distortion. 2. providing a single picture of a target object with a horizontal width greater than a width of a shooting area of a camera.
  • the UE and the method of oblique view correction are provided.
  • the method of oblique view correction of the UE includes capturing a color image, an infrared image, and a depth image by an image sensing module, estimating plane parameters from the depth image, calculating focal distance data from the depth image, capturing partially focused images at these focal distances from the focal distance data by the image sensing module, and cutting focused image data from the partially focused images and composing these focused image data to form a wholly focused image, so as to provide a single focused image without perspective distortion.
  • the disclosed system, device, and method in the embodiments of the present disclosure can be realized in other ways.
  • the above-mentioned embodiments are exemplary only.
  • the division of the units is merely based on logical functions, while other divisions may exist in realization. A plurality of units or components may be combined or integrated into another system. It is also possible that some characteristics are omitted or skipped.
  • the displayed or discussed mutual coupling, direct coupling, or communicative coupling may operate through ports, devices, or units, whether indirectly or communicatively, in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated.
  • the units shown for display may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units are used according to the purposes of the embodiments. Moreover, each of the functional units in each of the embodiments can be integrated into one processing unit, be physically independent, or be integrated into one processing unit together with two or more other units. If the software functional unit is realized, used, and sold as a product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution proposed by the present disclosure can be realized, essentially or partially, in the form of a software product. Alternatively, the part of the technical solution that is beneficial over the conventional technology can be realized in the form of a software product.
  • the software product in the computer is stored in a storage medium, including a plurality of commands for a computational device (such as a personal computer, a server, or a network device) to run all or some of the steps disclosed by the embodiments of the present disclosure.
  • the storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM) , a random-access memory (RAM) , a floppy disk, or other kinds of media capable of storing program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A user equipment (UE) and a method of oblique view correction are provided. The method of oblique view correction includes capturing a color image, an infrared image, and a depth image by an image sensing module, estimating plane parameters from the depth image, calculating focal distance data from the depth image, capturing partially focused images at these focal distances from the focal distance data by the image sensing module, and cutting focused image data from the partially focused images and composing these focused image data to form a wholly focused image.

Description

USER EQUIPMENT AND METHOD OF OBLIQUE VIEW CORRECTION
BACKGROUND OF DISCLOSURE
1. Field of Disclosure
The present disclosure relates to the field of image processing technologies, and more particularly, to a user equipment (UE) , and a method of oblique view correction.
2. Description of Related Art
Please refer to FIG. 1; in current technology, if a user takes a picture of an obliquely oriented plane surface having a clear contour using a user equipment 1, such as a cell phone, the user equipment 1 will correct an image of the obliquely oriented plane surface to a shape without distortion. However, if the obliquely oriented plane surface has an unclear contour, only a part of it is in a shooting area 2, or the surface has a horizontal width greater than the width of the shooting area 2, the user cannot obtain a single focused image of the whole target without perspective distortion.
US patent no. 6449004B1 discloses an electronic camera with oblique view correction. It discloses an electronic camera with an image pickup device for photoelectrically picking up a light image of an object to generate image data. An oblique angle information provider is configured for providing information on an oblique angle between a sensing surface of the image pickup device and a surface of the object. A distance detector is configured for detecting a distance to the object. A corrector is configured to correct the generated image data, based on the provided oblique angle information and the detected distance, so as to produce a pseudo object image whose surface resides on a plane parallel to the sensing surface of the image pickup device.
US patent no. US7365301B2 discloses a three-dimensional shape detecting device, an image capturing device, and a three-dimensional shape detecting program. It discloses a three-dimensional shape detecting device comprising projection means which projects pattern light, image capturing means which captures a pattern light projection image of a subject on which the pattern light is projected, and a three-dimensional shape calculation means which calculates a three-dimensional shape of the subject based on a locus of the pattern light extracted from the pattern light projection image.
US patent no. US7711259B2 discloses a method and an apparatus for increasing depth of field for an imager. It discloses that the imager captures a plurality of images at respective different focus positions, combines the images into one image, and sharpens the one image. In an alternative exemplary embodiment, a single image is captured while the focus position changes during image capture, and the resulting image is sharpened.
European patent application no. EP0908847A2 discloses an image synthesis apparatus and an image synthesis method. It discloses that the image synthesis apparatus employs stored image information to generate coordinate transformation parameters that are used to set a positional relationship for selected images, changes the generated coordinate transformation parameters by using an arbitrary image as a reference position, provides the resultant coordinate transformation parameters as image synthesis information, and synthesizes the images in accordance with the image synthesis information.
There is no existing technology for obtaining a single image without perspective distortion using a camera whose field of view is narrower than the subject.
There is therefore still a need to provide a user equipment and a method of oblique view correction that enable users to obtain a single image without perspective distortion.
SUMMARY
An object of the present disclosure is to propose a user equipment (UE) and a method of oblique view correction that enable users to obtain a single image without perspective distortion.
In a first aspect of the present disclosure, a user equipment (UE) includes an image sensing module and a processor coupled to the image sensing module. The processor is configured to control the image sensing module to capture a color image, an infrared (IR) image and a depth image, estimate plane parameters from the depth image, calculate focal distance data from the depth image, control the image sensing module to capture partially focused images at focal distances from the focal distance data, and cut focused image data from the partially focused images and compose these focused image data to form a wholly focused image.
In the embodiment of the present disclosure, the processor is configured to adjust the wholly focused image to a non-perspective image.
In the embodiment of the present disclosure, the method of adjusting the wholly focused image to a non-perspective image includes steps of estimating coordinate data of four corners of the wholly focused image on perspective coordinate axes calculated from the depth image and dragging the wholly focused image to form a non-perspective image on real world coordinate axes.
In the embodiment of the present disclosure, the processor is configured to compose several of the non-perspective images to form a single image.
In the embodiment of the present disclosure, the UE further includes a display module, and the processor is configured to set a trimming candidate frame on the single image shown on the display module.
In the embodiment of the present disclosure, the method of estimating plane parameters from the depth image includes a step of estimating a normal vector of a plane from the depth image.
In the embodiment of the present disclosure, the UE further includes an inertial measurement unit (IMU) . The method of estimating plane parameters from the depth image further includes a step of estimating a perspective vertical coordinate axis and a perspective horizontal coordinate axis from data of IMU.
In the embodiment of the present disclosure, the method of calculating focal distance data from the depth image includes a step of determining several focal distances so that areas of depth of field at these focal distances overlap to cover the whole of the color image.
In the embodiment of the present disclosure, the method of calculating focal distance data from the depth image further includes a step of determining whether each area of depth of field has texture.
In the embodiment of the present disclosure, the image sensing module includes a camera module for sensing color images, and a depth sensing module for sensing depth images.
In the embodiment of the present disclosure, the image sensing module further includes an image processor configured to control the camera module and the depth sensing module.
In the embodiment of the present disclosure, the camera module includes a lens module, an image sensor, an image sensor driver, a focus and optical image stabilization (OIS) driver, a focus and OIS actuator, and a gyro sensor. The image sensor driver is configured to control the image sensor to capture images. The focus and OIS driver is configured to control the focus and OIS actuator to focus the lens module and to move the lens module to compensate for hand shake. The gyro sensor is configured to provide motion data to the focus and OIS driver.
In the embodiment of the present disclosure, the depth sensing module includes a projector, a lens, a range sensor, and a range sensor driver. The range sensor driver is configured to control the projector to project dot matrix pulse light, and to control the range sensor to capture a reflected dot matrix image focused by the lens.
In the embodiment of the present disclosure, the UE further includes a memory configured to record programs, the image data, the plane parameters and a translation matrix.
In the embodiment of the present disclosure, the depth image includes point cloud data.
In the embodiment of the present disclosure, the UE further includes an input module configured to receive a human instruction, a codec configured to compress and decompress multimedia data, a speaker and a microphone connected to the codec, a wireless communication module configured to transmit and receive messages, and a global navigation satellite system (GNSS) module configured to provide positioning information.
In a second aspect of the present disclosure, a method of oblique view correction includes capturing a color image, an infrared (IR) image and a depth image by an image sensing module, estimating plane parameters from the depth image, calculating focal distance data from the depth image, capturing partially focused images at these focal distances from the focal distance data by the image sensing module, and cutting focused image data from the partially focused images and composing these focused image data to form a wholly focused image.
In the embodiment of the present disclosure, the method of oblique view correction further includes a step of adjusting the wholly focused image to a non-perspective image.
In the embodiment of the present disclosure, the step of adjusting the wholly focused image to a non-perspective image further includes steps of estimating coordinate data of four corners of the wholly focused image on perspective coordinate axes calculated from the depth image and dragging the wholly focused image to form a non-perspective image on real world coordinate axes.
In the embodiment of the present disclosure, the method of oblique view correction further includes a step of composing several of the non-perspective images to form a single image.
In the embodiment of the present disclosure, the method of oblique view correction further includes a step of setting a trimming candidate frame on the single image shown on a display module.
In the embodiment of the present disclosure, the step of estimating plane parameters from the depth image includes a step of estimating a normal vector of a plane from the depth image.
In the embodiment of the present disclosure, the step of estimating plane parameters from the depth image further includes a step of estimating a perspective vertical coordinate axis and a perspective horizontal coordinate axis from data of IMU.
In the embodiment of the present disclosure, the step of calculating focal distance data from the depth image includes a step of determining several focal distances so that areas of depth of field at these focal distances overlap to cover the whole of the color image.
In the embodiment of the present disclosure, the step of calculating focal distance data from the depth image further includes a step of determining whether each area of depth of field has texture.
In the embodiment of the present disclosure, the image sensing module includes a camera module for sensing color images, and a depth sensing module for sensing depth images.
In the embodiment of the present disclosure, the image sensing module further includes an image processor configured to control the camera module and the depth sensing module.
In the embodiment of the present disclosure, the camera module includes a lens module, an image sensor, an image sensor driver, a focus and optical image stabilization (OIS) driver, a focus and OIS actuator, and a gyro sensor. The image sensor driver is configured to control the image sensor to capture images. The focus and OIS driver is configured to control the focus and OIS actuator to focus the lens module and to move the lens module to compensate for hand shake. The gyro sensor is configured to provide motion data to the focus and OIS driver.
In the embodiment of the present disclosure, the depth sensing module includes a projector, a lens, a range sensor, and a range sensor driver. The range sensor driver is configured to control the projector to project dot matrix pulse light, and to control the range sensor to capture a reflected dot matrix image focused by the lens.
In the embodiment of the present disclosure, the method of oblique view correction further includes a step of providing a memory configured to record programs, image data, plane parameters and a translation matrix.
In the embodiment of the present disclosure, the depth image includes point cloud data.
In the embodiment of the present disclosure, the method of oblique view correction further includes a step of providing an input module configured to receive a human instruction, a codec configured to compress and decompress multimedia data, a speaker and a microphone connected to the codec, a wireless communication module configured to transmit and receive messages, and a global navigation satellite system (GNSS) module configured to provide positioning information.
Therefore, embodiments of the present disclosure provide a user equipment (UE) and a method of oblique view correction that enable users to obtain a single image without perspective distortion.
BRIEF DESCRIPTION OF DRAWINGS
In order to more clearly illustrate the embodiments of the present disclosure or the related art, the figures used in the description of the embodiments are briefly introduced below. It is obvious that the drawings are merely some embodiments of the present disclosure; a person having ordinary skill in this field can obtain other figures according to these figures without creative effort.
FIG. 1 is a schematic application diagram of a prior art user equipment.
FIG. 2 is a schematic diagram of a user equipment (UE) according to an embodiment of the present disclosure.
FIG. 3 is a flowchart illustrating a method of oblique view correction according to an embodiment of the present disclosure.
FIG. 4 is a flowchart illustrating steps of estimating plane parameters from a depth image according to an embodiment of the present disclosure.
FIG. 5 is a flowchart illustrating steps of calculating focal distance data from the depth image according to an embodiment of the present disclosure.
FIG. 6 is a flowchart illustrating steps of adjusting a wholly focused image to a non-perspective image according to an embodiment of the present disclosure.
FIG. 7 is a schematic diagram of a step of capturing a color image, an infrared (IR) image, and a depth image according to an embodiment of the present disclosure.
FIG. 8 is a schematic diagram of steps of estimating plane parameters from the depth image according to an embodiment of the present disclosure.
FIG. 9 is a schematic diagram of a step of calculating focal distance data from the depth image according to an embodiment of the present disclosure.
FIG. 10 is a schematic diagram of describing relations between a focus position, a focus distance, depth of field (DOF) , and an area of DOF according to an embodiment of the present disclosure.
FIG. 11 is a schematic diagram of a step of capturing partially focused images at focal distances from the focal distance data according to an embodiment of the present disclosure.
FIG. 12 is a schematic diagram of a step of cutting focused image data from the partially focused images and composing these focused image data to form a wholly focused image according to an embodiment of the present disclosure.
FIG. 13 is a schematic diagram of a step of adjusting the wholly focused image to a non-perspective image according to an embodiment of the present disclosure.
FIG. 14 is a schematic diagram of a step of composing several of the non-perspective images to form a single image according to an embodiment of the present disclosure.
FIG. 15 is a schematic diagram of a step of setting a trimming candidate frame on the single image shown on a display module according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiments of the present disclosure are described in detail below, together with their technical matters, structural features, achieved objects, and effects, with reference to the accompanying drawings. Specifically, the terminologies in the embodiments of the present disclosure are merely for describing particular embodiments and are not intended to limit the disclosure.
Please refer to FIGs. 2, and 3; in some embodiments, a user equipment (UE) 100 includes an image sensing module 10, and a processor 20 coupled to the image sensing module 10. The processor 20 is configured to control the image sensing module 10 to capture a color image C_I, an infrared (IR) image IR_I,  and a depth image D_I, estimate plane parameters from the depth image D_I, calculate focal distance data from the depth image D_I, control the image sensing module 10 to capture partially focused images PF_I at these focal distances from the focal distance data, and cut focused image data from the partially focused images PF_I and compose these focused image data to form a wholly focused image WF_I.
In detail, please refer to FIG. 7. The processor 20 of the UE 100 is configured to control the image sensing module 10 to capture a color image C_I, an infrared (IR) image IR_I, and a depth image D_I for a left side of a target, and to capture a color image C_I’, an infrared (IR) image IR_I’, and a depth image D_I’ for a right side of the target, for example.
In some embodiments, please refer to FIGs. 4, and 8; the method of estimating plane parameters from the depth image D_I includes a step of estimating a normal vector N_V of a plane from the depth image D_I.
In detail, please refer to FIG. 8; the UE 100 estimates a normal vector N_V of a plane from the depth image D_I, and a normal vector N_V’ of a plane from the depth image D_I’.
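The disclosure does not spell out how the normal vector N_V is computed. A common way to obtain it from a depth image is to back-project the depth pixels into a 3-D point cloud using the depth sensor's intrinsics and fit a plane by least squares; the sketch below is one such reconstruction and is only an assumption (the intrinsics fx, fy, cx, cy and the SVD-based fit are not taken from the disclosure):

```python
import numpy as np

def plane_normal_from_depth(depth, fx, fy, cx, cy):
    """Fit a plane to the back-projected depth pixels and return its unit normal.

    depth: HxW array of metric depth values (0 marks invalid pixels).
    fx, fy, cx, cy: pinhole intrinsics of the depth sensor (assumed known).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx              # back-project pixels to 3-D points
    y = (v[valid] - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)

    centroid = pts.mean(axis=0)
    # Least-squares plane fit: the normal is the right singular vector with the
    # smallest singular value of the centred point cloud.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal), centroid

# Quick check with a fronto-parallel plane: the normal is (close to) the optical axis.
n, _ = plane_normal_from_depth(np.full((120, 160), 1.5), 200.0, 200.0, 80.0, 60.0)
print(n)   # approximately [0, 0, ±1]
```

The singular vector associated with the smallest singular value is the direction of least spread of the centred points, which coincides with the plane normal when the points are roughly coplanar.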
In some embodiments, please refer to FIGs. 2, 4, and 8; the UE further includes an inertial measurement unit (IMU) 40. The method of estimating plane parameters from the depth image D_I further includes a step of estimating a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA from data of IMU 40.
In detail, please refer to FIG. 8; the UE 100 estimates a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA of depth image D_I from data of IMU 40 and estimates a perspective vertical coordinate axis PV_CA’, and a perspective horizontal coordinate axis PH_CA’ of depth image D_I’ from data of IMU 40.
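As a hedged sketch of how the perspective axes might be derived (the disclosure does not spell out this particular construction), the gravity vector reported by the IMU 40, expressed in the camera frame, can be projected onto the estimated target plane to give a vertical axis, with the horizontal axis taken orthogonal to it within the plane:

    import numpy as np

    def perspective_axes_from_imu(gravity_cam, plane_normal):
        """Derive perspective axes PV_CA / PH_CA for the target plane.

        gravity_cam  : gravity vector from the IMU, expressed in the camera frame
        plane_normal : normal vector N_V estimated from the depth image

        The vertical axis is gravity with its out-of-plane component removed;
        the horizontal axis is taken orthogonal to it within the plane.
        """
        n = plane_normal / np.linalg.norm(plane_normal)
        g = gravity_cam / np.linalg.norm(gravity_cam)
        vertical = g - np.dot(g, n) * n            # project gravity onto the plane
        vertical /= np.linalg.norm(vertical)
        horizontal = np.cross(n, vertical)         # in-plane, orthogonal to vertical
        return vertical, horizontal

    pv_ca, ph_ca = perspective_axes_from_imu(
        np.array([0.1, -9.7, 1.2]), np.array([0.05, 0.10, -0.99]))
    print(pv_ca, ph_ca)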
In some embodiments, please refer to FIGs. 5, and 9; the method of calculating focal distance data from the depth image D_I includes a step of determining several focal distances FD_1 to FD_4 so that areas of depth of field DF_A1 to DF_A4 corresponding to these focal distances FD_1 to FD_4 overlap to cover the whole of the color image C_I.
In detail, please refer to FIGs. 9, and 10; a focus position F_1 of the UE 100 has a focus distance FD_1 and a depth of field DF_1. An intersection area of the depth of field DF_1 on a target is an area of depth of field DF_A1, which can be calculated from the data of the depth image D_I. Meanwhile, a focus position F_2 of the UE 100 has a focus distance FD_2 and a depth of field DF_2, and an intersection area of the depth of field DF_2 on the target is an area of depth of field DF_A2. A focus position F_3 of the UE 100 has a focus distance FD_3 and a depth of field DF_3, and an intersection area of the depth of field DF_3 on the target is an area of depth of field DF_A3. A focus position F_4 of the UE 100 has a focus distance FD_4 and a depth of field DF_4, and an intersection area of the depth of field DF_4 on the target is an area of depth of field DF_A4.
In detail, please refer to FIGs. 5, 9, and 10; the UE 100 determines several focus positions F_1 to F_4 so that the areas of depth of field DF_A1 to DF_A4 corresponding to these focus positions F_1 to F_4 overlap to cover the whole of the color image C_I. Similarly, the UE 100 determines several focus positions F_1’ to F_4’ so that the areas of depth of field DF_A1’ to DF_A4’ corresponding to these focus positions F_1’ to F_4’ overlap to cover the whole of the color image C_I’.
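For illustration, the focal distance data could be planned with the standard thin-lens depth-of-field relations: given an assumed focal length, f-number, and circle of confusion (placeholder phone-camera values below, not figures from the disclosure), consecutive focus distances are chosen so that the far limit of one depth of field meets the near limit of the next until the whole depth range of D_I is covered:

    import numpy as np

    F = 0.004     # focal length in metres   -- placeholder values,
    N_F = 1.8     # f-number                    not figures from the disclosure
    COC = 2e-6    # circle of confusion in metres

    def hyperfocal():
        return F * F / (N_F * COC) + F

    def far_limit(s):
        """Far edge of the depth of field when focused at distance s."""
        H = hyperfocal()
        return float("inf") if s >= H else s * (H - F) / (H - s)

    def focus_for_near_limit(d):
        """Focus distance whose near depth-of-field limit falls exactly at d."""
        H = hyperfocal()
        return d * (H - 2 * F) / (H - F - d)

    def plan_focal_distances(depth_map):
        """Pick focus distances FD_1..FD_n so that consecutive depth-of-field
        areas touch and together cover every depth present in D_I."""
        d, d_max = float(depth_map.min()), float(depth_map.max())
        distances = []
        while d < d_max:
            s = focus_for_near_limit(d)
            distances.append(s)
            d = far_limit(s)
        return distances

    print(plan_focal_distances(np.random.uniform(0.25, 0.60, (480, 640))))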
In some embodiments, please refer to FIG. 5; the method of calculating focal distance data from the depth image D_I further includes a step of determining whether each area of depth of field DF_A1 to DF_A4 has texture.
In detail, please refer to FIGs. 3, 11, and 12; the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2 at the focal distance FD_2 and partially focused images PF_I3 at the focal distance FD_3 from the focal distance data, cuts focused image data from the partially focused images PF_I2 and focused image data from the partially focused images PF_I3, and composes these focused image data to form a wholly focused image WF_I. Similarly, the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2’ and partially focused images PF_I3’, cuts focused image data from the partially focused images PF_I2’ and focused image data from the partially focused images PF_I3’, and composes these focused image data to form a wholly focused image WF_I’.
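A minimal sketch of these two steps is given below: a simple texture test (variance of a Laplacian response) decides which areas of depth of field are worth capturing, and the partially focused captures are then composed by keeping, for each pixel, the sharpest source. The threshold and the sharpness measure are illustrative choices, not taken from the disclosure:

    import numpy as np

    def has_texture(gray_patch, threshold=25.0):
        """Crude texture test for one area of depth of field: variance of a
        4-neighbour Laplacian response (threshold is an illustrative value)."""
        p = gray_patch.astype(np.float32)
        lap = (-4.0 * p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:])
        return float(lap.var()) > threshold

    def sharpness(gray):
        """Per-pixel sharpness (gradient magnitude) of one partially focused image."""
        gy, gx = np.gradient(gray.astype(np.float32))
        return np.abs(gx) + np.abs(gy)

    def compose_wholly_focused(partials):
        """Compose aligned, partially focused images PF_I into a wholly focused
        image WF_I by keeping, for each pixel, the sharpest source image."""
        grays = [p.mean(axis=2) for p in partials]             # H x W x 3 -> H x W
        best = np.stack([sharpness(g) for g in grays]).argmax(axis=0)
        out = np.zeros_like(partials[0])
        for i, img in enumerate(partials):
            mask = best == i
            out[mask] = img[mask]
        return out

    # e.g. capture only the focal distances whose DOF area shows texture:
    #   captures = [img for img, area in zip(pf_images, dof_areas) if has_texture(area)]
    #   wf_image = compose_wholly_focused(captures)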
In some embodiments, please refer to FIG. 3; the processor 20 is configured to adjust the wholly focused image WF_I to a non-perspective image NP_I. In some embodiments, please refer to FIGs. 3, 6, and 13; the method of adjusting the wholly focused image WF_I to a non-perspective image NP_I includes steps of estimating coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I, and dragging the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA.
In detail, the UE 100 estimates coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I and then drags the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA. In detail, the UE 100 provides a translation matrix from the perspective coordinate axes P_CA to the real world coordinate axes R_CA. The wholly focused image WF_I is translated to the non-perspective image NP_I by multiplying data of the wholly focused image WF_I with the translation matrix. Meanwhile, the UE 100 estimates coordinate data of four corners C1’ to C4’ of the wholly focused image WF_I’ on perspective coordinate axes P_CA’ calculated from the depth image D_I’ and then drags the wholly focused image WF_I’ to form a non-perspective image NP_I’ on real world coordinate axes R_CA.
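In the sketch below, the translation matrix from the perspective coordinate axes P_CA to the real world coordinate axes R_CA is modelled as a planar homography computed from the four corner correspondences; using OpenCV to build and apply such a matrix is an assumption of this illustration, not something stated in the disclosure:

    import numpy as np
    import cv2   # OpenCV, used here only to build and apply the 3x3 matrix

    def to_non_perspective(wf_image, corners_px, corners_world, out_size):
        """Warp the wholly focused image WF_I onto real world coordinate axes.

        corners_px    : four corners C1..C4 of WF_I in pixel coordinates
        corners_world : the same corners on the real world axes R_CA, already
                        scaled to the desired output resolution
        out_size      : (width, height) of the non-perspective image NP_I
        """
        src = np.asarray(corners_px, dtype=np.float32)
        dst = np.asarray(corners_world, dtype=np.float32)
        matrix = cv2.getPerspectiveTransform(src, dst)       # 3x3 homography
        np_image = cv2.warpPerspective(wf_image, matrix, out_size)
        return np_image, matrix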
In some embodiments, please refer to FIG. 14; the processor 20 is configured to compose several of the non-perspective images NP_I to form a single image S_I.
In detail, the processor 20 is configured to compose non-perspective images NP_I, and NP_I’ to form a single image S_I.
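Assuming the non-perspective images NP_I and NP_I’ are already rectified onto the same real world coordinate axes and their horizontal overlap is known (how the overlap is estimated is not specified here, so it is passed in as a parameter), composing them reduces to placing them on a shared canvas, as in the following sketch:

    import numpy as np

    def compose_single_image(np_left, np_right, overlap_px):
        """Join two rectified non-perspective images NP_I and NP_I' into the
        single image S_I, given their horizontal overlap in pixels."""
        h = min(np_left.shape[0], np_right.shape[0])
        left, right = np_left[:h], np_right[:h]
        width = left.shape[1] + right.shape[1] - overlap_px
        canvas = np.zeros((h, width, 3), dtype=left.dtype)
        canvas[:, :left.shape[1]] = left
        canvas[:, left.shape[1] - overlap_px:] = right    # right image wins the seam
        return canvas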
In some embodiments, please refer to FIG. 15; the UE 100 further includes a display module 30, and the processor 20 is configured to set a trimming candidate frame TC_F on the single image S_I shown on the display module 30.
In some embodiments, please refer to FIG. 2; the image sensing module 10 includes a camera module 11 for sensing color images C_I, and a depth sensing module 12 for sensing depth images D_I. In some embodiments, the image sensing module 10 further includes an image processor 13 configured to control the camera module 11 and the depth sensing module 12.
In some embodiments, please refer to FIG. 2; the camera module 11 includes a lens module 111, an image sensor 112, an image sensor driver 113, a focus and optical image stabilization (OIS) driver 114, a focus and OIS actuator 115, and a gyro sensor 116. The image sensor driver 113 is configured to control the image sensor 112 to capture images. The focus and OIS driver 114 is configured to control the focus and OIS actuator 115 to focus the lens module 111 and to move the lens module 111 to compensate for hand shake of a human. The gyro sensor 116 is configured to provide motion data to the focus and OIS driver 114.
In some embodiments, please refer to FIG. 2; the depth sensing module 12 includes a projector 124, a lens 121, a range sensor 122, and a range sensor driver 123. The range sensor driver 123 is configured to control the projector 124 to project dot matrix pulse light, and to control the range sensor 122 to capture a reflected dot matrix image focused by the lens 121.
In some embodiments, please refer to FIG. 2; the UE 100 further includes a memory 50 configured to record programs, image data, the plane parameters, and the translation matrix. In some embodiments, the depth image D_I includes point cloud data.
In some embodiments, please refer to FIG. 2; the UE 100 further includes an input module 60 configured to receive a human instruction, a codec 70 configured to compress and decompress multimedia data, a speaker 80 and a microphone 90 connected to the codec 70, a wireless communication module 91 configured to transmit and receive messages, and a global navigation satellite system (GNSS) module 92 configured to provide positioning information.
Further, please refer to FIG. 3; in some embodiments, a method of oblique view correction includes: at block S100, capturing a color image C_I, an infrared (IR) image IR_I, and a depth image D_I by an image sensing module 10; at block S200, estimating plane parameters from the depth image D_I; at block S300, calculating focal distance data from the depth image D_I; at block S400, capturing partially focused images PF_I at the focal distances FD_1 to FD_4 from the focal distance data by the image sensing module 10; and at block S500, cutting focused image data from the partially focused images PF_I and composing these focused image data to form a wholly focused image WF_I.
In detail, please refer to FIGs. 2, and 7; the processor 20 of the UE 100 is configured to control the image sensing module 10 to capture a color image C_I, an infrared (IR) image IR_I, and a depth image D_I for a left side of a target, and to capture a color image C_I’, an infrared (IR) image IR_I’, and a depth image D_I’ for a right side of the target, for example.
In some embodiments, please refer to FIGs. 4, and 8; the step of estimating plane parameters from the depth image at block S200 includes a step of: at block S210, estimating a normal vector N_V of a plane from the depth image D_I.
In detail, please refer to FIG. 8; the UE 100 estimates a normal vector N_V of a plane from the depth image D_I, and a normal vector N_V’ of a plane from the depth image D_I’.
In some embodiments, please refer to FIGs. 2, 4, and 8; the step of estimating plane parameters from the depth image at block S200 further includes a step of: at block S220, estimating a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA from data of IMU 40.
In detail, please refer to FIG. 8; the UE 100 estimates a perspective vertical coordinate axis PV_CA, and a perspective horizontal coordinate axis PH_CA of depth image D_I from data of IMU 40 and estimates a perspective vertical coordinate axis PV_CA’, and a perspective horizontal coordinate axis PH_CA’ of depth image D_I’ from data of IMU 40.
In some embodiments, please refer to FIGs. 5, and 9; the step of calculating focal distance data from the depth image at block S300 includes a step of: at block S310, determining several focal distances FD_1 to FD_4 so that areas of depth of field DF_A1 to DF_A4 corresponding to these focal distances FD_1 to FD_4 overlap to cover the whole of the color image C_I.
In detail, please refer to FIGs. 9, and 10; a focus position F_1 of the UE 100 has a focus distance FD_1 and a depth of field DF_1. An intersection area of the depth of field DF_1 on a target is an area of depth of field DF_A1, which can be calculated from the data of the depth image D_I. Meanwhile, a focus position F_2 of the UE 100 has a focus distance FD_2 and a depth of field DF_2, and an intersection area of the depth of field DF_2 on the target is an area of depth of field DF_A2. A focus position F_3 of the UE 100 has a focus distance FD_3 and a depth of field DF_3, and an intersection area of the depth of field DF_3 on the target is an area of depth of field DF_A3. A focus position F_4 of the UE 100 has a focus distance FD_4 and a depth of field DF_4, and an intersection area of the depth of field DF_4 on the target is an area of depth of field DF_A4.
In detail, please refer to FIGs. 5, 9, and 10; the UE 100 determines several focus positions F_1 to F_4 so that the areas of depth of field DF_A1 to DF_A4 corresponding to these focus positions F_1 to F_4 overlap to cover the whole of the color image C_I. Similarly, the UE 100 determines several focus positions F_1’ to F_4’ so that the areas of depth of field DF_A1’ to DF_A4’ corresponding to these focus positions F_1’ to F_4’ overlap to cover the whole of the color image C_I’.
In some embodiments, the step of calculating focal distance data from the depth image at block S300 further includes a step of: at block S320, determining whether each area of depth of field DF_A1 to DF_A4 has texture.
In detail, please refer to FIGs. 3, 11, and 12; the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2 at the focal distance FD_2 and partially focused images PF_I3 at the focal distance FD_3 from the focal distance data, cuts focused image data from the partially focused images PF_I2 and focused image data from the partially focused images PF_I3, and composes these focused image data to form a wholly focused image WF_I. Similarly, the processor 20 controls the image sensing module 10 to capture partially focused images PF_I2’ and partially focused images PF_I3’, cuts focused image data from the partially focused images PF_I2’ and focused image data from the partially focused images PF_I3’, and composes these focused image data to form a wholly focused image WF_I’.
In some embodiments, please refer to FIG. 3; the method of oblique view correction further includes a step of: at block S600, adjusting the wholly focused image WF_I to a non-perspective image NP_I. In some embodiments, please refer to FIGs. 3, 6, and 13; the step of adjusting the wholly focused image WF_I to a non-perspective image NP_I at block S600 further includes steps of: at block S610, estimating coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I, and at block S620, dragging the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA.
In detail, the UE 100 estimates coordinate data of four corners C1 to C4 of the wholly focused image WF_I on perspective coordinate axes P_CA calculated from the depth image D_I and then drags the wholly focused image WF_I to form a non-perspective image NP_I on real world coordinate axes R_CA. In detail, the UE 100 provides a translation matrix from the perspective coordinate axes P_CA to the real world coordinate axes R_CA. The wholly focused image WF_I is translated to the non-perspective image NP_I by multiplying data of the wholly focused image WF_I with the translation matrix. Meanwhile, the UE 100 estimates coordinate data of four corners C1’ to C4’ of the wholly focused image WF_I’ on perspective coordinate axes P_CA’ calculated from the depth image D_I’ and then drags the wholly focused image WF_I’ to form a non-perspective image NP_I’ on real world coordinate axes R_CA.
In some embodiments, please refer to FIG. 14; the method of oblique view correction further includes a step of: at block S700, composing several of the non-perspective images NP_I to form a single image S_I.
In detail, the processor 20 is configured to compose non-perspective images NP_I, and NP_I’ to form a single image S_I.
In some embodiments, please refer to FIG. 15; the method of oblique view correction further includes a step of: at block S800, setting a trimming candidate frame TC_F on the single image S_I shown on a display module 30. In some embodiments, please refer to FIG. 2; the image sensing module 10 includes a camera module 11 for sensing color images C_I, and a depth sensing module 12 for sensing depth images D_I. In some embodiments, please refer to FIG. 2; the image sensing module 10 further includes an image processor 13 configured to control the camera module 11, and the depth sensing module 12.
In some embodiments, please refer to FIG. 2; the camera module 11 includes a lens module 111, an image sensor 112, an image sensor driver 113, a focus and optical image stabilization (OIS) driver 114, a focus and OIS actuator 115, and a gyro sensor 116. The image sensor driver 113 is configured to control the image sensor 112 to capture images. The focus and OIS driver 114 is configured to control the focus and OIS actuator 115 to focus the lens module 111 and to move the lens module 111 to compensate for hand shake of a human. The gyro sensor 116 is configured to provide motion data to the focus and OIS driver 114.
In some embodiments, please refer to FIG. 2; the depth sensing module 12 includes a projector 124, a lens 121, a range sensor 122, and a range sensor driver 123. The range sensor driver 123 is configured to control the projector 124 to project dot matrix pulse light, and to control the range sensor 122 to capture a reflected dot matrix image focused by the lens 121.
In some embodiments, please refer to FIG. 2; the method of oblique view correction further includes a step of providing a memory 50 configured to record programs, image data, plane parameters, and a translation matrix. In some embodiments, the depth image D_I includes point cloud data. In some embodiments, please refer to FIG. 2; the method of oblique view correction further includes a step of providing an input module 60 configured to receive a human instruction, a codec 70 configured to compress and decompress multimedia data, a speaker 80 and a microphone 90 connected to the codec 70, a wireless communication module 91 configured to transmit and receive messages, and a global navigation satellite system (GNSS) module 92 configured to provide positioning information.
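Taken together, blocks S100 to S800 form a single processing chain. The heavily stubbed Python skeleton below (synthetic capture data; the processing stages are only marked by comments) merely illustrates the order of the blocks and is not an implementation of the disclosed UE:

    import numpy as np

    def capture_color_ir_depth():
        # Stub for block S100: synthetic stand-ins for C_I, IR_I, and D_I.
        color = np.zeros((480, 640, 3), np.uint8)
        ir = np.zeros((480, 640), np.uint8)
        depth = np.random.uniform(0.25, 0.60, (480, 640))   # metres
        return color, ir, depth

    def correct_oblique_view(num_shots=2):
        rectified = []
        for _ in range(num_shots):                 # e.g. left and right of the target
            color, ir, depth = capture_color_ir_depth()      # S100
            # S200: estimate plane parameters (normal vector, perspective axes)
            # S300: calculate focal distance data covering the depth range of D_I
            # S400: capture partially focused images at those focal distances
            # S500: cut and compose them into a wholly focused image WF_I
            # S600: warp WF_I onto real world axes -> non-perspective image NP_I
            rectified.append(color)                # placeholder standing in for NP_I
        single_image = np.concatenate(rectified, axis=1)     # S700: compose S_I
        # S800: a trimming candidate frame TC_F would then be set on single_image
        return single_image

    print(correct_oblique_view().shape)            # (480, 1280, 3)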
Benefits of the method of oblique view correction include: (1) providing a single, wholly focused image without perspective distortion; and (2) providing a single picture of a target object whose horizontal width is greater than the width of the shooting area of the camera.
In the embodiments of the present disclosure, the UE and the method of oblique view correction are provided. The method of oblique view correction of the UE includes capturing a color image, an infrared image, and a depth image by an image sensing module, estimating plane parameters from the depth image, calculating focal distance data from the depth image, capturing partially focused images at the focal distances of the focal distance data by the image sensing module, and cutting focused image data from the partially focused images and composing these focused image data to form a wholly focused image, so as to provide a single focused image without perspective distortion.
A person having ordinary skill in the art understands that each of the units, algorithms, and steps described in the embodiments of the present disclosure can be realized by electronic hardware or by a combination of computer software and electronic hardware. Whether the functions are implemented in hardware or in software depends on the application conditions and the design requirements of the technical solution. A person having ordinary skill in the art can use different ways to realize each function for each specific application, and such realizations do not go beyond the scope of the present disclosure.
It is understood by a person having ordinary skill in the art that, since the working processes of the system, device, and unit described above are basically the same as those of the above-mentioned embodiments, reference can be made to those embodiments; for ease and simplicity of description, these working processes are not detailed here.
It is understood that the disclosed system, device, and method in the embodiments of the present disclosure can be realized in other ways. The above-mentioned embodiments are exemplary only. The division of the units is merely based on logical functions, and other divisions may exist in realization. A plurality of units or components may be combined or integrated into another system, and some features may be omitted or skipped. On the other hand, the displayed or discussed mutual coupling, direct coupling, or communicative coupling may operate indirectly or communicatively through some ports, devices, or units, in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the units shown for display may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units are used according to the purposes of the embodiments. Moreover, each functional unit in each of the embodiments may be integrated into one processing unit, may exist as a physically independent unit, or two or more units may be integrated into one processing unit.
If the software functional unit is realized, used, and sold as a product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution proposed by the present disclosure can be essentially, or in part, realized in the form of a software product; alternatively, the part of the technical solution that is beneficial over the conventional technology can be realized in the form of a software product. The software product is stored in a storage medium and includes a plurality of commands enabling a computational device (such as a personal computer, a server, or a network device) to run all or some of the steps disclosed by the embodiments of the present disclosure. The storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM), a random-access memory (RAM), a floppy disk, or other kinds of media capable of storing program codes.
While the present disclosure has been described in connection with what is considered the most practical and preferred embodiments, it is understood that the present disclosure is not limited to the disclosed embodiments but is intended to cover various arrangements made without departing from the scope of the broadest interpretation of the appended claims.

Claims (32)

  1. A user equipment (UE) , comprising:
    an image sensing module; and
    a processor coupled to the image sensing module,
    wherein the processor is configured to:
    control the image sensing module to capture a color image, an infrared (IR) image, and a depth image;
    estimate plane parameters from the depth image;
    calculate focal distance data from the depth image;
    control the image sensing module to capture partially focused images at these focal distances from the focal distance data; and
    cut focused image data from the partially focused images and compose these focused image data to form a wholly focused image.
  2. The UE of claim 1, wherein the processor is configured to adjust the wholly focused image to a non-perspective image.
  3. The UE of claim 2, wherein to adjust the wholly focused image to a non-perspective image comprises to estimate coordinate data of four corners of the wholly focused image on perspective coordinate axes calculated from the depth image, and to drag the wholly focused image to form a non-perspective image on real world coordinate axes.
  4. The UE of claim 2, wherein the processor is configured to compose several of the non-perspective images to form a single image.
  5. The UE of claim 4 further comprising a display module, and the processor is configured to set a trimming candidate frame on the single image shown on the display module.
  6. The UE of claim 1, wherein to estimate plane parameters from the depth image comprises to estimate a normal vector of a plane from the depth image.
  7. The UE of claim 6, further comprising an inertial measurement unit (IMU) , wherein to estimate plane parameters from the depth image further comprises to estimate a perspective vertical coordinate axis and a perspective horizontal coordinate axis from data of IMU.
  8. The UE of claim 1, wherein to calculate focal distance data from the depth image comprises to determine several focal distances so that areas of depth of field at these focal distances are overlapping to cover whole of the color image.
  9. The UE of claim 8, wherein to calculate focal distance data from the depth image further comprises to determine whether each area of depth of field has a texture.
  10. The UE of any one of claims 1 to 9, wherein the image sensing module comprises a camera module for sensing color images, and a depth sensing module for sensing depth images.
  11. The UE of claim 10, wherein the image sensing module further comprises an image processor configured to control the camera module, and the depth sensing module.
  12. The UE of claim 10, wherein the camera module comprises a lens module, an image sensor, an image sensor driver, a focus and optical image stabilization (OIS) driver, a focus and OIS actuator, and a gyro sensor, the image sensor driver is configured to control the image sensor to capture images, the focus and OIS driver is configured to control the focus and OIS actuator to focus the lens module, and to move the lens module for compensating for hand shake of a human, and the gyro sensor is configured for providing motion data to the focus and OIS driver.
  13. The UE of claim 10, wherein the depth sensing module comprises a projector, a lens, a range sensor, and a range sensor driver, the range sensor driver is configured to control the projector to project dot matrix pulse light, and control the range sensor to capture a reflected dot matrix image focused by the lens.
  14. The UE of any one of claims 1 to 13, further comprising a memory configured to record programs, image data, plane parameters, and a translation matrix.
  15. The UE of any one of claims 1 to 14, wherein the depth image comprises point cloud data.
  16. The UE of any one of claims 1 to 15, further comprising an input module configured to receive a human instruction, a codec configured to compress and decompress multimedia data, a speaker and a microphone connected to the codec, a wireless communication module configured to transmit and receive messages, and a global navigation satellite system (GNSS) module configured to provide positioning information.
  17. A method of oblique view correction, comprising:
    capturing a color image, an infrared (IR) image, and a depth image by an image sensing module;
    estimating plane parameters from the depth image;
    calculating focal distance data from the depth image;
    capturing partially focused images at these focal distances from the focal distance data by the image sensing module; and
    cutting focused image data from the partially focused images, and composing these focused image data to form a wholly focused image.
  18. The method of oblique view correction of claim 17, further comprising a step of adjusting the wholly focused image to a non-perspective image.
  19. The method of oblique view correction of claim 18, wherein the step of adjusting the wholly focused image to a non-perspective image further comprises steps of estimating coordinate data of four corners of the wholly focused image on perspective coordinate axes calculated from the depth image and dragging the wholly focused image to form a non-perspective image on real world coordinate axes.
  20. The method of oblique view correction of claim 18, further comprising a step of composing several of the non-perspective images to form a single image.
  21. The method of oblique view correction of claim 20, further comprising a step of setting a trimming candidate frame on the single image shown on a display module.
  22. The method of oblique view correction of claim 17, wherein the step of estimating plane parameters from the depth image comprises a step of estimating a normal vector of a plane from the depth image.
  23. The method of oblique view correction of claim 17, wherein the step of estimating plane parameters from the depth image further comprises a step of estimating a perspective vertical coordinate axis, and a perspective horizontal coordinate axis from data of an inertial measurement unit (IMU).
  24. The method of oblique view correction of claim 17, wherein the step of calculating focal distance data from the depth image comprises a step of determining several focal distances so that areas of depth of field at these focal distances are overlapping to cover whole of the color image.
  25. The method of oblique view correction of claim 24, wherein the step of calculating focal distance data from the depth image further comprises a step of determining whether each area of depth of field has texture.
  26. The method of oblique view correction of any one of claims 17 to 25, wherein the image sensing module comprises a camera module for sensing color images, and a depth sensing module for sensing depth images.
  27. The method of oblique view correction of claim 26, wherein the image sensing module further comprises an image processor configured to control the camera module, and the depth sensing module.
  28. The method of oblique view correction of claim 26, wherein the camera module comprises a lens module, an image sensor, an image sensor driver, a focus and optical image stabilization (OIS) driver, a focus and OIS actuator, and a gyro sensor, the image sensor driver is configured to control the image sensor to capture images, the focus and OIS driver is configured to control the focus and OIS actuator to focus the lens module and to move the lens module for compensating for hand shake of a human, and the gyro sensor is configured for providing motion data to the focus and OIS driver.
  29. The method of oblique view correction of claim 26, wherein the depth sensing module comprises a projector, a lens, a range sensor, and a range sensor driver, the range sensor driver is configured to control the projector to project dot matrix pulse light, and control the range sensor to capture a reflected dot matrix image focused by the lens.
  30. The method of oblique view correction of any one of claims 17 to 29, further comprising a step of providing a memory configured to record programs, the image data, the plane parameters, and a translation matrix.
  31. The method of oblique view correction of any one of claims 17 to 30, wherein the depth image comprises point cloud data.
  32. The method of oblique view correction of any one of claims 17 to 31, further comprising a step of providing an input module configured to receive a human instruction, a codec configured to compress and decompress multimedia data, a speaker and a microphone connected to the codec, a wireless communication module configured to transmit and receive messages, and a global navigation satellite system (GNSS) module configured to provide positioning information.
PCT/CN2019/088417 2019-05-24 2019-05-24 User equipment and method of oblique view correction WO2020237441A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/088417 WO2020237441A1 (en) 2019-05-24 2019-05-24 User equipment and method of oblique view correction
CN201980096453.XA CN113826376B (en) 2019-05-24 2019-05-24 User equipment and strabismus correction method
JP2021568633A JP7346594B2 (en) 2019-05-24 2019-05-24 User equipment and strabismus correction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/088417 WO2020237441A1 (en) 2019-05-24 2019-05-24 User equipment and method of oblique view correction

Publications (1)

Publication Number Publication Date
WO2020237441A1 true WO2020237441A1 (en) 2020-12-03

Family

ID=73553409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088417 WO2020237441A1 (en) 2019-05-24 2019-05-24 User equipment and method of oblique view correction

Country Status (3)

Country Link
JP (1) JP7346594B2 (en)
CN (1) CN113826376B (en)
WO (1) WO2020237441A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101605208A (en) * 2008-06-13 2009-12-16 富士胶片株式会社 Image processing equipment, imaging device, image processing method and program
US20170041585A1 (en) * 2015-08-06 2017-02-09 Intel Corporation Depth image enhancement for hardware generated depth images
CN106412426A (en) * 2016-09-24 2017-02-15 上海大学 Omni-focus photographing apparatus and method
CN107301665A (en) * 2017-05-03 2017-10-27 中国科学院计算技术研究所 Depth camera and its control method with varifocal optical camera
CN108833887A (en) * 2018-04-28 2018-11-16 Oppo广东移动通信有限公司 Data processing method, device, electronic equipment and computer readable storage medium

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3043034B2 (en) * 1990-07-26 2000-05-22 オリンパス光学工業株式会社 Image input / output device
JP3601272B2 (en) * 1997-11-10 2004-12-15 富士ゼロックス株式会社 Imaging device
JP2000236434A (en) * 1999-02-12 2000-08-29 Fuji Xerox Co Ltd Image forming device
EP2313847A4 (en) * 2008-08-19 2015-12-09 Digimarc Corp Methods and systems for content processing
JP4986189B2 (en) 2010-03-31 2012-07-25 カシオ計算機株式会社 Imaging apparatus and program
US8570320B2 (en) * 2011-01-31 2013-10-29 Microsoft Corporation Using a three-dimensional environment model in gameplay
CN103262524B (en) * 2011-06-09 2018-01-05 郑苍隆 Automatic focusedimage system
US8814362B2 (en) * 2011-12-09 2014-08-26 Steven Roger Verdooner Method for combining a plurality of eye images into a plenoptic multifocal image
US9241111B1 (en) * 2013-05-30 2016-01-19 Amazon Technologies, Inc. Array of cameras with various focal distances
CN103824303A (en) * 2014-03-14 2014-05-28 格科微电子(上海)有限公司 Image perspective distortion adjusting method and device based on position and direction of photographed object
CN106033614B (en) * 2015-03-20 2019-01-04 南京理工大学 A kind of mobile camera motion object detection method under strong parallax
CN104867113B (en) * 2015-03-31 2017-11-17 酷派软件技术(深圳)有限公司 The method and system of perspective image distortion correction
JP6522434B2 (en) * 2015-06-08 2019-05-29 オリンパス株式会社 Imaging device, image processing device, control method of imaging device, and image processing program
US10841491B2 (en) 2016-03-16 2020-11-17 Analog Devices, Inc. Reducing power consumption for time-of-flight depth imaging
CN109448045B (en) * 2018-10-23 2021-02-12 南京华捷艾米软件科技有限公司 SLAM-based planar polygon measurement method and machine-readable storage medium

Also Published As

Publication number Publication date
CN113826376A (en) 2021-12-21
JP2022533975A (en) 2022-07-27
JP7346594B2 (en) 2023-09-19
CN113826376B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
AU2014203801B2 (en) Image capture device having tilt and/or perspective correction
US11778403B2 (en) Personalized HRTFs via optical capture
JP5961945B2 (en) Image processing apparatus, projector and projector system having the image processing apparatus, image processing method, program thereof, and recording medium recording the program
CN108932051B (en) Augmented reality image processing method, apparatus and storage medium
WO2017020150A1 (en) Image processing method, device and camera
JP7548228B2 (en) Information processing device, information processing method, program, projection device, and information processing system
US10154241B2 (en) Depth map based perspective correction in digital photos
CN114600162A (en) Scene lock mode for capturing camera images
JP5857712B2 (en) Stereo image generation apparatus, stereo image generation method, and computer program for stereo image generation
JP6990694B2 (en) Projector, data creation method for mapping, program and projection mapping system
JP2014215755A (en) Image processing system, image processing apparatus, and image processing method
WO2020237441A1 (en) User equipment and method of oblique view correction
WO2021149509A1 (en) Imaging device, imaging method, and program
WO2021093804A1 (en) Omnidirectional stereo vision camera configuration system and camera configuration method
CN113747011A (en) Auxiliary shooting method and device, electronic equipment and medium
US20230244305A1 (en) Active interactive navigation system and active interactive navigation method
JP2021007231A (en) Information processing device, information processing system, and image processing method
CN115700764A (en) Control method, tracking system and non-transitory computer readable medium
CN118138732A (en) Trapezoidal correction method, device, equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931403

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021568633

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931403

Country of ref document: EP

Kind code of ref document: A1