WO2013174354A2 - Method and System for Single-Camera Ranging - Google Patents

Method and System for Single-Camera Ranging

Info

Publication number
WO2013174354A2
WO2013174354A2 (application PCT/CN2013/080563, CN 2013 080563 W); also published as WO2013174354A3
Authority
WO
WIPO (PCT)
Prior art keywords
mobile terminal
target object
camera
module
ranging
Prior art date
Application number
PCT/CN2013/080563
Other languages
English (en)
French (fr)
Other versions
WO2013174354A3 (zh)
Inventor
Cao Heng (曹恒)
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Priority to US 14/648,690 (published as US20150310619A1)
Priority to EP 13794121.7 (granted as EP2927634B1)
Publication of WO2013174354A2 publication Critical patent/WO2013174354A2/zh
Publication of WO2013174354A3 publication Critical patent/WO2013174354A3/zh

Classifications

    • G01C 3/00 — Measuring distances in line of sight; optical rangefinders
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
    • G01B 11/14 — Measuring arrangements using optical techniques, for measuring distance or clearance between spaced objects or spaced apertures
    • G06T 7/13 — Image analysis; segmentation; edge detection
    • G06T 7/579 — Depth or shape recovery from multiple images, from motion
    • G06T 7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 2207/10004 — Still image; photographic image
    • G06T 2207/10012 — Stereo images
    • G06T 2207/10016 — Video; image sequence
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/20104 — Interactive definition of region of interest [ROI]
    • G06T 2207/30244 — Camera pose

Definitions

  • the present invention relates to the field of mobile terminal technologies, and in particular, to a method and system for single camera ranging.
  • Mobile terminals are mostly equipped with a rear-facing camera and a front-facing camera, whose shooting and imaging processes are similar.
  • The optical image of the photographed object formed through the lens is projected onto the surface of the image sensor, converted into an analog electrical signal, digitized by an analog-to-digital converter (ADC), and then processed by the image signal processor (ISP); finally, under the scheduling of the baseband processor, it is stored in memory and displayed on the screen of the mobile terminal.
  • The dual-camera method acquires images of the object to be measured with two cameras and, for a given point on the object, determines the distance of that point from the parallax between the two cameras' images.
  • The single-camera-plus-laser method emits a laser beam from a laser head, receives the reflected beam, and processes it to obtain the corresponding distance.
  • Both of the above ranging methods require additional devices on the mobile terminal, such as an extra camera or a laser head, and also require changes to the structure and design of the mobile terminal.
  • Embodiments of the present invention provide a method and system for single-camera ranging that realize distance measurement with a mobile terminal's single camera, without adding optical components.
  • The technical solution adopted by the present invention is a method for single-camera ranging that includes: a display input step, in which the camera of the mobile terminal captures a scene containing the target object, displays it on the screen, and receives the user's selection of the target object; a tracking step, in which the target object is identified and tracked while the mobile terminal translates toward it; a recording step, in which the distance the mobile terminal translates and the ratio of the target object's display widths before and after the movement are recorded; and a calculation step, in which the distance between the mobile terminal and the target object is calculated from the data recorded in the recording step.
  • The method further includes a monitoring step: monitor the attitude of the mobile terminal as it translates toward the target object; if rotation of the mobile terminal is detected, report ranging failure; if no rotation is detected, continue with the tracking step.
  • The method further includes a determining step: when the monitoring step detects that the mobile terminal has rotated, determine whether the rotation is legal; if so, continue with the tracking step, otherwise report ranging failure.
  • A legal rotation includes: for a mobile terminal whose camera is at the terminal's center, rotation of the terminal about its center within its own plane; for a mobile terminal whose camera is located elsewhere, rotation of the terminal about its center within its own plane while the line connecting the terminal's center and the target object's center is perpendicular to the terminal's plane.
  • The attitude of the mobile terminal during its translation toward the target object is monitored by a three-axis gyroscope on the mobile terminal.
  • the distance that the mobile terminal translates is acquired by the acceleration sensor on the mobile terminal and recorded.
  • The present invention also provides a single-camera ranging system located on a mobile terminal, the system comprising: a display input module, configured to capture a scene containing the target object through the camera of the mobile terminal, display it on the screen, and receive the user's selection of the target object; a tracking module, configured to identify and track the target object while the mobile terminal translates toward it; a recording module, configured to record the distance the mobile terminal translates and the ratio of the target object's display widths before and after the movement; and a calculation module, configured to calculate the distance between the mobile terminal and the target object from the data recorded by the recording module.
  • The system further includes a monitoring module, configured to monitor the attitude of the mobile terminal as it translates toward the target object; if rotation is detected, ranging failure is reported, and if no rotation is detected, the tracking module continues to be invoked to identify and track the target object.
  • The system further includes a determining module, configured to determine, when the monitoring module detects that the mobile terminal has rotated, whether the rotation is legal; if so, the tracking module continues to be invoked to identify and track the target object, otherwise ranging failure is reported. A legal rotation includes: for a mobile terminal whose camera is at the terminal's center, rotation of the terminal about its center within its own plane; for a mobile terminal whose camera is located elsewhere, rotation of the terminal about its center within its own plane while the line connecting the terminal's center and the target object's center is perpendicular to the terminal's plane.
  • The monitoring module is configured to monitor the attitude of the mobile terminal during its translation toward the target object by means of a three-axis gyroscope on the mobile terminal.
  • The recording module is configured to acquire and record the distance the mobile terminal translates by means of an acceleration sensor on the mobile terminal.
  • the embodiment of the present invention has at least the following advantages:
  • In the method and system for single-camera ranging of the embodiments of the present invention, in camera shooting mode the outer edge of the target object is continuously identified and tracked while the user translates the mobile terminal toward the target object; the mobile terminal then calculates its distance to the target object from the change in the target object's display width on the screen (or the change in the screen's framing width) and the distance the mobile terminal has translated.
  • The whole ranging process is completed with the mobile terminal's existing image-processing and motion-sensing functions, so single-camera distance measurement is realized on the mobile terminal without adding optical devices.
  • Adding an attitude-monitoring step while the mobile terminal pans further ensures the accuracy of the ranging.
  • FIG. 1 is a flowchart of a method for single-camera ranging according to a first embodiment of the present invention;
  • FIG. 2 is a flowchart of a method for single-camera ranging according to a second embodiment of the present invention;
  • FIG. 3 is a flowchart of a method for single-camera ranging according to a third embodiment of the present invention;
  • FIG. 4 is a schematic diagram of the system composition of a single-camera ranging system according to a fourth embodiment of the present invention;
  • FIG. 5 is a schematic diagram of the system composition of a single-camera ranging system according to a fifth embodiment of the present invention;
  • FIG. 6 is a schematic diagram of the system composition of a single-camera ranging system according to a sixth embodiment of the present invention;
  • FIG. 7 is a schematic diagram of the changes in the relevant distances and ratios before and after the mobile phone's camera translates toward the target object in the application example of the present invention;
  • FIG. 8 is a schematic diagram, seen from the screen's viewing angle, of the change in the target object's size before and after the camera translates toward the target object in the application example of the present invention;
  • FIG. 9 is a flowchart of the single-camera ranging method in the application example of the present invention;
  • FIG. 10 is a schematic diagram of the system composition of single-camera ranging in the application example of the present invention.
  • Current mobile terminals are equipped with devices capable of accurately determining their own motion and orientation, such as a three-axis gyroscope, whose main function is to measure angular velocity about the X, Y, and Z axes in three-dimensional space and thereby determine the device's state of motion.
  • the mobile terminal is also equipped with a device capable of accurately measuring the acceleration of a moving object, such as an Accelerometer.
  • A first embodiment of the present invention includes the following specific steps: Step S101: the camera of the mobile terminal captures a scene containing the target object, displays it on the screen, and receives the user's selection of the target object. Specifically, on a touch screen the user can input the outline of the target object by tapping or drawing a line on the screen.
  • Step S102: identify and track the target object while the mobile terminal translates toward it.
  • The mobile terminal can identify and track the target object selected in step S101 using existing image-processing algorithms. For example, when the brightness or color contrast between the target object and the background is large, image edge extraction algorithms can be used, such as: an adaptive-threshold multi-scale edge extraction algorithm based on B-spline wavelets; a multi-scale discrete Canny edge extraction algorithm combined with embedded credibility; or a quantum-statistical deformable-model edge contour extraction model.
  • Image tracking algorithms can also be used to identify and track the target object, such as particle-filter-based image tracking, a multi-information particle-filter tracking algorithm fusing structural information with the scale-invariant feature transform (SIFT), or an improved Hausdorff video target tracking method.
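The extractors cited above are specialized algorithms; as a loose illustration of the underlying idea only (not any of the patent's cited methods), a plain Sobel gradient threshold already separates a bright target from a dark background into a closed, trackable outline. A minimal sketch on a synthetic frame:

```python
# Illustrative stand-in for the edge-extraction step: Sobel gradient
# magnitude with a fixed threshold on a tiny grayscale image. Far simpler
# than the wavelet/Canny algorithms cited in the text, but shows the idea.

def sobel_edges(img, threshold=2.0):
    """Return a binary edge map for a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# A 7x7 frame: dark background (0) with a bright 3x3 "target" (9),
# mimicking a bright TV screen against a darker wall.
frame = [[9 if 2 <= y <= 4 and 2 <= x <= 4 else 0 for x in range(7)]
         for y in range(7)]
edge_map = sobel_edges(frame)
```

The edge map is nonzero around the target's border and zero at its fully interior pixel, which is exactly the outline the tracking step would follow frame to frame.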
  • Step S103: record the distance the mobile terminal translates and the ratio of the target object's display widths before and after the movement.
  • The recorded ratio of the target object's display widths before and after the movement may be replaced by the ratio of the screen's framing widths at the target object before and after the movement.
  • In either case, the target object must remain within the framing range of the mobile terminal's screen throughout.
  • the distance that the mobile terminal translates is acquired by the acceleration sensor on the mobile terminal and recorded.
  • Step S104: calculate the distance between the mobile terminal and the target object from the data recorded in the recording step.
  • A second embodiment of the present invention is a method for single-camera ranging, as shown in FIG. 2. Steps S201, S203, and S204 of this embodiment are the same as steps S101, S103, and S104 of the first embodiment, respectively; the difference is that a step S205, which monitors the attitude of the mobile terminal, is added while step S202 is performed.
  • Translation in the present invention is understood in contrast to rotation: as long as no rotation occurs while the terminal translates toward the target object, the accuracy of the ranging result can be ensured. Specifically: Step S205, monitor the attitude of the mobile terminal as it translates toward the target object; if rotation of the mobile terminal is detected, the ranging fails and the process ends; if no rotation is detected, continue with step S202.
  • A third embodiment of the present invention is a method for single-camera ranging, as shown in FIG. 3.
  • Steps S301, S303, and S304 of this embodiment are the same as steps S101, S103, and S104 of the first embodiment, respectively; the difference is that a step S305, which monitors the attitude of the mobile terminal while step S302 is performed, and a determining step S306 are added, as follows: Step S305, monitor the attitude of the mobile terminal as it translates toward the target object; if rotation of the mobile terminal is detected, go to step S306; if no rotation is detected, continue with step S302. Specifically, the attitude of the mobile terminal during the translation can be monitored by a three-axis gyroscope on the mobile terminal.
  • If the three-axis gyroscope reports the rotation azimuth, angular velocity, and similar data, the mobile terminal can further determine from these data whether the rotation is allowed, i.e., perform step S306: in practice the user holds the mobile terminal by hand, so slight jitter-induced rotation is easy to cause, but such rotation can be allowed as long as it does not affect the ranging result.
  • Step S306: determine whether the rotation is legal; if so, continue with step S302; otherwise the ranging fails and the process ends.
  • A legal rotation includes: for a mobile terminal whose camera is at the terminal's center, rotation of the terminal about its center within its own plane; and, for a mobile terminal whose camera is located elsewhere, rotation of the terminal about its center within its own plane while the line connecting the terminal's center and the target object's center is perpendicular to the terminal's plane.
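The legality decision of step S306 can be sketched as a small predicate. This is a hypothetical reading, not the patent's implementation: it assumes the gyroscope delivers per-axis rotation angles in degrees, and it introduces a jitter tolerance (the threshold value is invented for illustration). The patent's rule — in-plane rotation about the center is legal when the camera sits at the center — maps to allowing rotation only about the axis perpendicular to the screen.

```python
# Hypothetical sketch of the "is this rotation legal?" check (step S306).
# JITTER_TOLERANCE_DEG is an assumed hand-jitter allowance, not from the patent.

JITTER_TOLERANCE_DEG = 1.0

def rotation_is_legal(rx_deg, ry_deg, rz_deg, camera_at_center=True):
    """Return True if a detected rotation does not invalidate the ranging."""
    # Tilting the terminal out of its own plane (about x or y) changes the
    # viewing direction, so only jitter-level amounts are tolerated.
    if abs(rx_deg) > JITTER_TOLERANCE_DEG or abs(ry_deg) > JITTER_TOLERANCE_DEG:
        return False
    # In-plane rotation about the terminal's center keeps a centered camera
    # on the same optical axis, so any amount is legal.
    if camera_at_center:
        return True
    # For an off-center camera the patent additionally requires the
    # center-to-target line to stay perpendicular to the terminal's plane;
    # lacking that signal here, only jitter-level rotation is allowed.
    return abs(rz_deg) <= JITTER_TOLERANCE_DEG
```

With a centered camera, an in-plane spin is accepted while a forward tilt is rejected, matching the rule stated above.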
  • A fourth embodiment of the present invention is a single-camera ranging system located on a mobile terminal. As shown in FIG. 4, the system includes: the display input module 100, configured to capture a scene containing the target object through the camera of the mobile terminal, display it on the screen, and receive the user's selection of the target object.
  • the display input module 100 can receive the outline of the target object input by the user on the screen by clicking or drawing a line.
  • the display input module 100 can also receive the outline of the target object that the user inputs through the keys on the keyboard.
  • Alternatively, the user only needs to tap, via the display input module 100, the approximate area where the target object is located, and the mobile terminal can recognize the target object in or near that area.
  • The tracking module 200 is configured to identify and track the target object while the mobile terminal translates toward it.
  • The tracking module 200 can identify and track the target object selected via the display input module 100 using existing image-processing algorithms. For example, when the brightness or color contrast between the target object and the background is large, image edge extraction algorithms can be used, such as: an adaptive-threshold multi-scale edge extraction algorithm based on B-spline wavelets; a multi-scale discrete Canny edge extraction algorithm combined with embedded credibility; or a quantum-statistical deformable-model edge contour extraction model.
  • Image tracking algorithms can also be used to identify and track the target object, such as particle-filter-based image tracking, a multi-information particle-filter tracking algorithm fusing structural information with the scale-invariant feature transform (SIFT), or an improved Hausdorff video target tracking method.
  • The recording module 300 is configured to record the distance the mobile terminal translates and the ratio of the target object's display widths before and after the movement.
  • The ratio of the target object's display widths before and after the movement, as recorded by the recording module 300, can be replaced by the ratio of the screen's framing widths at the target object before and after the movement.
  • the target object is always within the viewing range of the screen of the mobile terminal.
  • the recording module 300 can acquire the distance translated by the mobile terminal and record by using an acceleration sensor provided on the mobile terminal.
  • the calculation module 400 is configured to calculate a distance between the mobile terminal and the target object based on the data recorded by the recording module.
  • A fifth embodiment of the present invention is a single-camera ranging system; as shown in FIG. 5, the display input module 100, recording module 300, and calculation module 400 of this embodiment's system are the same as the corresponding modules in the fourth embodiment.
  • The difference is that a monitoring module 500 is added to monitor the attitude of the mobile terminal while the tracking module 200 runs, which further ensures the accuracy of the ranging result, as follows:
  • The monitoring module 500 is configured to monitor the attitude of the mobile terminal as it translates toward the target object; if rotation of the mobile terminal is detected, it reports ranging failure, and if no rotation is detected, it continues to invoke the tracking module 200 to identify and track the target object.
  • A sixth embodiment of the present invention is a single-camera ranging system; as shown in FIG. 6, the display input module 100, recording module 300, and calculation module 400 of this embodiment's system are the same as the corresponding modules in the fourth embodiment.
  • The difference is that a monitoring module 500, which monitors the attitude of the mobile terminal while the tracking module 200 runs, and a determining module 600 are added, as follows:
  • The monitoring module 500 monitors the attitude of the mobile terminal as it translates toward the target object. If rotation of the mobile terminal is detected, the determining module 600 is invoked; if no rotation is detected, the tracking module 200 continues to be invoked to identify and track the target object.
  • The monitoring module 500 can monitor this attitude via the three-axis gyroscope on the mobile terminal. If the gyroscope reports the rotation azimuth, angular velocity, and similar data, the determining module 600 can further determine from these data whether the rotation is allowed: in practice, when the user holds the mobile terminal for ranging, hand jitter easily causes slight rotation, but such rotation can be allowed as long as it does not affect the ranging result.
  • The determining module 600 is configured to determine whether the rotation is legal; if so, the tracking module 200 continues to be invoked to identify and track the target object, otherwise ranging failure is reported. Preferably, a legal rotation includes: for a mobile terminal whose camera is at the terminal's center, rotation of the terminal about its center within its own plane; for a mobile terminal whose camera is located elsewhere, rotation of the terminal about its center within its own plane while the line connecting the terminal's center and the target object's center is perpendicular to the terminal's plane.
  • Figure 7 is a schematic diagram of the changes in the relevant distances and ratios before and after the mobile phone's camera translates toward the target object.
  • Figures 8(a) and 8(b) are schematic diagrams, seen from the screen's viewing angle, of the change in the target object's size and the related ratios before and after the camera translates toward the object to be measured.
  • The camera translates horizontally from the initial position A1 to position A2, moving a distance S toward the target object, so that the screen's framing width at the plane of the target object changes from W1 to W2.
  • D1 and D2 are the distances from the camera to the target object at positions A1 and A2, respectively. The calculation of D1 is taken as an example below.
  • At position A1, the width of the target object occupies a ratio K1 of the screen's framing width W1 at the target object; at position A2, the corresponding ratio is K2.
  • Because the framing width at a plane is proportional to the camera's distance from that plane, K1·D1 = K2·D2, and since D2 = D1 − S, it follows that D1 = S·K2 / (K2 − K1). That is, knowing only the ratios K1 and K2 and the translation distance S is enough to calculate D1.
  • The distance D1 from the camera to the target object is thus obtained by the formula D1 = S·K2 / (K2 − K1); D2 follows in the same way.
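The formula can be checked numerically against a simulated pinhole camera, where the framing width at distance d is 2·d·tan(fov/2). All values below (target width, distances, field of view) are hypothetical, chosen only to exercise the formula:

```python
import math

def distance_before_move(s, k1, k2):
    """D1 = S * K2 / (K2 - K1): distance from the initial camera position
    to the target, from the translation S and the two width ratios."""
    return s * k2 / (k2 - k1)

# Simulated pinhole check: a 1.0 m wide target viewed with a 60-degree
# horizontal field of view from 3.0 m, then from 2.5 m (0.5 m closer).
fov = math.radians(60.0)
target_width, d1_true, step = 1.0, 3.0, 0.5

def framing(d):
    """Framing width at distance d under the pinhole model."""
    return 2.0 * d * math.tan(fov / 2.0)

k1 = target_width / framing(d1_true)          # ratio before the move
k2 = target_width / framing(d1_true - step)   # ratio after the move
d1_est = distance_before_move(step, k1, k2)   # recovers d1_true
```

The recovered `d1_est` equals the simulated 3.0 m, confirming that the ratios and the translation distance alone determine D1 under the proportionality assumption.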
  • The user has a smartphone with a 4.5-inch HD (1280 × 720) IPS capacitive touch screen, an 8-megapixel rear camera, a 1.3-megapixel front camera, a three-axis gyroscope, and an acceleration sensor.
  • The user wants to know the approximate straight-line distance from the sofa where they are sitting to the television that is switched on in front of them.
  • The user first enters the phone's camera interface, selects the 8-megapixel rear camera for shooting, and selects "ranging photo mode" in the camera function menu.
  • the phone screen still displays the image captured by the camera in real time.
  • The smartphone activates and initializes the three-axis gyroscope and the acceleration sensor to sense the phone's attitude and movement. If either device fails to work properly during startup or initialization, the screen prompts "attitude or motion sensing device failed to start", exits the ranging photo mode, and enters the normal photo mode. The screen then prompts the user to tap, in the captured image, the outline of the target object to be measured. The user taps the outline of the TV screen on the phone screen and confirms that the selection is complete.
  • Since the TV screen is brighter than the TV frame and the background wall, the phone, by computing on the image displayed on the screen, recognizes the outline of the TV screen, which is easy to distinguish and track. After recognition and tracking succeed, the phone prompts on screen that contour tracking is in progress, and at the same time calculates the ratio of the contour's display width to the width of the phone screen. If the phone finds that the target object selected by the user cannot be identified and tracked, it prompts on screen that contour recognition failed, asking the user either to exit the ranging photo mode or to re-select the target object's contour.
  • The phone screen prompts the user to pan the phone toward the target object, while the three-axis gyroscope monitors the phone's attitude, ensuring that the user moves the phone in parallel along the horizontal axis between the phone and the target object, toward the target object. Since both the phone and the target object are perpendicular to the horizontal plane, the horizontal axis between them is the line connecting the phone's center and the target object's center. If the phone detects that the user is not moving it in parallel along this axis, it prompts on screen that ranging failed and asks the user to restart the ranging photo mode.
  • While the user pans the phone, the phone keeps tracking the contour of the target object. If contour tracking fails, the screen prompts that contour tracking of the target object failed, and the ranging photo mode is exited.
  • While the user pans the phone, the phone performs a second integration on the acceleration data collected by the acceleration sensor to obtain the phone's translation distance, and continues to track the change of the TV screen's outline. The user moves the phone a short distance toward the TV screen along the horizontal axis and then stops.
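The "second integration" of accelerometer data mentioned above can be sketched as two passes of trapezoidal integration, assuming the phone starts at rest; the sampling rate and the constant-acceleration samples below are hypothetical values chosen so the result is easy to check:

```python
def translation_from_acceleration(samples, dt):
    """Integrate acceleration samples (m/s^2) twice to get displacement (m),
    assuming the phone starts at rest. Uses the trapezoidal rule for both
    the velocity and the position integrals."""
    velocity, position = 0.0, 0.0
    prev_a, prev_v = samples[0], 0.0
    for a in samples[1:]:
        velocity += 0.5 * (prev_a + a) * dt   # v = integral of a
        position += 0.5 * (prev_v + velocity) * dt  # x = integral of v
        prev_a, prev_v = a, velocity
    return position

# Constant 1 m/s^2 for 1 s sampled at 100 Hz: displacement = 0.5*a*t^2 = 0.5 m.
dt = 0.01
samples = [1.0] * 101  # 101 samples span exactly 1.0 s
distance = translation_from_acceleration(samples, dt)
```

In practice raw accelerometer data would also need gravity removal and drift handling before integration; the sketch omits those steps.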
  • The phone screen prompts that the ranging calculation is in progress, while the phone calculates the ratio of the TV screen outline's display width to the phone screen width in the current state. Finally, from the ratios of the TV screen's display width to the phone screen width before and after the movement, together with the calculated translation distance, the phone obtains its distance to the TV screen before the movement; in the same way, the distance to the TV screen after the movement can also be derived.
  • The phone screen continues to prompt that the ranging calculation is in progress; once the calculation completes, the screen prompts that ranging is finished and displays the calculated distance between the phone's initial (or current) position and the TV screen.
  • After the result has been displayed for 3 seconds, the phone automatically exits the ranging photo mode and enters the normal camera mode.
  • The implementation system of the application example of the present invention is further described below with reference to FIG. 10:
  • Image capture and processing module 10: the smartphone camera's lens, image sensor, analog-to-digital converter, image signal processor (ISP), and related mechanical devices; this module converts the image of the target object into a digital image signal.
  • Acceleration sensing module 20 that is, an accelerometer of a smart phone and its mechanical device, a digital-to-analog converter, etc.;
  • Attitude sensing module 30 that is, a three-axis gyro instrument of a smart phone and a digital-to-analog converter thereof;
  • Image display and touch sensing module 40 that is, a display screen or a touch screen module of a smart phone, and a digital-to-analog converter thereof;
  • the application processing module 50 that is, the application processor chip of the smart phone, can process signals such as digital image signals, acceleration sensing signals, and attitude sensing signals, and complete recognition, tracking, and posture monitoring and displacement of the measured object. The calculation and the like, and can control the image display and the touch sensing module 40, output the prompt to the user, and handle the touch operation of the user.
  • Application processing module 50 is the core module of the system in this application example: it controls image capture and processing module 10, acceleration sensing module 20, attitude sensing module 30, and image display and touch sensing module 40, and receives and processes the signals from those four modules.
  • Image display and touch sensing module 40 implements capturing a picture containing the target object through the mobile terminal's camera, displaying it on the screen, and receiving the user's input of the target object's outline.
  • The interaction between application processing module 50 and image capture and processing module 10 implements recognizing and tracking the target object and obtaining the ratio of the object's displayed widths before and after the mobile terminal is moved.
  • The interaction between application processing module 50 and acceleration sensing module 20 implements recording the distance the mobile terminal is translated, and attitude sensing module 30 implements monitoring the terminal's attitude while it is translated toward the target object.
  • Further, application processing module 50 processes the data reported by attitude sensing module 30 to judge whether a rotation of the mobile terminal is legal.
  • Finally, application processing module 50 computes the distance between the mobile terminal and the target object from the recorded data.
  • Application processing module 50 may be merged with the smartphone's baseband processor chip, so that the baseband chip also carries the application processing module's functions.
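The legality judgment that module 50 applies to gyroscope reports can be sketched as below. The patent states the rule geometrically (rotation about the terminal's center within its own plane, with conditions on where the camera sits), so the vector test and all names here are illustrative assumptions, not the claimed implementation:

```python
def rotation_is_legal(axis, camera_centered, facing_target, tol=1e-6):
    """Decide whether a detected rotation may be tolerated.

    axis: unit 3-vector of the rotation axis in the terminal's frame,
    where (0, 0, 1) is normal to the terminal's plane.  Per the claimed
    rule, only in-plane spins about the centre are legal, and only when
    the camera is at the centre, or the centre-to-target line is
    perpendicular to the terminal's plane.
    """
    in_plane_component = (axis[0] ** 2 + axis[1] ** 2) ** 0.5
    about_normal = in_plane_component < tol       # spin within the terminal plane
    return about_normal and (camera_centered or facing_target)

print(rotation_is_legal((0.0, 0.0, 1.0), camera_centered=True, facing_target=False))
print(rotation_is_legal((1.0, 0.0, 0.0), camera_centered=True, facing_target=True))
```

Any other rotation would change the camera's viewing geometry between the two measurements, which is why the method reports ranging failure instead of tolerating it.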
  • In the single-camera ranging method and system of the present invention, with the mobile terminal in camera photo mode the target object's outer edge is continuously recognized and tracked, and the user translates the terminal toward the object.
  • The terminal computes the distance between itself and the target object from the change in the object's displayed width on the screen (or in the screen's viewfinder width) and the distance the terminal is translated.
  • The whole ranging process is completed with the terminal's existing image-processing and motion-sensing functions, so single-camera distance measurement is achieved on the terminal without adding optical components.
  • The added step of monitoring the terminal's attitude during the translation further guarantees ranging accuracy.
  • The foregoing description of specific embodiments should give a deeper and more concrete understanding of the technical means and effects by which the invention achieves its intended purpose.
  • The accompanying drawings are provided for reference and illustration only and are not intended to limit the invention.
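The claimed calculation can be sketched in a few lines. The names are invented; the relationship D1 = K2·d/(K2−K1), where K1 and K2 are the object's displayed-width-to-screen-width fractions before and after translating by d, follows from similar triangles in the patent's application example:

```python
def distance_before_move(k1, k2, d):
    """Distance from the start position to the target object.

    k1, k2: fraction of the screen width occupied by the object's
    displayed width before and after the translation; d: translated
    distance (same unit as the result).  Moving toward the object
    makes it grow on screen, so k2 > k1.
    """
    if k2 <= k1:
        raise ValueError("expected the object to grow on screen (k2 > k1)")
    return k2 * d / (k2 - k1)

# Halving the distance doubles the on-screen width: k1 = 0.2 -> k2 = 0.4
# after moving d = 1.0 m means the camera started 2.0 m away.
print(distance_before_move(0.2, 0.4, 1.0))
```

Note how the result degrades as k2 approaches k1: a very short translation changes the on-screen width only slightly, so sensor noise in either ratio is amplified by the small denominator.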

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)
  • Image Analysis (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The present invention discloses a single-camera ranging method and system. With the mobile terminal in camera photo mode, the outer edge of a target object is continuously recognized and tracked while the user translates the terminal toward the object. From the change in the object's displayed width on the screen (or in the screen's viewfinder width), together with the distance the terminal is translated, the terminal computes the distance between itself and the target object. The whole ranging process relies on the terminal's existing image-processing and motion-sensing functions, so single-camera distance measurement is achieved on the terminal without adding optical components. An added step that monitors the terminal's attitude during the translation further guarantees ranging accuracy.

Description

Method and System for Single-Camera Ranging

Technical Field

The present invention relates to the field of mobile terminal technology, and in particular to a single-camera ranging method and system.

Background

Most mobile terminals carry a rear-facing camera and a front-facing camera, whose capture and imaging processes are similar. Light from the scene passes through the lens and forms an optical image on the surface of the image sensor, which converts it into an analog electrical signal; an analog-to-digital converter chip turns this into a digital image signal, which is processed by the image signal processor and finally, under the scheduling of the baseband processor, stored in memory and displayed on the terminal's screen. Current camera-based ranging methods are mainly the dual-camera method and the single-camera-plus-laser method. The dual-camera method captures the object to be measured with two cameras and determines the distance of a point on the object from its parallax between the two images. The single-camera-plus-laser method receives and processes a laser beam emitted by a laser head to obtain the corresponding distance. Applying either method on a mobile terminal requires extra components, such as an additional camera or a laser head, as well as changes to the terminal's structure and industrial design.

Summary

Embodiments of the present invention provide a single-camera ranging method and system that achieve single-camera distance measurement on a mobile terminal without adding optical components. The technical solution adopted is a single-camera ranging method comprising: a display-and-input step, in which the terminal's camera captures a picture containing the target object, displays it on the screen, and receives the user's selection of the target object; a tracking step, in which the target object is recognized and tracked while the terminal is translated toward it; a recording step, in which the distance the terminal is translated and the ratio of the object's displayed widths before and after the move are recorded; and a calculating step, in which the distance between the terminal and the target object is computed from the data recorded in the recording step.

Preferably, the method further comprises a monitoring step: the terminal's attitude is monitored during the translation toward the target object; if a rotation of the terminal is detected, ranging failure is reported; if no rotation is detected, the tracking step continues. Preferably, the method further comprises a judging step: when the monitoring step detects a rotation, whether the rotation is legal is judged; if so, the tracking step continues, otherwise ranging failure is reported. A legal rotation comprises: for a terminal whose camera sits at the terminal's center, rotation of the terminal about its center within its own plane; and, for a terminal whose camera sits elsewhere, rotation of the terminal about its center within its own plane while the line from the terminal's center to the object's center is perpendicular to the terminal's plane. Preferably, in the monitoring step the attitude during the translation is monitored by a three-axis gyroscope on the terminal. Preferably, in the recording step the translated distance is obtained and recorded by an accelerometer on the terminal.

The present invention further provides a single-camera ranging system residing on a mobile terminal, comprising: a display-and-input module configured to capture a picture containing the target object through the terminal's camera and display it on the screen, and to receive the user's selection of the target object; a tracking module configured to recognize and track the target object while the terminal is translated toward it; a recording module configured to record the translated distance and the ratio of the object's displayed widths before and after the move; and a calculating module configured to compute the distance between the terminal and the target object from the data recorded by the recording module. Preferably, the system further comprises a monitoring module configured to monitor the terminal's attitude during the translation: if a rotation is detected it reports ranging failure, otherwise it continues to invoke the tracking module to recognize and track the target object. Preferably, the system further comprises a judging module configured, when the monitoring module detects a rotation, to judge whether it is legal: if so, the tracking module continues to be invoked to recognize and track the target object, otherwise ranging failure is reported; legal rotations are as defined above. Preferably, the monitoring module monitors the attitude during the translation through the terminal's three-axis gyroscope. Preferably, the recording module obtains and records the translated distance through the terminal's accelerometer.

With the above solution, embodiments of the invention have at least the following advantages. In the single-camera ranging method and system of the embodiments, with the terminal in camera photo mode the object's outer edge is continuously recognized and tracked, the user translates the terminal toward the object, and from the change in the object's displayed width on the screen (or in the screen's viewfinder width), together with the translated distance, the terminal computes the distance between itself and the target object. The whole ranging process relies on the terminal's existing image-processing and motion-sensing functions, so single-camera distance measurement is achieved without adding optical components; the added attitude-monitoring step during translation further guarantees ranging accuracy.

Brief Description of the Drawings

Fig. 1 is a flowchart of the single-camera ranging method of the first embodiment of the invention; Fig. 2 is a flowchart of the method of the second embodiment; Fig. 3 is a flowchart of the method of the third embodiment; Fig. 4 is a block diagram of the single-camera ranging system of the fourth embodiment; Fig. 5 is a block diagram of the system of the fifth embodiment; Fig. 6 is a block diagram of the system of the sixth embodiment; Fig. 7 shows how the relevant distances and ratios change before and after the phone's camera is translated toward the target object in an application example of the invention; Figs. 8(a) and 8(b) show, from the screen's viewpoint, the change in the target object's displayed size and the related ratios before and after the translation; Fig. 9 is a flowchart of the single-camera ranging method of the application example; Fig. 10 is a block diagram of the system of the application example.

Detailed Description

To further explain the technical means and effects by which the invention achieves its intended purpose, the invention is described in detail below with reference to the drawings and preferred embodiments. Today's mobile terminals carry devices that can accurately determine their own orientation, such as a three-axis gyroscope, whose main role is to measure the angular velocity about the X, Y, and Z axes in three-dimensional space and thereby determine the motion state of the device. They also carry devices that accurately measure the acceleration of a moving body, such as an accelerometer: when the accelerometer accelerates, the proof mass inside it is driven in the opposite direction by inertia, its displacement is constrained by a spring and a damper, and the external acceleration can be read from the output voltage. Moreover, displacement is the double integral of acceleration, so an accelerometer can also be used to measure displacement.

First embodiment of the invention: a single-camera ranging method, as shown in Fig. 1, comprising the following steps.

Step S101: the terminal's camera captures a picture containing the target object and displays it on the screen, and the user's selection of the target object is received. Specifically, on a screen with touch capability the user may input the object's outline by tapping or drawing lines on the screen. On a terminal with an ordinary screen, the user may input the outline through the keypad. Alternatively, on a terminal able to recognize objects on the screen automatically, the user merely taps the approximate region where the object lies, and the terminal identifies the object in or near that region.

Step S102: the target object is recognized and tracked while the terminal is translated toward it. Specifically, the terminal may recognize and track the object input in step S101 with existing image-processing algorithms. For example, when the object differs markedly from the background in brightness or color, image edge-extraction algorithms may be used, such as the adaptive-threshold multi-scale edge-extraction algorithm based on B-spline wavelets, the multi-scale discrete Canny edge-extraction algorithm with embedded confidence, or the new edge-contour extraction model, the quantum-statistical deformable-model edge-tracking algorithm; particle-filter-based image tracking, the multi-cue particle-filter tracking algorithm fusing structural information with the scale-invariant feature transform, or the improved Hausdorff video-target tracking method may also be used to recognize and track the target object.

Step S103: the distance the terminal is translated and the ratio of the object's displayed widths before and after the move are recorded. Specifically, the recorded ratio of the object's displayed widths before and after the move may instead be replaced by the ratio of the screen's viewfinder widths at the object before and after the move. Throughout the translation the object stays within the viewfinder range of the terminal's screen. In addition, the translated distance is obtained and recorded by the accelerometer on the terminal.

Step S104: the distance between the terminal and the target object is computed from the data recorded in the recording step.

Second embodiment of the invention: a single-camera ranging method, as shown in Fig. 2. Steps S201, S203, and S204 of this embodiment are the same as steps S101, S103, and S104 of the first embodiment; the difference is that, concurrently with step S202, a step S205 is added that monitors the terminal's attitude during the translation toward the target object. "Translation" in this invention is defined chiefly in contrast to rotation: if no rotation occurs while the terminal is moved toward the object, the accuracy of the ranging result is assured. Step S205: the attitude during the translation is monitored; if a rotation of the terminal is detected, ranging failure is reported and the flow ends; if no rotation is detected, step S202 continues.

Third embodiment of the invention: a single-camera ranging method, as shown in Fig. 3. Steps S301, S303, and S304 of this embodiment are the same as steps S101, S103, and S104 of the first embodiment; the difference is that, concurrently with step S302, a monitoring step S305 and a judging step S306 are added, as follows. Step S305: the attitude during the translation toward the target object is monitored; if a rotation is detected, step S306 is executed; if no rotation is detected, step S302 continues. Specifically, the three-axis gyroscope on the terminal may monitor the attitude during the translation. When the terminal rotates, the gyroscope reports data such as the direction and angular velocity of the rotation, from which the terminal can further judge whether the rotation is permitted, i.e. execute step S306 — in practice, when a user holds the terminal by hand while ranging, jitter easily causes slight rotations, which can be permitted as long as they do not affect the ranging result. Step S306: whether the rotation is legal is judged; if so, step S302 continues, otherwise ranging failure is reported and the flow ends. Specifically, legal rotations comprise: for a terminal whose camera is at the terminal's center, rotation of the terminal about its center within its own plane; and, for a terminal whose camera sits elsewhere, rotation of the terminal about its center within its own plane while the line from the terminal's center to the object's center is perpendicular to the terminal's plane.

Fourth embodiment of the invention: a single-camera ranging system, residing on a mobile terminal, as shown in Fig. 4, comprising the following modules.

Display-and-input module 100: configured to capture a picture containing the target object through the terminal's camera and display it on the screen, and to receive the user's selection of the target object. Specifically, on a screen with touch capability, module 100 receives the object outline the user inputs by tapping or drawing lines on the screen; on a terminal with an ordinary screen, it may receive the outline input through the keypad; alternatively, on a terminal able to recognize objects on the screen automatically, the user merely taps the approximate region through module 100 and the terminal identifies the object in or near that region.

Tracking module 200: configured to recognize and track the target object while the terminal is translated toward it. Specifically, module 200 may recognize and track the object input through module 100 with existing image-processing algorithms, such as the edge-extraction and tracking algorithms listed for step S102 (adaptive-threshold multi-scale edge extraction based on B-spline wavelets, multi-scale discrete Canny edge extraction with embedded confidence, the quantum-statistical deformable-model edge-tracking algorithm, particle-filter-based image tracking, multi-cue particle-filter tracking fusing structural information with the scale-invariant feature transform, or the improved Hausdorff video-target tracking method).

Recording module 300: configured to record the translated distance and the ratio of the object's displayed widths before and after the move. Preferably, the displayed-width ratio recorded by module 300 may be replaced by the ratio of the screen's viewfinder widths at the object before and after the move. Throughout the translation the object stays within the viewfinder range of the terminal's screen. In addition, module 300 may obtain and record the translated distance through the accelerometer fitted on the terminal.

Calculating module 400: configured to compute the distance between the terminal and the target object from the data recorded by the recording module.

Fifth embodiment of the invention: as shown in Fig. 5, the display-and-input module 100, recording module 300, and calculating module 400 of this embodiment are the same as the corresponding modules of the fourth embodiment; the difference is the addition of a monitoring module 500 that monitors the terminal's attitude during the translation while tracking module 200 runs, which assures the accuracy of the ranging result, as follows. Monitoring module 500: configured to monitor the terminal's attitude during the translation toward the target object; if a rotation is detected, ranging failure is reported; if no rotation is detected, tracking module 200 continues to be invoked to recognize and track the target object.

Sixth embodiment of the invention: as shown in Fig. 6, modules 100, 300, and 400 of this embodiment are the same as the corresponding modules of the fourth embodiment; the difference is the addition, during the operation of tracking module 200, of a monitoring module 500 and a judging module 600, as follows. Monitoring module 500: configured to monitor the terminal's attitude during the translation toward the target object; if a rotation is detected, judging module 600 is invoked; if no rotation is detected, tracking module 200 continues to be invoked to recognize and track the target object. Specifically, module 500 may monitor the attitude during the translation through the terminal's three-axis gyroscope. When the terminal rotates, the gyroscope reports data such as the direction and angular velocity of the rotation, from which judging module 600 can further judge whether the rotation is permitted — when a user holds the terminal by hand while ranging, jitter easily causes slight rotations, which can be permitted as long as they do not affect the ranging result. Judging module 600: configured to judge whether the rotation is legal; if so, tracking module 200 continues to be invoked to recognize and track the target object, otherwise ranging failure is reported. Preferably, legal rotations comprise: for a terminal whose camera is at the terminal's center, rotation of the terminal about its center within its own plane; for a terminal whose camera sits elsewhere, rotation of the terminal about its center within its own plane while the line from the terminal's center to the object's center is perpendicular to the terminal's plane.

Based on the above embodiments, an application example in which a mobile phone ranges with a single camera is described below with reference to Figs. 7, 8, 9, and 10. Fig. 7 shows the changes in the relevant distances and ratios before and after the phone's camera is translated toward the target object; Figs. 8(a) and 8(b) show, from the screen's viewpoint, the corresponding changes in the object's displayed size and the related ratios before and after the camera is translated toward the object to be measured.
As Fig. 7 shows, the camera moves horizontally from its initial position A1 to position A2, so the camera-to-object distance changes from D1 to D2 and the moved distance is d = D1 − D2, while the object's width L stays constant and the screen's viewfinder width through the camera at the object changes from W1 to W2. D1 (or D2) is the object-to-camera distance to be computed; D1 is taken as the example below.

First, with reference to Fig. 7, the fraction of the viewfinder width at the object occupied by the object's width changes as the camera moves: K1 = L / W1 before the move and K2 = L / W2 after it. From the proportions shown in Fig. 7, D1 / D2 = W1 / W2, and since W1 / W2 = K2 / K1, it follows that D1 / D2 = K2 / K1. Because D2 = D1 − d, we have D1 / (D1 − d) = K2 / K1, which solves to

D1 = K2 · d / (K2 − K1).

That is, knowing the ratio between K1 and K2 (together with the moved distance d) suffices to compute D1. In summary, as a matter of physical principle: when the camera is translated toward the object by a distance d, and the fractions of the viewfinder width at the object occupied by the object's width before and after the move are K1 and K2 respectively, the camera-to-object distance is D1 = K2 · d / (K2 − K1).

In actual operation, with reference to Figs. 8(a) and 8(b), the fraction of the screen width occupied by the object's displayed width also changes as the camera moves: the displayed width changes from L1 to L2 while the phone's screen width W is fixed, so the ratios above can equivalently be taken as K1 = L1 / W and K2 = L2 / W, and D1 is again computed from D1 = K2 · d / (K2 − K1). Since the camera sits on the mobile terminal, the camera-to-object distance is, relative to the target object, the terminal-to-object distance.
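The similar-triangles relationship above can be checked numerically against an ideal pinhole-camera model, in which the viewfinder width at the object plane grows linearly with distance. The field-of-view value and the function names below are illustrative assumptions, not taken from the patent:

```python
import math

def on_screen_fraction(obj_width, distance, fov_deg):
    """Fraction of the viewfinder the object occupies at a given distance,
    for an ideal pinhole camera with horizontal field of view fov_deg."""
    view_width = 2.0 * distance * math.tan(math.radians(fov_deg) / 2.0)
    return obj_width / view_width

# Simulated scene: a 1.0 m wide TV, camera starting at D1 = 3.0 m,
# then moved d = 0.5 m closer, with an assumed 60-degree field of view.
D1, d, fov = 3.0, 0.5, 60.0
k1 = on_screen_fraction(1.0, D1, fov)
k2 = on_screen_fraction(1.0, D1 - d, fov)
recovered = k2 * d / (k2 - k1)   # the derivation's D1 = K2*d/(K2-K1)
print(round(recovered, 6))
```

The recovered distance matches the simulated D1 regardless of the field-of-view value chosen, because the focal-length factor cancels in the ratio K2/K1 — which is exactly why the method needs no camera calibration.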
The implementation flow of the application example of the invention is further described below with reference to Fig. 9. The user holds a smartphone with a 4.5-inch HD (1280x720) IPS capacitive touch screen, an 8-megapixel rear camera and a 1.3-megapixel front camera, a three-axis gyroscope, and an accelerometer. The user wants to know the approximate straight-line distance from the sofa on which they sit to the television switched on directly in front of them.

The user first opens the smartphone's camera interface, shoots with the 8-megapixel camera, and selects "ranging photo mode" in the camera function menu; the phone's screen keeps displaying the live camera image. On entering ranging photo mode, the smartphone starts and initializes the three-axis gyroscope and the accelerometer so as to sense the phone's attitude and movement. If either device is found to work abnormally during startup or initialization, the screen shows "attitude or motion sensor failed to start", ranging photo mode exits, and normal photo mode resumes.

The screen first prompts the user to tap out, in the captured image, the outline of the target object to be ranged. The user taps the outline of the TV screen and confirms. Because the TV screen contrasts strongly in brightness with the TV bezel and the wall behind it, the phone, by computing on the displayed image, confirms a TV-screen outline that is easy to distinguish and track. Once the phone has recognized and locked onto the outline, the screen indicates that outline tracking is established; at the same time, the phone computes the fraction of the phone-screen width that the outline's displayed width occupies. If the phone finds it cannot distinguish or subsequently track the tapped outline, the screen reports outline-recognition failure and prompts the user either to exit ranging photo mode or to tap the outline again.

Next, the screen prompts the user to translate the phone toward the target object, while the three-axis gyroscope monitors the phone's attitude to ensure the user moves the phone parallel along the horizontal axis between phone and object, facing the object; since phone and object are both perpendicular to the horizontal plane, that axis is the line joining the phone's center to the object's center. If the phone detects that the user has not moved it parallel along that axis, the screen reports that this ranging shot failed and prompts a restart of ranging photo mode. While the user translates the phone, it also keeps tracking the target object's outline; if outline tracking fails, the screen reports outline-tracking failure and ranging photo mode exits.

While the user translates the phone, it double-integrates the acceleration data collected by the accelerometer to obtain the phone's translation distance, while continuing to track the changing TV-screen outline. The user translates the phone a short way along the horizontal axis toward the TV screen and stops. Once the accelerometer senses that the phone has stopped, the screen indicates that the ranging computation is in progress while the phone computes, in the current state, the fraction of the phone-screen width occupied by the outline's displayed width. Finally, from the computed before-and-after fractions and the computed translation distance, the phone obtains its distance to the TV screen before the move; the distance after the move can be obtained in the same way. During the computation the screen keeps indicating that ranging is in progress; when it finishes, the screen indicates that the ranging computation is complete and displays the computed distance from the phone's initial (or current) position to the TV screen. Three seconds after the result is displayed, the phone automatically exits ranging photo mode and enters normal photo mode.

The implementation system of the application example of the invention is further described below with reference to Fig. 10:
A. Image capture and processing module 10: the smartphone's camera lens, optics, image sensor, analog-to-digital conversion, digital image signal processor (ISP), and related mechanical parts — the module that converts the image of the target object into a digital image signal;

B. Acceleration sensing module 20: the smartphone's accelerometer together with its mechanics, analog-to-digital converter, and similar components;

C. Attitude sensing module 30: the smartphone's three-axis gyroscope and its analog-to-digital converter;

D. Image display and touch sensing module 40: the smartphone's display or touch-screen assembly and its analog-to-digital converter;

E. Application processing module 50: the smartphone's application processor chip, which processes the digital image, acceleration-sensing, and attitude-sensing signals; performs recognition and tracking of the measured object's outline, attitude monitoring, and displacement computation; and, by controlling image display and touch sensing module 40, outputs prompts to the user and handles the user's touch operations.

Application processing module 50 is the core module of the system in this application example: it controls image capture and processing module 10, acceleration sensing module 20, attitude sensing module 30, and image display and touch sensing module 40, and receives and processes the signals from those four modules. Image display and touch sensing module 40 implements capturing a picture containing the target object through the terminal's camera, displaying it on the screen, and receiving the user's input of the object's outline. The interaction between module 50 and module 10 implements recognizing and tracking the target object and obtaining the ratio of its displayed widths before and after the terminal is moved; the interaction between module 50 and module 20 implements recording the distance the terminal is translated; module 30 implements monitoring the terminal's attitude while it is translated toward the object; further, module 50 processes the data reported by module 30 to judge whether a rotation of the terminal is legal; finally, module 50 computes the distance between the terminal and the target object from the recorded data. Application processing module 50 may be merged with the smartphone's baseband processor chip, so that the baseband chip also carries the application processing module's functions.

In the single-camera ranging method and system of the present invention, with the mobile terminal in camera photo mode the target object's outer edge is continuously recognized and tracked, the user translates the terminal toward the object, and the terminal computes the distance between itself and the target object from the change in the object's displayed width on the screen (or in the screen's viewfinder width) and the distance the terminal is translated. The whole ranging process is completed with the terminal's existing image-processing and motion-sensing functions, so single-camera distance measurement is achieved on the terminal without adding optical components. In addition, the added step of monitoring the terminal's attitude during the translation further guarantees ranging accuracy. The foregoing description of specific embodiments should give a deeper and more concrete understanding of the technical means and effects by which the invention achieves its intended purpose; the accompanying drawings are provided for reference and illustration only and are not intended to limit the invention.
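One way the coordination role of application processing module 50 could be sketched is the loop below. This is a hypothetical driver interface: every method name is invented, failure handling is reduced to returning None, and the real chip would of course run these modules concurrently rather than polling them:

```python
def measure(phone):
    """Orchestrate one ranging pass the way module 50 is described to:
    record the width ratio, watch for illegal rotation while tracking
    the outline, integrate the translation, then apply the formula
    D1 = k2 * d / (k2 - k1).  `phone` is a hypothetical object exposing
    the four hardware modules of Fig. 10."""
    k1 = phone.display_ratio()          # module 10: width ratio before moving
    phone.start_motion_capture()        # modules 20/30: accel + gyro
    while not phone.stopped():
        if phone.rotated() and not phone.rotation_legal():
            return None                 # report ranging failure
        phone.track_contour()           # keep the target outline locked
    d = phone.integrated_distance()     # double-integrated translation
    k2 = phone.display_ratio()          # width ratio after moving
    return k2 * d / (k2 - k1)
```

The structure mirrors the claims: tracking, monitoring, and recording run during the translation, and the calculation happens only once the accelerometer reports that the terminal has stopped.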

Claims

Claims

1. A single-camera ranging method, comprising: a display-and-input step: a camera of a mobile terminal captures a picture containing a target object and displays it on a screen, and a user's selection of the target object is received; a tracking step: the target object is recognized and tracked while the mobile terminal is translated toward the target object; a recording step: the distance the mobile terminal is translated and the ratio of the target object's displayed widths before and after the move are recorded; and a calculating step: the distance between the mobile terminal and the target object is computed from the data recorded in the recording step.

2. The single-camera ranging method of claim 1, further comprising a monitoring step: the attitude of the mobile terminal during the translation toward the target object is monitored; if a rotation of the mobile terminal is detected, ranging failure is reported; if no rotation is detected, the tracking step continues.

3. The single-camera ranging method of claim 2, further comprising a judging step: when the monitoring step detects a rotation of the mobile terminal, whether the rotation is legal is judged; if so, the tracking step continues, otherwise ranging failure is reported; the legal rotation comprises: for a mobile terminal whose camera sits at the terminal's center, rotation of the terminal about its center within its own plane; for a mobile terminal whose camera sits elsewhere than the terminal's center, rotation of the terminal about its center within its own plane while the line joining the terminal's center to the target object's center is perpendicular to the terminal's plane.

4. The single-camera ranging method of claim 2, wherein in the monitoring step the attitude of the mobile terminal during the translation toward the target object is monitored by a three-axis gyroscope on the mobile terminal.

5. The method of claim 1, wherein in the recording step the distance the mobile terminal is translated is obtained and recorded by an accelerometer on the mobile terminal.

6. A single-camera ranging system, residing on a mobile terminal, the system comprising: a display-and-input module configured to capture a picture containing a target object through the mobile terminal's camera and display it on a screen, and to receive a user's selection of the target object; a tracking module configured to recognize and track the target object while the mobile terminal is translated toward it; a recording module configured to record the distance the mobile terminal is translated and the ratio of the target object's displayed widths before and after the move; and a calculating module configured to compute the distance between the mobile terminal and the target object from the data recorded by the recording module.

7. The single-camera ranging system of claim 6, further comprising a monitoring module configured to monitor the attitude of the mobile terminal during the translation toward the target object; if a rotation is detected, ranging failure is reported; if no rotation is detected, the tracking module continues to be invoked to recognize and track the target object.

8. The single-camera ranging system of claim 7, further comprising a judging module configured, when the monitoring module detects a rotation of the mobile terminal, to judge whether the rotation is legal; if so, the tracking module continues to be invoked to recognize and track the target object, otherwise ranging failure is reported; the legal rotation comprises: for a mobile terminal whose camera sits at the terminal's center, rotation of the terminal about its center within its own plane; for a mobile terminal whose camera sits elsewhere than the terminal's center, rotation of the terminal about its center within its own plane while the line joining the terminal's center to the target object's center is perpendicular to the terminal's plane.

9. The single-camera ranging system of claim 7, wherein the monitoring module is configured to monitor the attitude of the mobile terminal during the translation toward the target object by a three-axis gyroscope on the mobile terminal.

10. The single-camera ranging system of claim 6, wherein the recording module is configured to obtain and record the distance the mobile terminal is translated by an accelerometer on the mobile terminal.
PCT/CN2013/080563 2012-11-30 2013-07-31 一种单摄像头测距的方法和系统 WO2013174354A2 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/648,690 US20150310619A1 (en) 2012-11-30 2013-07-31 Single-Camera Distance Ranging Method and System
EP13794121.7A EP2927634B1 (en) 2012-11-30 2013-07-31 Single-camera ranging method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210504460.9 2012-11-30
CN201210504460.9A CN103017730B (zh) 2012-11-30 2012-11-30 一种单摄像头测距的方法和系统

Publications (2)

Publication Number Publication Date
WO2013174354A2 true WO2013174354A2 (zh) 2013-11-28
WO2013174354A3 WO2013174354A3 (zh) 2014-01-16

Family

ID=47966609

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/080563 WO2013174354A2 (zh) 2012-11-30 2013-07-31 一种单摄像头测距的方法和系统

Country Status (4)

Country Link
US (1) US20150310619A1 (zh)
EP (1) EP2927634B1 (zh)
CN (1) CN103017730B (zh)
WO (1) WO2013174354A2 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016183723A1 (zh) * 2015-05-15 2016-11-24 华为技术有限公司 一种测量的方法及终端

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103017730B (zh) * 2012-11-30 2015-04-01 中兴通讯股份有限公司 一种单摄像头测距的方法和系统
EP3828496A1 (en) * 2013-04-08 2021-06-02 SNAP Inc. Distance estimation using multi-camera device
CN103327385B (zh) * 2013-06-08 2019-03-19 上海集成电路研发中心有限公司 基于单一图像传感器的距离识别方法及装置
CN103345301B (zh) * 2013-06-18 2016-08-10 华为技术有限公司 一种深度信息获取方法和装置
CN103399629B (zh) * 2013-06-29 2017-09-19 华为技术有限公司 获取手势屏幕显示坐标的方法和装置
CN103366188B (zh) * 2013-07-08 2017-07-07 中科创达软件股份有限公司 一种基于拳头检测作为辅助信息的手势跟踪方法
CN103398696B (zh) * 2013-07-15 2015-09-16 深圳市金立通信设备有限公司 一种终端摄像头测距方法及终端
CN103440033B (zh) * 2013-08-19 2016-12-28 中国科学院深圳先进技术研究院 一种基于徒手和单目摄像头实现人机交互的方法和装置
TWI537580B (zh) * 2013-11-26 2016-06-11 財團法人資訊工業策進會 定位控制方法
CN103856869A (zh) * 2014-03-12 2014-06-11 深圳市中兴移动通信有限公司 音效处理方法和摄像装置
CN104469001A (zh) * 2014-12-02 2015-03-25 王国忠 一种具有拍照防抖功能的手机及其在拍照中的防抖方法
CN104536560B (zh) * 2014-12-10 2017-10-27 广东欧珀移动通信有限公司 一种调节终端字体大小的方法及装置
NO343441B1 (en) * 2015-02-20 2019-03-11 FLIR Unmanned Aerial Systems AS Depth measurement system
CN106291521A (zh) * 2016-07-29 2017-01-04 广东欧珀移动通信有限公司 基于mems移动的测距方法、装置和移动终端
CN106289160A (zh) * 2016-07-29 2017-01-04 广东欧珀移动通信有限公司 测距方法和装置
CN106405531B (zh) * 2016-09-05 2019-05-07 南京理工大学 基于图像处理技术的被动毫米波辐射成像系统测距方法
JP6606234B1 (ja) 2018-07-13 2019-11-13 Dmg森精機株式会社 測定装置
CN110958416A (zh) * 2019-12-06 2020-04-03 佳讯飞鸿(北京)智能科技研究院有限公司 目标跟踪系统和远程跟踪系统
CN113128516B (zh) * 2020-01-14 2024-04-05 北京京东乾石科技有限公司 边缘提取的方法和装置
CN111473766A (zh) * 2020-04-01 2020-07-31 长沙艾珀科技有限公司 一种智能拍照测距离的方法
CN112577475A (zh) * 2021-01-14 2021-03-30 天津希格玛微电子技术有限公司 一种能够有效降低功耗的视频测距方法
CN114840086A (zh) * 2022-05-10 2022-08-02 Oppo广东移动通信有限公司 一种控制方法、电子设备及计算机存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000121354A (ja) * 1998-10-16 2000-04-28 Japan Aviation Electronics Industry Ltd 距離計測方法
WO2008155961A1 (ja) * 2007-06-21 2008-12-24 Konica Minolta Holdings, Inc. 測距装置
CN101074876A (zh) * 2007-06-26 2007-11-21 北京中星微电子有限公司 一种自动测量距离的方法及装置
JP2009180536A (ja) * 2008-01-29 2009-08-13 Omron Corp 画像処理装置、画像処理方法、およびプログラム
KR101284798B1 (ko) * 2009-12-08 2013-07-10 한국전자통신연구원 단일 카메라 영상 기반의 객체 거리 및 위치 추정 장치 및 방법
CN101858742A (zh) * 2010-05-27 2010-10-13 沈阳理工大学 一种基于单相机的定焦测距方法
CN102175228B (zh) * 2011-01-27 2012-09-05 北京播思软件技术有限公司 一种基于移动终端的测距方法
CN102706319B (zh) * 2012-06-13 2015-05-13 深圳泰山在线科技有限公司 一种基于图像拍摄的距离标定和测量方法及系统
US9025859B2 (en) * 2012-07-30 2015-05-05 Qualcomm Incorporated Inertial sensor aided instant autofocus
CN103017730B (zh) * 2012-11-30 2015-04-01 中兴通讯股份有限公司 一种单摄像头测距的方法和系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016183723A1 (zh) * 2015-05-15 2016-11-24 华为技术有限公司 一种测量的方法及终端
KR20170128779A (ko) 2015-05-15 2017-11-23 후아웨이 테크놀러지 컴퍼니 리미티드 측정 방법 및 단말기
CN107532881A (zh) * 2015-05-15 2018-01-02 华为技术有限公司 一种测量的方法及终端
US10552971B2 (en) 2015-05-15 2020-02-04 Huawei Technologies Co., Ltd. Measurement method, and terminal

Also Published As

Publication number Publication date
EP2927634A2 (en) 2015-10-07
US20150310619A1 (en) 2015-10-29
CN103017730A (zh) 2013-04-03
WO2013174354A3 (zh) 2014-01-16
CN103017730B (zh) 2015-04-01
EP2927634B1 (en) 2019-02-27
EP2927634A4 (en) 2016-01-20

Similar Documents

Publication Publication Date Title
WO2013174354A2 (zh) 一种单摄像头测距的方法和系统
US10242454B2 (en) System for depth data filtering based on amplitude energy values
US8179449B2 (en) Portable electronic apparatus including a display and method for controlling display content based on movement of the display and user position
JP6102648B2 (ja) 情報処理装置及び情報処理方法
TWI531929B (zh) 基於影像來識別觸控表面的目標接觸區域之技術
TWI442328B (zh) 在影像捕捉裝置中之陰影及反射識別
US9900500B2 (en) Method and apparatus for auto-focusing of an photographing device
EP2991027B1 (en) Image processing program, image processing method and information terminal
CN112005548B (zh) 生成深度信息的方法和支持该方法的电子设备
WO2017126172A1 (ja) 情報処理装置、情報処理方法、及び記録媒体
US8614694B2 (en) Touch screen system based on image recognition
JP2015526927A (ja) カメラ・パラメータのコンテキスト駆動型調整
WO2014199786A1 (ja) 撮影システム
US20150009119A1 (en) Built-in design of camera system for imaging and gesture processing applications
WO2017221741A1 (ja) 画像処理装置、画像処理方法、イメージセンサ、情報処理装置、並びにプログラム
WO2017147748A1 (zh) 一种可穿戴式系统的手势控制方法以及可穿戴式系统
CN111724412A (zh) 确定运动轨迹的方法、装置及计算机存储介质
WO2018198499A1 (ja) 情報処理装置、情報処理方法、及び記録媒体
TW201112742A (en) Electronic device, control method, program, and image capturing system
JP2015118442A (ja) 情報処理装置、情報処理方法およびプログラム
JP2011217202A (ja) 画像取得装置
EP4026092A1 (en) Scene lock mode for capturing camera images
JP7293362B2 (ja) 撮影方法、装置、電子機器及び記憶媒体
CN112995502B (zh) 图像处理方法、装置和电子设备
TW201419087A (zh) 微體感偵測模組及其微體感偵測方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13794121

Country of ref document: EP

Kind code of ref document: A2

WWE Wipo information: entry into national phase

Ref document number: 14648690

Country of ref document: US

Ref document number: 2013794121

Country of ref document: EP